Exploiting Wildcards On Linux/Unix

An anonymous reader writes: DefenseCode researcher Leon Juranic found security issues related to using wildcards in Unix commands. The topic has been discussed in the past on the Full Disclosure mailing list, where some people saw this more as a feature than a bug. There are clearly a number of potential security issues surrounding this, so Mr. Juranic provided five practical exploitation examples that stress the risks accompanying the practice of using the * wildcard with Linux/Unix commands. The issue manifests itself through specific options of chown, tar, rsync, etc. By using specially crafted filenames, an attacker can inject arbitrary arguments into shell commands run by other users, including root.
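To make the mechanism concrete, here is a minimal sketch of the tar variant cited in the article (run in a throwaway directory; the filenames and the shell.sh payload are hypothetical stand-ins). The shell, not tar, expands the *, so the crafted names arrive as ordinary command-line arguments that GNU tar then parses as options:

    $ cd "$(mktemp -d)"
    $ echo 'id > owned.txt' > shell.sh               # stand-in payload
    $ touch admin.php ado.php
    $ touch -- '--checkpoint=1' '--checkpoint-action=exec=sh shell.sh'
    $ tar cf archive.tar *                           # glob expands into the two --checkpoint*
                                                     # names; GNU tar treats them as options
    $ ls owned.txt                                   # proof that shell.sh was executed
    owned.txt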
  • Question... -- ? (Score:5, Informative)

    by beh ( 4759 ) * on Friday June 27, 2014 @09:14AM (#47332419)

    Who does NOT use -- in their scripts, if they're safety conscious?

            rm -i -- *

    Problem solved?

    Normal programs should stop processing options after a (standalone) "--" and take everything following it as regular parameters. getopt and similar libraries handle this automatically.

    I really wouldn't class the "use of wildcards" as a security risk - the security risk is the developer who doesn't know what he's doing.
    Would command line handling be a security risk if someone added a --superuser-rm option to his code and executed "rm -rf /" as root immediately afterwards?
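
    For illustration, a minimal sketch of the convention using the shell's getopts builtin (the toy script and its option are made up):

            #!/bin/sh
            # toy.sh: -v is the only option; everything after "--" is treated as an operand
            verbose=0
            while getopts v opt; do
                case $opt in
                    v) verbose=1 ;;
                esac
            done
            shift $((OPTIND - 1))        # getopts stops at the first non-option or at "--"
            for f in "$@"; do
                [ "$verbose" -eq 1 ] && echo "processing $f"
                printf 'operand: %s\n' "$f"
            done

    Invoked as ./toy.sh -v -- -f *, the script sees -f (and whatever the glob expands to) as operands rather than options.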

    • drop tables *

      • # rm -rf *.*

        (I actually saw a Windows-centric guy do that once as root while he was learning Linux. The look of horror on his face as the entire box began to delete itself was hilarious...)

        • Unless you were, say, in /etc, this wouldn't really do much harm. The only file containing a . in my / is initrd.img, which, even if it weren't a symlink, is easy to regenerate.
          • Re: (Score:3, Insightful)

            by Anonymous Coward

            Depends on the version of Unix that you're using. There are a lot of non-GNU variants of rm that will happily resolve .. and traverse it. In effect it becomes rm -rf /

          • Most wouldn't run it in /, but rather in some odd subdirectory, in which case it will stumble across "..", work its way up, then find "." - whereupon most *nixes will immediately devour ./usr, ./etc, ./var, and suchlike.

            I think they finally fixed this behavior in recent distro releases, but the incident in question happened in 2006-ish; can't recall exactly if it was an RHEL 2.x box or an older FreeBSD one, but it was hella fun to watch.

      • Oh, thought of another one, just to mess with other admins:

        # chattr +i /*.*

        • by Lorens ( 597774 )

          Oh, thought of another one, just to mess with other admins:

          # chattr +i /*.*

          Wouldn't notice until kernel upgrade time:

          $ ls -d /*.*
          /initrd.img /initrd.img.old /vmlinuz.old

    • Question... -- ? (Score:4, Informative)

      by Marneus68 ( 2329096 ) on Friday June 27, 2014 @09:24AM (#47332489)
      After years of using command line programs daily I had never heard of -- before today. It was never brought up in school, nor did I see any specific thread / blog post on the subject. So to answer your question, I don't. I've never heard about it before. Where did you learn about that?
      • Where did you learn about that?

        RTFM[anpages.] It's literally in the system documentation. Granted, not all commands have such an option. Knowing which do is your responsibility. Arguably, all commands should have such an option.

      • Re:Question... -- ? (Score:5, Interesting)

        by locofungus ( 179280 ) on Friday June 27, 2014 @09:38AM (#47332621)

        Back in the (iirc) bsd 4.2 days, su was a suid shell script - at least on the machines I was using at the time.

        Set up a symlink to su called -i

        $ -i
        # rm -- -i
        #

        There was a security bug handling suid shell scripts where the user was changed and then the #! interpreter was run, i.e. /bin/sh -i

        and you got an interactive root shell :-)

        Was very informative when the 'script kiddies' (although I don't recall that term existing in those days) had symlinks called -i in their home directory that they didn't know how to delete ;-)

      • Re: (Score:2, Informative)

        by PIBM ( 588930 )

        Are you running commands as root on stuff when you don't know where it came from?

        If you absolutely have to run the command in a folder in which someone else has upload/file-creation rights, then at least use ./* if the tool doesn't support --.

        This is not an issue if you work recursively on the directory holding whatever they want, which should cover most of the situations. A bad admin can run queries that are as dangerous or worse pretty fast!

        Always be cautious when running someone else's shell script; that's even more dangerous...
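
        A quick illustration of the ./* suggestion (hypothetical filenames):

            $ touch -- '-rf' important.txt
            $ echo *
            -rf important.txt             # a bare glob hands rm an option-looking name first
            $ echo ./*
            ./-rf ./important.txt         # the ./ prefix means nothing can be parsed as an option
            $ rm ./*                      # removes both files; no flags are injected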

        • Always be cautious when running someone else's shell script; that's even more dangerous...

          If you aren't capable of auditing a script from an untrusted source before you run/use it, then don't run it or use it.

          Seriously.

          I grab (cut+paste) script bits from online when I'm lazy, but I always take the time to audit the chunk of text and ensure that it doesn't do anything dumb before I use/incorporate it. Doing this gives me two benefits: First, I can double-plus ensure that it doesn't do anything I don't want it to, and second, I learn a bit about the person who wrote it (and in some cases, I discover a

      • by TheCarp ( 96830 )

        About as I would expect. A fellow admin and I were recently talking about the disown command and how after more than a decade on the job we are still finding out about commands that have existed since we were kids running around on the playground.

        Most admins find out about -- after they run into a situation where they accidentally created a file with a name like "-f". Go ahead, try to delete a file named "-f" any other way.

        It works in many unix commands; "--" is a very common "end of options" signal.

        • by Bazman ( 4849 )

          Any other way? How about this way:

          rm ./-f

        • Re:Question... -- ? (Score:4, Informative)

          by fnj ( 64210 ) on Friday June 27, 2014 @10:36AM (#47333191)

          Most admins find out about -- after they run into a situation where they accidentally created a file with a name like "-f". Go ahead, try to delete a file named "-f" any other way.

          rm ./-f
          Is the most dead-simple way of doing it and is portable to non-gnu-based systems, although even BSD has the double-dash option nowadays.

          And there is always the old standby of insulating the operation from bash command line expansion:
          perl -e 'unlink "-f";'

          You could also, within a minute or so, write and compile a literally two-line C program to remove it. I don't understand the mystery.
          #include <unistd.h>
          int main() { unlink("-f"); }

      • Can't speak for the OP, but I once accidentally created a file named '-r'. Trying to remove it eventually led me to discover '--', but I don't expect most people to know about it.

      • by beh ( 4759 ) *

        Sorry if that appears harsh - but sometimes it pays to read manuals and try to understand what you're doing and how the stuff works.

        I don't exactly remember when I first learnt it - but I DID already know it when I was told about it during my CS BSc degree course (probably 1st or 2nd year - which would place it at about 1998-2000).

        If you need to code stuff "securely", you need to understand how stuff works -- I don't think of myself as a particularly apt security coder or hacker - I mainly specialise on in

        • Re: (Score:3, Interesting)

          by Anonymous Coward

          That is B.S.

          If someone reads that, they do not think 'security'. They think it is an escape hatch for dealing with files that start with -, and that is where they file it in their head. You also have to understand about '*' and think about how the two would work together.

          This is exactly why computer code is insecure.

          • by beh ( 4759 ) *

            I did not say that purely reading about -- should tell you about security on its own. IIRC my original encounter with -- was a colleague setting me a teaser on trying to find out how to delete a file called '-f'; and me first having to figure out that 'rm ??' reads like "delete all files with two-character filenames" (of which there was only the '-f' file), but not seeing that the ?? actually gets expanded to all the two-character filenames by the shell; rm never sees the '??' but instead only sees the filenames -

      • Uh, it's only been around since the mid-90s if not earlier.

      • This "--" is only for some commands that decide to implement it. It basically means "nothing after this is an argument", and it's common with GNU utilities (probably the default) but is more rare in other tools and Unix variants.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      So why would the expected method not be the default? This is exactly how security problems are born.

    • While that is indeed the solution, it is also true that it is too easy to forget. Perhaps one could modify all commands to require the use of the "--" separator, or to warn if it's not present, at least if some environment variable is set. That could be very helpful for people trying to write more secure code.
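
      One way to experiment with that idea without patching the tools themselves would be a per-command shell wrapper; a purely hypothetical sketch (the WARN_MISSING_DASHDASH variable is invented for illustration):

          # in ~/.bashrc: warn whenever rm is invoked without a "--" separator
          rm() {
              if [ -n "$WARN_MISSING_DASHDASH" ]; then
                  local arg found=0
                  for arg in "$@"; do
                      [ "$arg" = "--" ] && found=1
                  done
                  [ "$found" -eq 0 ] && echo "rm: warning: no '--' separator before operands" >&2
              fi
              command rm "$@"
          }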

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      "the security risk is the developer that doesn't know what he's doing."

      Not the hacker who does know what he is doing.

    • by godrik ( 1287354 )

      Nope, you cannot just use --, because many commands do not understand --.

      Here is an article by dwheeler (a frequent slashdotter; often cited for his technique countering the trusting trust problem) about filenames.
      http://www.dwheeler.com/essays... [dwheeler.com]

      I believe he is mostly right. We should move to file systems that do not allow "stupid" names and be done with it.

      • by tlhIngan ( 30335 )

        Nope, you cannot just use --, because many commands do not understand --.

        "--" is understood by getopt(), a library function to parse command line options. getopt() can simply print a warning out stderr when it's called and comes across un-escaped operatnds.

        Bonus points if it defaults to printing errors when it's run in a non-TTY environment (e.g., shell script).

        Extra credit for programs like rm that pre-scan their arguments to see if they match files - so if you passed in "-r" in the option area without a --
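
        A rough shell rendering of that pre-scan idea (illustrative only; a real tool would do this inside its option parser):

            # warn about option-looking arguments that also name existing files
            for arg in "$@"; do
                case $arg in
                    --) break ;;              # everything after "--" is an operand anyway
                    -*) [ -e "$arg" ] && \
                            echo "warning: '$arg' looks like an option but matches a file" >&2 ;;
                esac
            done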

    • Because a lot of options take a filename as an argument, such as the "-o" option of any compiler or linker. How do you pass one of those without it getting expanded as a wildcard?
    • The real bug here is the same as in SQL injection attacks: A failure to safely distinguish between program and data!

      I.e., when doing chown usr:grp *.php, the wildcard globbing should escape any special characters, particularly white space and wildcard characters.

      This is the same idea as when you use prepare(... ?,?) on any sql statement with replaceable parameters, then execute() with the relevant dynamic values.
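
      In shell terms, the closest equivalent to that prepare/execute separation is keeping the "program" part fixed and forcing the expanded names to be plain operands; a small sketch (usr:grp is just a placeholder):

          # "--" acts as the boundary between program and data
          chown usr:grp -- *.php

          # or let find hand the names directly to the command, so the glob's output
          # never gets a chance to be parsed as options
          find . -maxdepth 1 -name '*.php' -exec chown usr:grp -- {} +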

      Terje

  • by Anonymous Coward on Friday June 27, 2014 @09:17AM (#47332439)

    Posting the answer to this useless story that was posted to FD:

    Date: Thu, 26 Jun 2014 12:55:42 -0700
    From: Michal Zalewski

    > We wanted to inform all major *nix distributions via our responsible
    > disclosure policy about this problem before posting it

    I'm not sure how to put it mildly, but I think you might have been
    scooped on this some 1-2 decades ago...

    Off the top of my head, there's a rant about this behavior in "The
    Unix-Haters Handbook", and there are several highly detailed articles
    by David Wheeler published over the years (e.g.,
    http://www.dwheeler.com/essays/filenames-in-shell.html).

    Yup, it's a counterintuitive behavior that leads to security problems.
    The odds of changing the semantics at this point are very slim. Other
    operating systems have their own idiosyncrasies in this area - for
    example, Windows is not a lot better with parameter splitting and
    special filenames.

    /mz

    • Re: (Score:3, Interesting)

      by gweihir ( 88907 )

      It may be counter-intuitive for people who have very little experience with a UNIX command line. Everyone else has at some point run into the issue of being able to create, but not easily delete, a file named "-v" or the like. But people with very little UNIX command-line experience have zero business writing security-critical software that uses the command-line tools!

      This is a complete non-issue. Incompetent people will usually screw security up and this is just one of the countless ways to do it.

    • I just want to state that UNIX does not behave like that; it's GNU that does.

  • Is the wildcard expanded by the shell in PowerShell?
    • I assume from this article that Linux replaces * with filenames before the command sees it. AFAIK in DOS/Windows the wildcard is handled by each specific command. dir * displays the same listing as just plain dir does, while passing dir a bunch of directory names will display the contents of those directories (like ls does... I guess that explains that behavior! It always confused me). PowerShell, at least as far as Get-ChildItem, seems to work the same way as dir (except it does not take multiple director
    • What I like about PowerShell is that it runs on Windows. Windows really sucks at command line.
      • Well, yeah. The object-oriented approach is pretty clever, for example. You do not have to sweat over spaces in file names breaking your scripts and things like that.
    • Is the wildcard expanded by the shell in PowerShell?

      No. This class of attacks will not work against PowerShell (nor for plain old DOS for that matter). The problem is the combination of text-centric shell scripting and shell expanded wildcards.

  • by gweihir ( 88907 ) on Friday June 27, 2014 @09:33AM (#47332581)

    Really, this is well-known, non-surprising and will not happen to anybody with a security mind-set. Of course it will happen in practice. But there are quite a few other variants of code injection (which this basically is) that the same people will get wrong. Complete input sanitisation is mandatory if you need security. I mean, even very early Perl-based CGI mechanisms added taint-checking specifically for things like this. If people cannot be bothered to find out how to pass parameters from an untrusted source securely, then they cannot be bothered to write secure software.

    The fix is not to change the commands. The fix is to replace people who mess up something this elementary with people who actually understand security. Sorry, you cannot have developers who are cheap and competent at the same time, and even less so when security is important.

    • by Anonymous Coward

      Wake up. Not everyone is a developer. Not everyone has even 2 minutes of unix philosophy.

      My users are scientists, and they get to trash their home space here. These types of issues are most likely to happen when they are writing a script and it creates files named after what should have been options.

      My job isn't to teach them unix, it's to keep them happy and productive. They make mistakes, I clean them up and help them through the frustration of things going wrong.

  • by quietwalker ( 969769 ) <pdughi@gmail.com> on Friday June 27, 2014 @09:37AM (#47332613)

    I remember reading about this in the 1991 release of "Practical Internet and Unix Security" from O'Reilly. I'm pretty sure they even gave examples. They also laid out a number of suggestions to mitigate risk, including not specifying the current path, ".", in the root user's PATH so they must explicitly type the location of an executable script, and so on.

    They also pointed out that some well-behaved shells eliminate certain ease-of-use-but-exploitable features when they detect that a privileged user is running them. Even on systems where that's not the standard, the default .bashrc or equivalent files often set up aliases for common commands, disable features like wildcard matching or color codes (which could be used, if you're very tricky, to match a filename's color to the background color of the screen, among other things), apply the path restriction listed above, and many, many others.
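
    For illustration, the sort of defensive defaults such a root shell profile might set (these particular lines are common conventions, not quoted from the book):

        alias rm='rm -i'        # prompt before every removal
        alias cp='cp -i'
        alias mv='mv -i'
        set -o noglob           # disable wildcard expansion in the interactive shell
        PATH=/usr/sbin:/usr/bin:/sbin:/bin   # no "." in root's PATH
        export PATH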

    It's really hard to secure shell accounts on systems, no matter how you try. Is this article just proof that the current generation of unix admins is rediscovering this? Should I be shaking my fist and telling the kids to get off my lawn? This was old news over two decades ago.

    • To turn off wildcard expansion permanently: utilize the 'noglob' option. To set the 'noglob' option, execute the below command at the bash shell: bash-prompt#> set -o noglob; More often, the requirement is to turn off pathname expansion. This is especially useful if a wildcard is part of an argument to a program. http://ksearch.wordpress.com/2... [wordpress.com] Use set -f in such cases. Execute 'set -f' as below: bash-prompt#> set -f; To rese
  • I understand why this works and I understand the need to sanitize user input, but this is dumb, even if there are workarounds. It's obvious what the intent of "tar cf archive.tar *" is supposed to be; it shouldn't be treating file names as additional arguments. Anyone actively using this "feature" for anything legitimate is dumb too.

    This seems very similar to the whole "we need some other language than C" argument. Sure, you *can* make secure code with zero overflow vulnerabilities, but damn near all sof

    • by itzly ( 3699663 )

      It's obvious what the intent of "tar cf archive.tar *" is supposed to be; it shouldn't be treating file names as additional arguments

      The problem is that the * expansion is done by the shell, and the shell doesn't know the difference between file names and arguments.

      • No, the problem is with GNU tar...

        it sees cf file file file --whatever
        and it uses --whatever as an option

        on unix (not linux) it also sees cf file file --whatever
        and tries to put the file "--whatever" into the tar archive.

        linux (gnu) broke stuff which worked for ages in the unix world

      • by AC-x ( 735297 )

        The problem is that the * expansion is done by the shell, and the shell doesn't know the difference between file names and arguments.

        But it could very easily make them explicit filenames by prefixing them with ./, and I can't think of anything that would break.

    • by DarkOx ( 621550 )

      The problem is until we are willing to essentially throw everything out, baby bath water and all, we can't really fix it.

      Take the C argument. The issue is really again one of input validation; buffer overflows happen ultimately because of problems with input validation. Yes, a language that enforces bounds checking everywhere and takes memory management out of the hands of the programmer solves the specific problem of the buffer overflow; it does not make it much more secure though in the grand scheme o

  • by jones_supa ( 887896 ) on Friday June 27, 2014 @10:00AM (#47332829)

    Systems where user data can accidentally get mixed into control commands are dangerous. In addition to this shell trick, another example would be HTML, where you have to be careful not to let raw HTML through in your guestbook messages so that visitors can't inject HTML into them.

    With competent and careful system administrators you can avoid problems, but it's still kind of a fragile design in my opinion.

  • This would seem to be a problem with universal implications, and one that largely depends on the malefactor having local access.
  • by gb7djk ( 857694 ) * on Friday June 27, 2014 @10:11AM (#47332943) Homepage

    Er.. most of the exploits are only possible if one is root and/or the directory is writable for some other user (e.g. leon in this case).

    Since one is root, one can do anything anyway so why bother with all this misdirection? If someone leaves world writable directories lying around (especially without the sticky bit set), then they deserve everything they get. Or is this some kind of "trap the (completely) unwary sysadmin" wake up call? If I see some strange named file (especially if I know I didn't put it there) I would investigate very, very carefully what is going on. I can't be alone in this - surely?

    • In addition, they're only possible to use as a privilege escalation exploit, not to gain entrance into the system in the first place. So this is mainly only useable on multi-user systems, of which there aren't very many anymore.
    • No, you don't need root access. Let's say that you are in a group called "students", which has R/W permission for /work/students and all its subdirectories. You are in directory /work/students, and you want to remove all the files from that directory. Now some wiseass has created a file called "-rf" and you unknowingly end up destroying all the subdirectories too. This happens because the shell expanded the asterisk, instead of the "rm" program. The "rm" program happily interprets the "-rf" as an option rather than a filename.
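
      A minimal sketch of that scenario (scratch directory and hypothetical names):

          $ cd "$(mktemp -d)"                # standing in for /work/students
          $ mkdir subdir && touch subdir/keep.txt a.txt
          $ touch -- '-rf'                   # the booby trap
          $ rm *                             # the shell expands this to: rm -rf a.txt subdir
          $ ls
          -rf                                # everything else, including subdir, is gone; no prompt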
      • by gb7djk ( 857694 ) *
        But, but (splutter), since you have group access, then you can do rm -rf * anyway, so why bother with the cleverness? Especially as you have to leave a file there with your username handily stamped on it, ready for some more clueful person to catch you.
        • Yeah, you have a point there. But there's all kinds of scenarios. Maybe an upset employee leaves such a bomb somewhere and it is triggered after he has left the company.
    • by AC-x ( 735297 ) on Friday June 27, 2014 @11:52AM (#47334067)

      Since one is root, one can do anything anyway so why bother with all this misdirection?

      Because you can trick a more privileged user into executing commands for you by writing files into your own folder. Most of the examples given were of admin housekeeping tasks run against a user-writable folder.

    • Er.. most of the exploits are only possible if one is root and/or the directory is writable for some other user (e.g. leon in this case).

      Since one is root, one can do anything anyway so why bother with all this misdirection? If someone leaves world writable directories lying around (especially without the sticky bit set), then they deserve everything they get. Or is this some kind of "trap the (completely) unwary sysadmin" wake up call? If I see some strange named file (especially if I know I didn't put it there) I would investigate very, very carefully what is going on. I can't be alone in this - surely?

      The point is that this can be used to trick a root user into issuing what he believes is a safe command. The combination of the text-reinterpreting shell and specially crafted file names means a seemingly innocent command can end up granting the attacker (the creator of the specially crafted file) root access on the system.

      It doesn't help that some commands that seem harmless on the surface, like find, pack a number of dangerous options that can be used to execute shell scripts or commands, or remove files.

  • Unpacking 'shar' archives via 'sh' considered dangerous.

  • by Gunstick ( 312804 ) on Friday June 27, 2014 @10:17AM (#47332993) Homepage

    This is because the linux commands do not respect what the manual says:
    man rm...

    rm [OPTION]... FILE...

    but in reality it's rather:

    rm [OPTION|FILE]...

    whereas on other unix systems it works as expected: first the options, then the arguments
    HP-UX
    rm *
    rm: DIR1 directory

    Solaris
    rm *
    rm: DIR1 directory

    So screw the GNU tools, they mess things up for the "old unix sysadmins"

    Here is a nice linux/unix trap:
    x=a
    y="rm z"
    f=$x $y

    So you expect f to contain: a rm z
    not really...
    z: No such file or directory
    so the rm actually was executed

    f=$x is a per-command environment variable assignment, so $y becomes the command that gets executed...
    And that one works on any unix/linux
    Recently patched in chkrootkit (CVE-2014-0476)

    • by AC-x ( 735297 )

      This is because the linux commands do not respect what the manual says:
      man rm...

      rm [OPTION]... FILE...

      but in reality it's rather:

      rm [OPTION|FILE]...

      And what happens if the malicious filename is first in the list?

  • ...so wouldn't it be more accurate to say that computers, like bulldozers, can be dangerous in the hands of malicious, ill-informed, inattentive, or incompetent users? If you know of any of these archetypes, try to make them smarter, but don't allow them root privileges to anything taller than an ankle-high weed. Give them some locked-down version of Windows, without admin privileges, lots of monitoring tools and features. Consider helmets, knee-pads and child safety locks.
    • by AC-x ( 735297 )

      If computers were conceived to execute user commands, then why is a command for matching file and directory names returning them in such a form that they become executable parameters, when they could easily be explicit filenames by adding ./ at the beginning?

      Is making what should be basic and safe housekeeping functions like chmod * and tar * dangerous really something you actually want in Linux?

  • Let me check my dictionary for the definition of idiot:

    1. n: A user, especially a superuser, who uses * as an argument without first checking to see what * expands into.
    2. n: A user who leaves his directories world-writable so others can put random garbage in them.

    The one line summary for this story is bad things happen to people who use a command without knowing what the command does.

    • by AC-x ( 735297 )

      Right, so an admin tarballing the content of a user's folder is an idiot because he didn't check to make sure the shell he was using wouldn't pass any of the file names as options instead of, you know, file names?

      The one line summary for this story is bad things happen to people who use a command without knowing what the command does.

      The definition of the unix wildcard when used in the shell is:

      "The character * is a wildcard and matches zero or more character(s) in a file (or directory) name."

      Note that the definition doesn't include anything about translating filenames into other kinds of executable parameters.

  • Probably because anybody who's used the various Bourne-style shells for a while considers it a feature, not a bug. This is a case where the Principle of Least Surprise comes up with different answers for novice users and for experts: "What? A * can expand into an unintended command argument?" "Yeah, what *else* would it do - the shell is just globbing, it doesn't know for sure what the command will do with the parameter".

    Who asked for this feature? Can anyone give me a legitimate use case for "tar cf archive.tar *" evaluating as

    tar cf archive.tar admin.php ado.php --checkpoint=1 "--checkpoint-action=exec=sh shell.sh"

    instead of

    tar cf archive.tar "./admin.php" "./ado.php" "./--checkpoint=1" "./--checkpoint-action=exec=sh shell.sh"

  • Back in '83, a friend challenged me to remove a file named "-rf *", without causing collateral damage.

  • Article should be called 'Exploiting wildcards on GNU'.

    All of the examples in the original document work because GNU software can intersperse options and filenames freely. None of those exploits would work in BSD userland, for example.

"All the people are so happy now, their heads are caving in. I'm glad they are a snowman with protective rubber skin" -- They Might Be Giants

Working...