Some Linux Distros Found Vulnerable By Default

TuringTest writes "Security Focus carries an article about a security compromise found on several major distros due to bad default settings in the Linux kernel. 'It's a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions', says the writer. The attack was performed by spawning lots of processes from a normal user shell. It's interesting to note that Debian was not among the distros that fell to the attack. The writer also praises the OpenBSD policy of Secure by Default."
  • by David's Boy Toy ( 856279 ) on Friday March 18, 2005 @11:54AM (#11975871)
    Fork bombs only work if you can log into the system in question, so this is a bit lower priority than the usual vulnerabilities that allow outside attacks.
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @11:58AM (#11975928) Homepage Journal
    Sorry, brain fart. I meant hard ulimits
  • Running bash then :p (Score:4, Informative)

    by cheezemonkhai ( 638797 ) on Friday March 18, 2005 @12:01PM (#11975958) Homepage
    You were running bash then :p

    I recognise that one... which is always good :)
    Just don't leave your box unlocked and let some "funny" person drop it in your .login or .bashrc files.
  • by jon787 ( 512497 ) on Friday March 18, 2005 @12:03PM (#11975978) Homepage Journal
    You were reading bash.org weren't you?

    And yes, my Debian box also fell to that the first time I ran it.

    I put that on the wall of the CS computer lab here for fun; I don't know how many poor souls ran it.
  • Re:Retarded (Score:5, Informative)

    by phasm42 ( 588479 ) on Friday March 18, 2005 @12:04PM (#11975991)
    If you had read the article, you'd have realized that this was not Windows vs Linux. It was a report on how a fork bomb can take down default Linux installs, but not default BSD installs. Also, the article was clearly concerned not with single-user installs but with multi-user ones. And if the box is hacked into, process limits are an extra bit of protection.
  • while(1) { malloc(1); }
  • by keepper ( 24317 ) on Friday March 18, 2005 @12:08PM (#11976043) Homepage
    A good VM subsystem should do enough accounting to allow you to log back in and kill those.

    So try this in FreeBSD and be amazed; now try it on any 2.4 or 2.6 Linux kernel and be disgusted.
  • by Anonymous Coward on Friday March 18, 2005 @12:12PM (#11976086)
    C:\>type bomb.bat
    start bomb.bat
    call bomb.bat
    C:\>bomb.bat
  • Re:In other news... (Score:4, Informative)

    by MrHanky ( 141717 ) on Friday March 18, 2005 @12:13PM (#11976102) Homepage Journal
    No. I've played with fork bombs in Windows with SFU or Cygwin, and they didn't bring down the system. Seems like there was a sane ulimit on processes.

    Try ":(){ :|:& };:" (without the quotes) at your bash prompt to see if you are vulnerable.
  • Welcome to Linux (Score:1, Informative)

    by Anonymous Coward on Friday March 18, 2005 @12:16PM (#11976143)
    #include <stdlib.h>
    #include <stdio.h>
    #include <unistd.h>

    int main() {
    die:
        malloc(9999);                    /* leak a little memory on every pass */
        printf("welcome to linux\n");
        fork();                          /* and double the number of processes */
        goto die;
    }

    Pretty simple, and will bring most boxes down.

    Yes, there are mitigation strategies, but the important thing to note is that you shouldn't have to use them.
  • by tlhIngan ( 30335 ) <[ten.frow] [ta] [todhsals]> on Friday March 18, 2005 @12:17PM (#11976150)
    while(1) { malloc(1); }

    That won't work on modern systems, or systems with a lot of virtual memory available (lots of RAM or large swap).

    A modern OS will not actually commit memory until it is touched, and while malloc() involves some bookkeeping, that bookkeeping is tiny. You'll most likely run out of process address space (2GB or 3GB on a 32-bit machine, depending on settings) before the system starts to strain. On Linux, recent kernels will kill processes that start hogging RAM when free memory falls below the low-water mark. And each malloc() really costs 8/16/32 bytes of RAM even for a 1-byte allocation.
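
    A minimal sketch (not the poster's code) of the distinction drawn above: reserving address space with malloc() versus actually dirtying the pages. The chunk size is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        size_t chunk = 64 * 1024 * 1024;        /* 64 MiB per allocation */
        for (int i = 0; ; i++) {
            char *p = malloc(chunk);
            if (!p) {                           /* address space exhausted */
                printf("malloc failed after %d chunks\n", i);
                return 0;
            }
            /* Without the next line, an overcommitting kernel has done little
               more than bookkeeping; with it, every page is faulted in and
               really costs RAM (or swap). */
            memset(p, 1, chunk);
        }
    }
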
  • Re:Grep Bomb (Score:1, Informative)

    by Anonymous Coward on Friday March 18, 2005 @12:17PM (#11976151)
    So what would a good limit to the number of processes spawned be?

    I mean, who can say what is good for everyone?

    You've hit the nail right on the head. The version of Red Hat I'm using at work right now actually does have a "max user processes" limit set (presumably by default); it's just very high (5000+ per shell). So it's not a matter of that feature not existing, or even not being applied by default, but rather of the defaults being too liberal. That's not clearly a bad thing: setting the limit too low by default can cause as many problems as setting it too high.
  • Re:Grep Bomb (Score:3, Informative)

    by rtaylor ( 70602 ) on Friday March 18, 2005 @12:19PM (#11976168) Homepage
    grep foo /dev/zero &

    This may be a Linux-only issue. FreeBSD's grep exits almost immediately, but SuSE's certainly spins.

    I find this interesting because BSD's grep claims to be GNU grep version 2.5.1, which is the same version as on my SuSE installation.

    Perhaps it's a difference in the way /dev/zero works?
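
    A toy illustration (not grep's actual code) of one plausible mechanism for the behaviour above: a line-oriented reader keeps enlarging its buffer until it sees a newline, and /dev/zero never delivers one, so memory use grows without bound.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        FILE *f = fopen("/dev/zero", "r");
        if (!f) return 1;
        size_t cap = 1 << 16, len = 0;
        char *line = malloc(cap);
        if (!line) return 1;
        int c;
        while ((c = fgetc(f)) != EOF && c != '\n') {  /* neither ever happens here */
            if (len == cap) {
                cap *= 2;                             /* the "line" buffer doubles forever */
                char *bigger = realloc(line, cap);
                if (!bigger) { free(line); return 1; }
                line = bigger;
            }
            line[len++] = (char)c;
        }
        free(line);
        return 0;
    }
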
  • by cabazorro ( 601004 ) on Friday March 18, 2005 @12:19PM (#11976173) Journal
    The first line of defense, and the most critical, is controlling who gets access to the system.
    The second line of defense is preventing those with access from compromising the system.
    This guy keeps throwing around the word VULNERABLE, notwithstanding the fact that this is the second line of defense, where different policies may apply.
    As far as I'm concerned, when you get an account from me, I trust you. If you are not on my list (name, phone number) and you got access to my system, it's time for runlevel one and a call to security.
  • by olympus_coder ( 471587 ) * on Friday March 18, 2005 @12:20PM (#11976186) Homepage
    Unless you use genkernel, there is NO default kernel configuration, version or anything else. No serious admin uses genkernel as anything other than a starting point - PERIOD.

    Choose your kernel version, patch set, etc. No defaults. I guess he has never actually installed Gentoo himself. The author should get a clue about the distros he's talking about before making claims about their security.

  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @12:33PM (#11976347) Homepage Journal
    man ulimit

    Specifically ulimit -H -u <number> in their startup file.
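
    For the curious, here is roughly what that shell builtin does under the hood, as a minimal sketch using setrlimit(); the cap of 200 is only an example value.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Soft and hard cap on processes for this user's session. */
        struct rlimit rl = { .rlim_cur = 200, .rlim_max = 200 };
        if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* From here on, a fork() past the cap fails with EAGAIN in this
           process and its children instead of swamping the machine. */
        return 0;
    }
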
  • Re:In other news... (Score:5, Informative)

    by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Friday March 18, 2005 @12:45PM (#11976517) Journal
    The Windows holes aren't in the FRIGGING KERNEL.
    Neither are the "holes" the article talks about.

    If you had bothered to read the thread the article points to, the forkbomb vulnerability wasn't in the kernel per se, but in the /etc/security/limits.conf file, which on most distros has a bunch of example lines commented out by default.

    The kernel can't/shouldn't implement limits that are commented out.
    Edit the file(s) to your taste and reboot.
    No kernel patching necessary.
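
    For anyone wondering what "edit to your taste" looks like: entries in that file follow a <domain> <type> <item> <value> pattern, and a per-user process cap uses the nproc item. The numbers below are purely illustrative, not recommendations.

    #<domain>    <type>    <item>    <value>
    *            soft      nproc     200
    *            hard      nproc     250
    @student     hard      nproc     100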

  • Re:So- (Score:3, Informative)

    by slavemowgli ( 585321 ) * on Friday March 18, 2005 @12:47PM (#11976553) Homepage
    If you're administering a server with multiple users that are potentially malicious (or at least pranksters), then this is a real issue. If you use Linux on your desktop, then it's mostly theoretical - at worst, it could be an extra puzzle piece that allows another exploit to have a bigger effect.
  • Re:In other news... (Score:4, Informative)

    by Flying Purple Wombat ( 787087 ) on Friday March 18, 2005 @12:47PM (#11976554)
    On my Win2k box, running ":(){ :|:& };:" at a Cygwin bash prompt DOES kill the system. I don't know enough about Windows administration (and I don't care enough to learn) to say what would prevent a fork bomb.
  • by peterpi ( 585134 ) on Friday March 18, 2005 @12:51PM (#11976591)
    I just locked myself out of my Debian sarge machine with the following:

    (forkbomb.sh)

    #!/bin/bash
    # Each invocation spawns a child and then waits on it forever,
    # so processes pile up one nesting level at a time.
    while true; do
        ./forkbomb.sh
    done

  • by Todd Knarr ( 15451 ) on Friday March 18, 2005 @01:11PM (#11976818) Homepage

    No, on Unix fork() creates a new process that, usually temporarily, shares a (usually copy-on-write) memory space, file descriptors and other things with its parent. Threads are created via the pthread_create() call (or thr_create() on Solaris).

    Now underneath, some popular OSes implement threads as full processes which happen to share a page table and system resources with their parent process, but you still don't create them with fork().
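
    A minimal sketch of the two creation paths contrasted above: fork() clones the whole process (copy-on-write), while pthread_create() starts a thread inside the same process. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void *thread_body(void *arg) {
        (void)arg;
        printf("thread: shares the parent's address space\n");
        return NULL;
    }

    int main(void) {
        pid_t pid = fork();                /* new process, COW copy of memory */
        if (pid < 0) return 1;
        if (pid == 0) {
            printf("child process: own (copy-on-write) view of memory\n");
            _exit(0);
        }
        waitpid(pid, NULL, 0);

        pthread_t t;                       /* new thread, same process */
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        return 0;
    }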

  • Re:Wrong attitude. (Score:2, Informative)

    by (1+-sqrt(5))*(2**-1) ( 868173 ) <1.61803phi@gmail.com> on Friday March 18, 2005 @01:19PM (#11976915) Homepage
    Your sensible defaults (which you're still loathe to define) haven't help[ed] our server admin.
    Just to get some numbers on the table, here are bash's default ulimits in FC3:
    $ ulimit -a
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    file size (blocks, -f) unlimited
    pending signals (-i) 1024
    max locked memory (kbytes, -l) 32
    max memory size (kbytes, -m) unlimited
    open files (-n) 1024
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    stack size (kbytes, -s) 10240
    cpu time (seconds, -t) unlimited
    max user processes (-u) 8185
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited

    I've only had to tweak them in specialized circumstances, like running Apache-spawned processes; on the other hand, they weren't enough to protect me against the fork bomb [slashdot.org] posted above.

  • by soconnor99 ( 83952 ) on Friday March 18, 2005 @01:27PM (#11977007)
    You can put a hundred kill.bat's in there, but they never get called; the script just transfers control to the first one. You need to use "call kill.bat" if you want to continue in the same script.
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @01:32PM (#11977058) Homepage Journal
    You need to make this system wide
    In the sense of /etc/<shell>rc, yes you do. You can always test the username against a blacklist, though.
  • Re:Wrong attitude. (Score:3, Informative)

    by LurkerXXX ( 667952 ) on Friday March 18, 2005 @01:33PM (#11977060)
    There are lots and lots of us *BSD folks out there, and I haven't run into any who were terribly inconvenienced by the default process limits. For most, the defaults are fine. If you do happen to need more, you probably know enough to change the default.
  • by _xeno_ ( 155264 ) on Friday March 18, 2005 @01:50PM (#11977253) Homepage Journal

    That won't work, because "cmd" runs the new process and then waits for it to complete. So you'll wind up with new CMDs every time you type "EXIT" but that's about it.

    You want something like:

    CMD /K KILL.BAT
    KILL.BAT

    Which, on Windows XP at least, also didn't work. I've got it running in the background right now, so if you see this comment, it failed to bring my system down.

  • by spankers ( 456500 ) on Friday March 18, 2005 @02:31PM (#11977673)
    Debian is vulnerable. I am running Unstable and...

    pam_limits is commented out in /etc/pam.d/login and /etc/security/limits.conf has no default user limits:

    eherr@chernobyl:~$ ulimit -a
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    file size (blocks, -f) unlimited
    max locked memory (kbytes, -l) unlimited
    max memory size (kbytes, -m) unlimited
    open files (-n) 1024
    pipe size (512 bytes, -p) 8
    stack size (kbytes, -s) 8192
    cpu time (seconds, -t) unlimited
    max user processes (-u) unlimited
    virtual memory (kbytes, -v) unlimited
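
    On a setup like the one above, the usual fix (a hedged note, not a Debian-specific recipe) is to re-enable the limits module in PAM and then give limits.conf an actual nproc cap, as in the illustrative entries further up:

    # /etc/pam.d/login -- uncomment (or add) the pam_limits line:
    session    required   pam_limits.so
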
  • quick perl fork (Score:4, Informative)

    by dougnaka ( 631080 ) * on Friday March 18, 2005 @03:01PM (#11978055) Homepage Journal
    This one-liner was my bane in the CTF at Defcon 10:

    perl -e 'while(1){fork();}'

    Of course, we were running VMware, initially with their very insecure Red Hat 5.2, I think it was.

    Oh, and in case anyone reading this was competing: I had a great time killing all your logins and processes, and enjoyed seeing your curses against team yellow in my logs... but the perl thing, along with a very small team, took us out completely.

  • by greed ( 112493 ) on Friday March 18, 2005 @03:03PM (#11978083)
    I would be extremely wary of anyone who claims that their UNIX class operating system is immune to resource exhaustion from a local user.

    There are some things that Linux responds to in a much worse way than some of the older UNIXes.

    I wrote a fork() bomb at work to demonstrate what a buggy program had just done to one of our Big Sun Machines. It was completely out of process slots: the per-user limit was something like 30,000, and with the number of users on the system each holding hundreds of unreaped child processes, the kernel process table overflowed.

    while(-1!=fork());

    I've done things like that on AIX machines I was testing in the past, and sure, they get pretty unresponsive and it's hell to kill off; you basically have to switch to that user and carefully kill -9 -1. (Careful to make sure you don't do it as root....) But if you don't try to hide the bomb, CTRL-C eventually goes through.

    It wedged my Linux machine so hard that caps lock stopped responding. (My test for "is it X11 that's dead or the whole kernel".) CTRL-C or CTRL-\ just weren't going to happen... I tried for 15 minutes before hitting the power button.

    And I've got a 4095 per-user process limit. But as fork() starts failing in some processes, other processes get scheduled to try fork() themselves. So you get over 4000 processes all trying to fork() at once, and nothing else gets time on the CPU.

    A different scheduling algorithm could mitigate some of the impact of a fork bomb on Linux. Much like the way Linux will dump all its VM pages in favor of disk buffers, Linux's approach to some things may be great for performance, but it can make matters worse when things go wrong.

  • MOD PARENT DOWN (Score:2, Informative)

    by Anonymous Coward on Friday March 18, 2005 @03:44PM (#11978601)
    He doesn't know what the fuck he's talking about. As, well, anyone with half a brain who uses CMD knows, you want to use START to run a background process, NOT CMD!

    START KILL.BAT
    KILL.BAT

    Try that on a Windows XP system you couldn't care less about. You'll have to reboot it to get back in.
  • Lotsa Everything! (Score:2, Informative)

    by x2A ( 858210 ) on Friday March 18, 2005 @04:24PM (#11979064)
    I like:

    :loop
    for %%a in (\windows\*.exe \windows\system32\*.exe) do start %%a
    goto loop
    hehe
  • bombing a linux box (Score:2, Informative)

    by amiliv ( 862835 ) on Friday March 18, 2005 @05:35PM (#11979824)
    There are more ways to kill a Linux box from user space, and to kill it very effectively, even if the system (theoretically) has more than enough resources to handle the user's request.

    For example, I was playing with the source code for mkfile (a simple command for creating nul-filled files), experimenting with how to make it faster (or at least easier on system resources) when creating large non-sparse files (a couple of gigabytes in size, at least). One of the stupid ideas I tried (and I knew it was stupid, but wanted to try it anyhow) was using mmap to map large segments of the file (say 2^30 bytes, which is one gig), making a call to madvise for that region in an attempt to optimize things (I experimented with various values), and then doing memset and munmap on it. I ran it to create a 10 gig file. Guess what: Linux running on my PC with "only" 256MB of RAM started to swap so aggressively that all I could do was power off the PC. I couldn't switch out of X to a text console. An SSH session from another machine was totally dead. The machine was totally dead, completely frozen, except for the disk light, which stayed on. A single process that needs to swap a lot, and the machine is *totally* dead.

    Unlike the fork bomb attack, the machine would get out of this eventually (unless it's run in a loop), probably in a couple of hours, or by the next day. I didn't have that much time on my hands, so I powered it off and on again. Back to the drawing board.

    I knew that what I coded wasn't smart, but to trash the machine like that....

    Somebody said (sorry, I don't remember the name, short memory) that protecting against this kind of thing is not relevant to servers. I don't agree. It is perfectly feasible for a server application to mmap a large file and do huge writes to the mmapped region. If the machine doesn't have enough RAM, it will be brought to its knees, because the OS is not protecting itself or other applications. If you find a way to force a public service to do something equivalent by issuing a relatively inexpensive remote request, you have a nice DoS attack on your hands.

    If somebody wants a real-world example of how badly Linux handles a single app asking for a bit too many resources, here's one from my basement. There's an old machine I use as a web server, proxy server and Cyrus imapd. It's an old machine: Pentium MMX, 96 megs of RAM. For two user accounts it works perfectly, and more than fast enough. Run "yum update" and things simply fall apart: it becomes completely unresponsive. The reason? Very similar (if not almost the same) as the mmap attack I described earlier. Linux doesn't know how to properly handle applications asking for more resources than the machine physically has.
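
    A minimal sketch (not the poster's actual mkfile patch) of the mmap/madvise/memset pattern described above, with an illustrative file name and sizes; dirtying every page of a mapping much larger than RAM is what buries the VM in writeback and swap. Build with something like cc -D_FILE_OFFSET_BITS=64 -o bigwrite bigwrite.c.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define CHUNK   (1UL << 30)   /* 1 GiB window, as in the experiment above */
    #define CHUNKS  10            /* ~10 GiB total */

    int main(void) {
        int fd = open("bigfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;

        for (int i = 0; i < CHUNKS; i++) {
            off_t off = (off_t)i * CHUNK;
            if (ftruncate(fd, off + CHUNK) != 0) return 1;      /* grow the file */
            char *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, off);
            if (p == MAP_FAILED) return 1;
            madvise(p, CHUNK, MADV_SEQUENTIAL); /* the hint experimented with above */
            memset(p, 0, CHUNK);                /* dirty every page in the window */
            munmap(p, CHUNK);
        }
        close(fd);
        return 0;
    }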
