Security Software Linux

Some Linux Distros Found Vulnerable By Default 541

Posted by Zonk
from the change-the-settings dept.
TuringTest writes "Security Focus carries an article about a security weakness found in several major distros, due to bad default settings in the Linux kernel. 'It's a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions', says the writer. The attack was performed by spawning lots of processes from a normal user shell. It is interesting to note that Debian was not among the distros that fell to the attack. The writer also praises the OpenBSD policy of Secure by Default."
This discussion has been archived. No new comments can be posted.

  • Grep Bomb (Score:5, Interesting)

    by cheezemonkhai (638797) on Friday March 18, 2005 @11:54AM (#11975866) Homepage

    So what would a good limit to the number of processes spawned be?

    I mean, who can say what is good for everyone?

    That said, if you think the fork bomb is good, grep bombs are more fun and particularly good for silencing the mass of Quake 3 players in an undergraduate lab:

    'grep foo /dev/zero &'. Run about 5 of them and watch the box grind to a screaming halt, then eventually recover.

    Oh hang on, did I just discover a new exploit? :P
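
    For what it's worth, a minimal sketch of containing this kind of thing with an address-space limit (the 100 MB figure is purely illustrative, not a recommendation):

    # cap the virtual memory of this shell and everything it spawns (value is in KB)
    ulimit -v 102400

    # /dev/zero never produces a newline, so grep's line buffer just keeps growing;
    # with the cap in place grep dies with a memory allocation error instead of
    # dragging the whole box into swap
    grep foo /dev/zero &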
  • Retarded (Score:4, Interesting)

    by 0123456 (636235) on Friday March 18, 2005 @11:55AM (#11975881)
    Sorry, but this article seems pretty retarded to me. Windows is insecure because people can use IE bugs to install scumware that takes over your entire machine... Linux is insecure because ordinary users who are legitimately logged into your machine can fork off as many processes as they want? Huh?

    Sure, maybe if you're running a server that allows remote logins, you want to restrict how many processes a user can run. But on a single-user system, I want to be able to run as many processes as I choose, not be restricted by the distribution author's ideas of what's good for me.
  • by lintux (125434) <slashdot AT wilmer DOT gaast DOT net> on Friday March 18, 2005 @11:55AM (#11975884) Homepage
    I really wonder what kind of Debian installation he runs. Just a couple of weeks ago I had to reboot my Debian box after some experimenting with an obfuscated fork bomb. Won't work again now that I set some ulimits, but they're not there by default.

    In case anyone is interested, here's the obfuscated fork bomb: :(){ :&:;};:
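
    For anyone squinting at that one-liner, here is the same bomb written out readably (the name "bomb" is purely illustrative; don't run this anywhere you care about):

    bomb() {
        bomb &   # fork another copy into the background...
        bomb     # ...and keep recursing in the foreground
    }
    bomb         # kick it off; every call adds yet more copies until the box chokes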
  • "Secure By Default"? (Score:3, Interesting)

    by EXTomar (78739) on Friday March 18, 2005 @11:58AM (#11975925)
    Doesn't OpenBSD still install 'ftpd' by default? Although it is not turned 'on', it is still on the file system, ready to be exploited, and needs to be rigorously patched unless you take steps to remove it. Doesn't this seem like a dubious definition?

    I'm all for making special install kernels and distros "out of the box" as hardened as possible. I would love to see many distros do a "paranoid" configuration. There are plenty of things OpenBSD does right, but that doesn't put it above criticism. Just like Linux and every other operating system out there, they can still strive to do better.
  • by nullset (39850) on Friday March 18, 2005 @12:03PM (#11975979)
    I came up with the idea of a ping/fork DoS attack (mostly as a joke)

    In pseudocode:
    while (true) {
    ping(target)
    fork()
    }

    I seriously thought of posting this to a few script kiddie sites, so the kiddies could crash themselves long before the pinging does any damage :)

    --buddy
  • by Anonymous Coward on Friday March 18, 2005 @12:12PM (#11976089)
    Linux users don't know what accounting is yet, because they haven't realised what *BSD is (except maybe that "OpenBSD is really secure, innit, i'm going to use it for a firewall ain't I").
  • by gowen (141411) <gwowen@gmail.com> on Friday March 18, 2005 @12:14PM (#11976111) Homepage Journal
    But if the system becomes unusable, surely it is a bug.
    I run code *all the time* that causes my system to become completely unresponsive, by running huge numerical simulations without enough physical RAM. I'd be mightily annoyed if the kernel told me I wasn't allowed to do this.

    Users use resources. If an admin wants to starve their untrustworthy users of resources, they can (and, if they have a lot of untrustworthy users, it's highly recommended), but there is simply no compelling reason why it should be turned on by default.
  • by deathazre (761949) <mreedsmith@gmail.com> on Friday March 18, 2005 @12:16PM (#11976138)
    ... where gentoo had a default kernel

    (can we PLEASE not bring genkernel into this? it sucks.)
  • Re:Big Fuss (Score:3, Interesting)

    by agraupe (769778) on Friday March 18, 2005 @12:17PM (#11976148) Journal
    The difference being that a default install of *anything*, except possibly OpenBSD, shouldn't be assumed to be in a situation where there might be users on it that you don't trust completely. For my personal use of linux, all I need is a box that is secure from hackers, not users.
  • by Anonymous Coward on Friday March 18, 2005 @12:21PM (#11976195)
    I generally agree with what you're saying, but there should be "some" way for the system to differentiate between useful processes spawning tasks to achieve a computational goal, versus an aberrant process run amok. In one formerly well-known OS, each user had a process limit that was something sane. Special users for certain programs needed their process limit raised. This was a per-user setting, and "nice" programs always warned when the process limit needed to be exceeded, at which time the sysadmin could raise it if appropriate.
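
    The rough Linux analogue of that scheme is the soft/hard limit split (a sketch; the numbers are invented):

    ulimit -Hu 256   # hard ceiling on processes; once lowered, only root can raise it again
    ulimit -Su 64    # soft limit that users actually hit first
    ulimit -Su 128   # a user who legitimately needs more can raise their own soft limit,
                     # but never above the hard ceiling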
  • by fr1kk (810571) on Friday March 18, 2005 @12:23PM (#11976218) Homepage
    If you open up Cygwin on Windows XP, and run:

    :(){:|:&};:

    it will bring your system to a halt (mine, at least).

    Currently I've got a 2.8GHz box with 512MB RAM, XP SP2.

    I couldn't even get into my Task Manager, and I got about 10 virtual memory errors. Then I rebooted and tried with the Task Manager open. Once the VM graph shot straight up past 1GB, it stopped refreshing (4 seconds).
  • Re:In other news... (Score:3, Interesting)

    by caluml (551744) <slashdot@nOsPAM.spamgoeshere.calum.org> on Friday March 18, 2005 @12:23PM (#11976222) Homepage
    Try a WinExec bomb instead. You can bring down the system in 2-3 minutes.

    Is there a ulimit equivalent for Win32?

    Just as an afterthought, why don't people run public Windows servers and allow people to log in, like Unix shell servers?

  • by Xugumad (39311) on Friday March 18, 2005 @12:25PM (#11976235)
    And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

    *shuffles nervously* So, out of curiosity, for my... err... desktop... how do I stop this exactly? :)
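
    For the desktop case, assuming your distro uses pam_limits (most do), a couple of lines in /etc/security/limits.conf will do it at the next login; the numbers here are only illustrative:

    # /etc/security/limits.conf
    #<domain>  <type>  <item>  <value>
    *          soft    nproc   200    # per-user soft cap on processes
    *          hard    nproc   300    # hard ceiling; a fork bomb just gets
                                      # "fork: Resource temporarily unavailable"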
  • by gowen (141411) <gwowen@gmail.com> on Friday March 18, 2005 @12:25PM (#11976246) Homepage Journal
    but there should be "some" way for the system to differentiate between useful processes spawning tasks to achieve a computational goal, versus an aberrant process run amok.
    Yes, there should. But, sadly, that problem is one that is generally considered hard by computer scientists. I didn't follow it closely, but the debates around the heuristics for the Linux kernel's out-of-memory process killer tell me that it really isn't easy to do.

    Maybe I'm just biased, because last year, I had a horrible time overcoming some hard-coded limits (on stack size) when trying to run some Fortran code on a SunOS box.
  • by gosand (234100) on Friday March 18, 2005 @12:31PM (#11976308)
    I was working at my first job, and we had 10 people to a Sun server. I was writing shell scripts, and wrote one called fubar that was this:

    while [ 1 ]; do
        fubar &
    done

    Then I did chmod +x fubar, and typed "fubar &"

    oops.
    The system load started climbing and everyone else on the machine started bitching. I thought it would crash, and went over to the local admin and fessed up. Of course, we were all interested in what would happen. Nobody could get in to kill it, and the processes were spawning so fast that we couldn't catch it. It was taking forever just to log in. But the load leveled off, and it wasn't going to crash. The admin was going to reboot it, but then I said "wait a second!" I went back to my window that was open. Know what I typed?

    rm -f fubar

    I suppose you could make it nastier by having the script create a copy of itself named after the process ID, so that you couldn't stop it just by removing one file:

    cp $0 .$$
    ./.$$ &

    This will leave all the process files lying around, but you could code something to remove them. But this gets the point across.
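
    (For the record, the cleanup is a one-liner too, assuming the copies landed in the current directory:)

    rm -f ./.[0-9]*   # delete the leftover .<pid> copies once the processes are gone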

  • Re:In other news... (Score:1, Interesting)

    by Anonymous Coward on Friday March 18, 2005 @12:34PM (#11976364)
    AFAIK, no, the TCP/IP stack ISN'T in the Windows kernel. It is a "network protocol plugin", so to speak, much like IPX/SPX, NetBEUI, etc.
  • by ArbitraryConstant (763964) on Friday March 18, 2005 @12:40PM (#11976455) Homepage
    "???@mylinuxbox ~ $ grep foo /dev/zero
    grep: /dev/zero: Cannot allocate memory
    "

    I don't think they're killed automatically. They seem to be running out of memory.

    Of course, the same thing on my OpenBSD box doesn't run out of memory... Either the GNU grep has a memory leak or the BSD grep has a check for something (long lines?).
  • by IDontAgreeWithYou (829067) on Friday March 18, 2005 @12:59PM (#11976693)
    I used to work at a university that gave all students, faculty, and staff Unix accounts. A CS professor told us a story of a CS student in an operating systems course who wrote a program that forked so much they had to reboot the server. When some of the administration found out why the server had to be unexpectedly rebooted, they wanted to punish the student, but thankfully the IT guys were able to save him. He definitely should not have had the ability to write such a program. Of course, after hearing this story everyone tried it; it didn't cause any problems.
  • by rudabager (702995) on Friday March 18, 2005 @01:11PM (#11976826) Homepage
    Put this in Notepad, save it as kill.bat, then run it:

    cmd
    cmd
    cmd
    cmd
    cmd
    cmd
    cmd
    cmd
    cmd
    cmd
    cmd
    kill.bat
    kill.bat
    kill.bat

    I can drop my Athlon 2000, 512MB system in, say, 10 seconds. Increase the number of cmd's and kill.bat's for added effect :)
  • by ArbitraryConstant (763964) on Friday March 18, 2005 @01:22PM (#11976950) Homepage
    Please explain how the ftpd binary (not suid) can be used to exploit the system in ways that the user otherwise could not.

    Taking away the ftpd binary wouldn't stop the user from doing exactly the same thing by some other means. For example, they could simply download the source and compile a new one by themselves. Or use Perl. Or compile a binary somewhere else and download that.
  • by puke76 (775195) on Friday March 18, 2005 @01:32PM (#11977056) Homepage
    The Gentoo discussion thread, with hard and soft resource limit recommendations, is here. [gentoo.org]
  • by Anonymous Coward on Friday March 18, 2005 @01:33PM (#11977063)
    The way that virtual memory is accounted for makes it totally impossible to provide for a "normal user" (e.g. someone running GNOME) and absolutely prevent exhaustion of system resources on a modest desktop machine. The situation in which the user runs an OpenGL screensaver (colossal VM size due to userspace mapping of, say, 256MB of graphics card memory) can't be distinguished in this way from the situation in which they allocate 256MB of RAM-backed VM and then strobe through it touching one byte at a time. One is normal usage; the other brings the machine to its knees.

    Here's another example... it sounds OK to allow a user to run 24 processes, right? And it sounds OK to allow them to allocate 100MB of memory, right? But that's 2.4GB of memory you just permitted! And there's no way (due to lack of per-page inspection for performance reasons) for the kernel to distinguish if they've really got 2.4GB or if each process is sharing 98% of that memory with the rest.

    Machines that are locked down enough to actually _prevent_ (not detect, but prevent) resource exhaustion are usually quite useless. Very few scenarios require you to do this; it's often enough to prevent trivial fork bombs and put up a big warning saying "we catch you breaking it, we make you pay".
  • by wsanders (114993) on Friday March 18, 2005 @01:45PM (#11977214) Homepage
    In the past I've been awakened in the dead of night by my IDSes detecting a fork bomb, and it's always been self-inflicted - some dumbass^H^H^H^H^H^H^Hmisinformed programmer not understanding why it's important to check the return status of a fork() or set alarms to kill off hung sockets.

    I found some earlier kernels ignored RLIMIT_CPU, RLIMIT_RSS, and RLIMIT_NPROC, and setting the CPU and RSS limits in Apache was ineffective. This was in the Red Hat 9 / 2.4.20 kernel days. I have not researched this in a year or so. If all this stuff works now, let me know so I can insert a "ulimit -Hu 10" in future startup scripts as a "courtesy" to inattentive programmers.
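
    As a sketch of what that "courtesy" might look like (the daemon path and numbers are placeholders):

    #!/bin/sh
    # fragment of an init/startup script
    ulimit -Hu 10    # hard cap on processes for anything started from here on
    ulimit -St 300   # soft CPU-time cap in seconds; a runaway loop gets SIGXCPU
    exec /usr/sbin/some-daemon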
  • by Wybaar (762692) on Friday March 18, 2005 @01:55PM (#11977294)
    Everybody makes mistakes. Having a high-but-finite and changeable limit prevents a mistake made by a trusted user from getting out-of-control while still allowing that user to get close to the threshold of control.

    For instance, imagine writing a recursive program to calculate the factorial of a number (note: no error checking included, and I'm letting the factorial be 1 for k <= 1):
    function y = factorial(x)
    if x <= 1
        y = 1;
    else
        y = x * factorial(x-1);
    end

    Now assume that instead of typing (x-1) in the recursive call on the next to last line, the user types (x+1). If the user calls this with x > 2 and you don't have some limit on the number of recursive calls a function can make, this program will never end (unless you exceed the stack limit, which is not a graceful way to exit.) If there is a limit, the program will hit that limit and error, giving the programmer a chance to catch their typo. If the user really was interested in blowing the stack and they have the authority to change the recursion limit, they can do so if they want ... but they have to explicitly point the recursive gun at their foot and explicitly pull the trigger. They won't get shot accidentally.
  • Re:Run this: (Score:3, Interesting)

    by Foolhardy (664051) <csmith32&gmail,com> on Friday March 18, 2005 @03:10PM (#11978149)
    Hmmm, here on my Windows box:
    jobprc -prclimit 50 c:\cygwin\bin\bash
    bash-2.05b$ :(){ :|:&};:
    bash: fork: Permission denied
    ... (about 100)
    bash: fork: Permission denied
    Three seconds later it's over, with the last two bashes stalled, waiting for Ctrl-C. The same thing happens under SFU, but it displays fewer "Permission denied" messages. The rest of the machine didn't even notice.

    That's what job limits [microsoft.com] are for. Shameless plug: get jobprc, a GPL'd command-line job creator (written by me) here [comcast.net].
  • by schon (31600) on Friday March 18, 2005 @03:13PM (#11978182)
    You need to make this system wide or the end user WILL change it on you.

    How? I thought by default that you have to be root to change the ulimit (either up or down)
    karl@spanky:~$ uname -a
    Linux spanky 2.4.29 #6 Thu Jan 20 16:30:37 PST 2005 i686 unknown unknown GNU/Linux
    karl@spanky:~$ ulimit -H -u 15000
    -bash: ulimit: max user processes: cannot modify limit: Operation not permitted
    I tried the :(){:|:&};: test on Slackware 10, and it failed: load rose for about 30 seconds, then went back to normal, and the command was halted.
  • by joxeanpiti (789529) on Friday March 18, 2005 @03:28PM (#11978399) Journal
    It's easy to modify the "fork bomb" script for Windows:

    start cmd
    start cmd
    start cmd
    start cmd
    start cmd
    start cmd
    start cmd
    start cmd
    start cmd
    start kill.bat
    start kill.bat
    start kill.bat
    start kill.bat
    start kill.bat
    start kill.bat

    Try it! My machine froze in about 15 seconds.
  • by Nevyn (5505) * on Friday March 18, 2005 @04:27PM (#11979103) Homepage Journal
    • Sendmail -- most BSDs use it ... craps itself under load, and is a major PITA to clean up the queue.
    • Attack the disks: have multiple procs reading/writing data as fast as possible (you don't need much disk space; writing small files is fine).
    • Use recursive SCM_RIGHTS to tie up huge numbers of PF_LOCAL sockets (it isn't accounted after SCM_RIGHTS).
    • Find a simple setuid() app and fork/exec bomb through that.
    • BIND is fairly simple to DoS from the local machine.
    • HTTP is also likely to be easy to DoS, due to needing 1 proc per connection ... where you, as the attacker, don't need anything like that.
    • portmap almost never does any auth checks on "assign service X to port Y" calls ... so that would easily take out NFS, FAM and/or any other RPC services you are running (redirect the ports to HTTP for even more fun).
    • Run something like: (while :; do mkdir a; cd a; done) ... it takes out more than a few directory-walking applications (often including backups).

    There's probably some more obvious ones, if I thought about it a bit.

    I guess I wouldn't mind if Fedora came with defaults that stopped a forkbomb on my box, if I was stupid enough to run one ... but if they fucked it up and set it too low, I guarantee I'd be very pissed.

  • by agraupe (769778) on Friday March 18, 2005 @06:20PM (#11980286) Journal
    I know you meant this as a joke, but it brings up an interesting question: why is SSH enabled by default on most linux distros? As Linux moves toward higher desktop use, does it really make sense to have remote access enabled by default? Surely if you need or use this functionality (as I do), you can determine how to activate it yourself, no?
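
    (If you'd rather have it off, it's typically a one-liner on a 2005-era SysV-init box; service names differ between distros, so treat these as examples:)

    # Red Hat / Fedora style
    service sshd stop && chkconfig sshd off

    # Debian style
    /etc/init.d/ssh stop && update-rc.d -f ssh remove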
  • by planckscale (579258) on Friday March 18, 2005 @06:30PM (#11980368) Journal
    I'm not 100% sure, but I believe both a Knoppix Live CD and a fresh Knoppix hard drive install have this disabled by default. To which distros in particular are you referring?

  • by JabberWokky (19442) <slashdot.com@timewarp.org> on Friday March 18, 2005 @07:44PM (#11980912) Homepage Journal
    Thinking about it after my post, I came up with the following criteria for this test on OSes:

    Fail - The OS grinds to a halt and after waiting five minutes, there is no recourse other than cutting power and rebooting.

    Pass (one/a) - The OS grinds, but the user can navigate to a feature (console, Control-ESC in KDE, etc) that allows the user to quickly kill the offending process(es).

    Pass (one/b) - The OS grinds or not, but regardless automatically kills the offending process. I consider this to be a worse level than the following.

    Pass (two) - The OS grinds and automatically offers the user a method to kill the offending process(es). (KDE had something like that at one point. It may still). Sometimes you may want to let it run.

    Pass (three) - The OS does not grind and upon approaching resource limits, offers the user an option to shut down the process or let it run.

    And finally...

    Pass (four) - The OS does not grind and upon approaching resource limits, offers the user an option to increase limits, potentially by moving the process to another machine or allocating more systems to the process. Presumably the migration would be automatic (and transparent) if the resource limits were higher than the local machine.

    Obviously, "user" may be a combination of user and admin in certain cases (multi-user machine, accessing other machines off a local cluster, etc).

    --
    Evan

"Mach was the greatest intellectual fraud in the last ten years." "What about X?" "I said `intellectual'." ;login, 9/1990

Working...