Some Linux Distros Found Vulnerable By Default

TuringTest writes "Security Focus carries an article about a security weakness found in several major distros due to bad default settings in the Linux kernel. 'It's a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions', says the writer. The attack was performed by spawning many processes from a normal user shell. It is interesting to note that Debian was not among the distros that fell to the attack. The writer also praises the OpenBSD policy of Secure by Default."
  • by madaxe42 ( 690151 ) on Friday March 18, 2005 @11:51AM (#11975835) Homepage
    Kittens are vulnerable to forks by default as well - you can easily get at the kernel if you just - oh, hang on, a different kind of fork, you say?
  • by Anonymous Coward on Friday March 18, 2005 @11:52AM (#11975841)
    Thank god I use Windows, I'm safe!
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @11:53AM (#11975853) Homepage Journal
    Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug. Inability to turn that feature off would be a bug, but given that few modern Linux boxes are actually used as multi-user remote-login machines, it's a completely unnecessary overhead.

    And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).
    • by Anonymous Coward
      And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

      Hope you're not administrating any multi-user Linux boxes then, since in Linux, the quotas only deal with drive space ;)
    • Wrong attitude. (Score:5, Insightful)

      by Anonymous Coward on Friday March 18, 2005 @12:00PM (#11975942)
      All my servers have multiple users. Those users are system accounts to run different software, and I do not want any of them to be able to cause a problem for the entire server. Reasonable limits should be in place by default, and those of us who actually need higher limits for certain users can raise those limits.

      Even on a single-user desktop machine, it's nice to have limits so shitty software can't take down my entire machine. With limits I can just log in on another terminal and kill the offending program; without limits you get to reboot, and lose any work you were doing.
      • Re:Wrong attitude. (Score:3, Insightful)

        by gowen ( 141411 )
        Reasonable limits should be in place by default,
        But given that distribution/kernel vendors do not have the first idea of
        i) My hardware
        ii) How many users I want
        iii) What programs / services will be running,

        how in the name of crikey are they supposed to determine what a "Reasonable limit" would be?
    • by CaymanIslandCarpedie ( 868408 ) on Friday March 18, 2005 @12:05PM (#11976003) Journal
      Sounds much like the same reasoning MS used to use for having defaults set to a "user-friendly" setting.

      Now that it's been found in Linux, it's a "feature" ;-)

      Come on, I love Linux but the hypocrisy is a bit much ;-) It's OK to admit it was bad, or to admit MS's settings were OK, but you cannot do both.
    • I don't know how it works nowadays. But when I was new to UNIX, I would write the following program:
      #include <unistd.h>

      int main() {
          while (1)
              fork();
          return 0;
      }
      Compiling and running it would hang the box. You could ping the system, but nothing else would work.

      Ultimately, I would have to switch the box off and on again. And I remember thinking that this was a bug.

      A user should be allowed to do whatever he/she wants. But if the system becomes unusable, surely it is a bug.
      • int main() {
            while (1)
                fork();
            return 0;
        }


        On a modern Unix/Unix-like system, you often have Perl. Save yourself the effort of compiling:

        perl -e 'while(1){fork()}'

        One thing I always liked to do was run this for about a minute, hit Ctrl-C, and see how long it took until the kernel finally managed to reap all the child processes and the system returned to normal. It usually takes anywhere from 30 seconds to a couple of minutes before the system becomes responsive again.

        (It *is* a great way to get impressive lo
    • by kfg ( 145172 ) on Friday March 18, 2005 @12:20PM (#11976179)
      Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug.

      Sorry, but the ability of a mail reader to automagically run as many programs as it likes is a feature, not a bug.

      The point being that while this may, in some rare cases, be desirable, it shouldn't be the default setting, but rather something that the administrator has to enable for that rare user for which it is deemed both necessary and desirable.

      "Able" and "capable of" are not the same thing.

      It shouldn't be the responsibility of the admin to turn on every possible security feature, but rather to turn off only those ones he deems gets in the way of the functioning of his system.

      It's exactly this lackadaisical approach to security that has made Windows the hell hole that it is. It certainly puts money in my pocket trying to fix it all, over and over and over again, but I'd far rather be spending my time and earning my money doing something useful.

      Like computing.

      KFG
    • by Xugumad ( 39311 ) on Friday March 18, 2005 @12:25PM (#11976235)
      And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

      *shuffles nervously* So, out of curiosity, for my... err... desktop... how do I stop this exactly? :)
      • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @12:33PM (#11976347) Homepage Journal
        man ulimit

        Specifically ulimit -H -u <number> in their startup file.
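        If you want the cap to persist across logins, on distros that use PAM's pam_limits module (an assumption; check your setup) the usual place is /etc/security/limits.conf, e.g.:

        # /etc/security/limits.conf -- cap every non-root user at 500 processes
        # (500 is an arbitrary example value)
        *    hard    nproc    500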
    • And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

      Or you can switch back to MS-DOS. Just one process; ultimate security!
    • by Mr. Underbridge ( 666784 ) on Friday March 18, 2005 @12:40PM (#11976465)
      Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug. Inability to turn that feature off would be a bug, but given that few modern Linux boxes are actually used as multi-user remote-login machines, it's a completely unnecessary overhead.

      Right, it's a feature, but the question isn't whether it should ever be allowed, but what the default setting should be. I think the article made a pretty good case that default should be no.

      And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

      First, I think a lot of unix people would be shocked, as the writer was, to find that there's no limit by default. Second, it basically means that anyone who successfully hacks into a user account can take the machine down. That applies to your desktop machine, not just "old-style" unix servers. Third, you mention the relative scarcity of old-style servers these days - they're still more common than a user who needs to run an INFINITE number of programs. Even capping somewhere in the thousands would work, keeping anyone from being hampered in their work.

      Basically, this is a case of idea vs. reality. You want the IDEA that you can run as many programs as you want, though you'll never need to. So in REALITY, a sane cap never hurts you. However, a lack of a cap provides very REAL security problems, either from a user or from someone who manages to hack a user account. Again, you really don't want EVERY userland exploit to lead to a kernel takedown, do you?

    • by Stephen Samuel ( 106962 ) <samuel@NOsPaM.bcgreen.com> on Friday March 18, 2005 @01:14PM (#11976850) Homepage Journal
      Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug.

      Just how many regular users expect to run 20000 processes at once? (Or even 200?) When that happens it's almost always caused by a bug (or malicious activity). Right now, I have 50 user processes running. I'm a power user, but I'd probably never get blocked by a limit of 1000 unless I was doing something really weird -- and something that weird should come with instructions on modifying the kernel settings.

      Yes, it should always remain possible to set up your system so that you can run massive numbers of processes and/or threads, but the default should be to keep numbers to a dull roar in favour of system stability. People whose needs are such that they actually and legitimately want to fork massive numbers of processes are also the kinds of people who wouldn't have a hard time figuring out how to change the kernel settings to allow it.

      As such, the default should err on the side of security, but allow a knowledgable user to do whatever the heck he wants.

      Thing is, though, that limits against local resource exhaustion are difficult to set. You want to allow a user relatively free rein -- even to the point of stressing the system -- but still keep enough of a reserve that an admin can log in and shut down a user who's gone overboard. You also want to set a limit that will still be reasonable 5 years down the road when processors are 10 times as fast (and/or 20-way SMP is standard issue).

      Something to note here in Linux's favour: even though the forkbomb brought the system to its knees, it stayed up. Although it might have taken half an hour to do, an admin might actually have been able to log in and kill the offending process.

  • Grep Bomb (Score:5, Interesting)

    by cheezemonkhai ( 638797 ) on Friday March 18, 2005 @11:54AM (#11975866) Homepage

    So what would a good limit to the number of processes spawned be?

    I mean what can say what is good for everyone?

    That said, if you think the fork bomb is good, grep bombs are more fun, and particularly good for silencing the mass of Quake 3 players in an undergraduate lab:

    grep foo /dev/zero &

    Run about 5 of them and watch the box grind to a screaming halt, then eventually recover.

    Oh hang on, did I just discover a new exploit :P
    • by keepper ( 24317 ) on Friday March 18, 2005 @12:08PM (#11976043) Homepage
      A good VM should do enough accounting to allow you to log back in and kill those.

      So try this in FreeBSD and be amazed; now try it in any 2.4 or 2.6 Linux kernel and be disgusted.
      • $ grep foo /dev/zero
        grep: memory exhausted
        $ ulimit -a
        time(cpu-seconds) unlimited
        file(blocks) unlimited
        coredump(blocks) 0
        data(kbytes) unlimited
        stack(kbytes) 8192
        lockedmem(kbytes) unlimited
        memory(kbytes) 81920
        nofiles(descriptors) 1024
        processes 3071

        The default for memory is unlimited, which does indeed create a DoS "attack" for the "grep bomb" and other inadvertent application bugs.

        This is a case of bad defaults, not a kernel problem. I recommend a max physical memory of no more than 1/4 ph

    • Re:Grep Bomb (Score:3, Informative)

      by rtaylor ( 70602 )
      grep foo /dev/zero &

      This may be a Linux-only issue. FreeBSD's grep exits almost immediately, but Suse certainly spins.

      I find this interesting because BSD's grep claims to be GNU grep version 2.5.1, which is the same version as on my Suse installation.

      Perhaps it's a difference in the way /dev/zero works?
  • Yawn. (Score:3, Insightful)

    by BJH ( 11355 ) on Friday March 18, 2005 @11:54AM (#11975869)
    So what? Anybody in their right mind would have locked down their box if they're letting third parties access it remotely.

    Running around screaming "FORKBOMB! FORKBOMB! The sky's falling in!" seems to be a common pattern every few years. If you know what you're doing, it's trivial to prevent and if you don't know what you're doing, why are you running a public box?
    • Re:Yawn. (Score:4, Insightful)

      by Taxman415a ( 863020 ) on Friday March 18, 2005 @01:53PM (#11977281) Homepage Journal
      This level of fanboyness is unbelievable. Well, actually, I should not be surprised; blindness to Linux's faults is endemic here on Slashdot.

      The author is not "Running around screaming...", he is simply very surprised that a local user can exhaust a system so easily. Maybe every single admin should think of all n possible security problems every single time they take a box live, but people are human.

      Which one is worse: limits in place by default so that an admin needs to know how to raise them when necessary and the forkbomb would not work, or no limits in place and having to know to set them or else the box can be brought to its knees? Secure by default or free for all.

      I suppose you think every default install should also have telnetd enabled by default, because any admin with half a brain should know how to turn that off? Point is, admins are fallible; the default should be the lower total risk/cost option. I think which one that is is clear here.
    • Re:Yawn. (Score:3, Insightful)

      by RedBear ( 207369 )
      Stability is supposed to be one of the selling points of Linux. What exactly is the benefit in having a default setting that allows any user on any Linux desktop computer to lock up their machine just by starting up some application or script that consumes too many resources? Thanks but no thanks. I want both my server and my personal desktop computer to be able to recover from such things without a hard reboot. Whether they are malicious or just a mistake in some shell script doesn't matter. If it takes do
  • by David's Boy Toy ( 856279 ) on Friday March 18, 2005 @11:54AM (#11975871)
    Fork bombs only work if you can log into the system in question. This is a bit lower priority than your usual vulnerabilities which allow outside attacks.
  • Retarded (Score:4, Interesting)

    by 0123456 ( 636235 ) on Friday March 18, 2005 @11:55AM (#11975881)
    Sorry, but this article seems pretty retarded to me. Windows is insecure because people can use IE bugs to install scumware that takes over your entire machine... Linux is insecure because ordinary users who are legitimately logged into your machine can fork off as many processes as they want? Huh?

    Sure, maybe if you're running a server that allows remote logins, you want to restrict how many processes a user can run. But on a single-user system, I want to be able to run as many processes as I choose, not be restricted by the distribution author's ideas of what's good for me.
    • Re:Retarded (Score:5, Informative)

      by phasm42 ( 588479 ) on Friday March 18, 2005 @12:04PM (#11975991)
      If you had read the article, you'd have realized that this was not Windows vs Linux. It was a report on how a fork bomb can take down default Linux installs, but not default BSD installs. Also, the article was clearly not concerned about single-user installs, but multi-user. Or if the box is hacked into, this is an extra bit of protection.
      • Re:Retarded (Score:2, Insightful)

        by 0123456 ( 636235 )
        "It was a report on how a fork bomb can take down default Linux installs,"

        Yes, and? I don't care about fork bombs, since I don't run them on my PC... being able to run as many processes as I choose on that PC is a feature, not a flaw. I do care about having scumware remotely installed on my PC through security holes in applications and the operating system, which is a flaw, not a feature.

        Seriously, if you're letting people log onto your PC and run fork bombs, you have far greater problems than a lack of r
        • Re:Retarded (Score:5, Insightful)

          by phasm42 ( 588479 ) on Friday March 18, 2005 @12:56PM (#11976660)
          Two things: 1. Just because you don't care doesn't mean other people won't care. A lot of people (especially in a business environment) do have more than one person logging in. 2. The article is trying to point out something that Linux installs could improve on. That is all.
        • Re:Retarded (Score:3, Insightful)

          by Yaztromo ( 655250 )

          Yes, and? I don't care about fork bombs, since I don't run them on my PC... being able to run as many processes as I choose on that PC is a feature, not a flaw.

          A fork bomb doesn't necessarily have to be due to a purposeful attack. A software bug can easily cause a fork bomb by going into an endless loop launching new processes. Should this take your system completely down?

          Admittedly, you probably shouldn't be seeing such a serious bug in any release software. But what if you're a developer, and have

        • by Paradox ( 13555 ) on Friday March 18, 2005 @01:25PM (#11976988) Homepage Journal
          If you follow the link in the article to the original entry from Security Focus, [securityfocus.com] you'd see that a malicious remote user compromised a machine that was patched up to current.

          Seriously, if you're letting people log onto your PC and run fork bombs, you have far greater problems than a lack of resource limits in the default install.


          Look, you seriously misunderstand something here. Run a server long enough and it becomes very likely that, even with the latest patches, you will get attacked. If someone breaks into your box, exactly how much power do you want them to have?

          The ability to bring the machine to a screeching halt with an attack that dates back to the Land Before Time is not a feature! It is a security hole, and it's every bit as important to fix as your externally visible holes.

          Because, one of these days some cracker is going to get the drop on your box. You'd better hope your box is ready for that.
      • -5 Advocacy (Score:3, Insightful)

        by ajs ( 35943 )
        Honestly, when will Slashdot stop posting trolls as stories. This is a clear case of "BSD is better than Linux because feature X has a DEFAULT that BSD people think is wrong". There's no security implication in the sense that most people would think of it (no remote root exploit, no remote exploit, no remote priv escalation, no remote DoS, no local root exploit, and no local priv escalation... just a local DoS... if that's what you were looking for, I can show you some OpenGL code you can throw at the displ
    • And if you run some badly coded app that has an unintended fork-bomb included from a screwup in some routine? Your machine freezes. It's not going to happen often, but why let it? Why not set some sane limits, and let the user modify those limits later as they see fit?
    • Re:Retarded (Score:5, Insightful)

      by Alioth ( 221270 ) <no@spam> on Friday March 18, 2005 @12:38PM (#11976415) Journal
      *Any* local exploit is *also* a potential remote exploit (just like the IRC conversation shows). I had someone nearly pwn a box of mine by using an exploit in a buggy PHP script, then trying to elevate privileges through a local exploit.

      Had I not considered local exploits important, I'd have had one nicely hacked box.
  • by lintux ( 125434 ) <slashdot@wilRASP ... .net minus berry> on Friday March 18, 2005 @11:55AM (#11975884) Homepage
    I really wonder what kind of Debian installation he runs. Just a couple of weeks ago I had to reboot my Debian box after some experimenting with an obfuscated fork bomb. Won't work again now that I set some ulimits, but they're not there by default.

    In case anyone is interested, here's the obfuscated fork bomb:

    :(){ :&:;};:
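    For anyone squinting at that line: it defines a shell function named ":" whose body calls itself twice (once in the background), then invokes it. With a readable, hypothetical name it is just:

    bomb() {
        bomb & bomb
    }
    bomb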
  • by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Friday March 18, 2005 @11:56AM (#11975891) Homepage Journal
    A forkbomb is just a relatively simplistic way to mount a resource exhaustion attack. I would be extremely wary of anyone who claims that their UNIX-class operating system is immune to resource exhaustion from a local user. There are just too many resources that can be commandeered, and locking them all down would leave you with a system so restricted as to be nearly useless as a general computing platform.

    It must be a slow day on /. if they're reporting this as news.
    • I would be extremely wary of anyone who claims that their UNIX class operating system is immune to resource exhaustion from a local user.

      Eh? Most modern UNIX systems let you put hard limits on all the collective ways that users can consume resources, including # processes, disk space, real/virtual memory, cpu time, etc. Any administrator who is responsible for a multi-user system should have those set to "reasonable" values, and no individual user (except for the administrator of course) would be abl

  • by n0dalus ( 807994 ) on Friday March 18, 2005 @11:58AM (#11975915) Journal
    On the 3 distros listed as vulnerable, the default settings would stop any remote person from having a chance of getting a shell open on the box to perform the fork attack in the first place.
    If a person has enough access to the machine to be able to "forkbomb" it, then there are plenty of other nasty things they could do to it.
  • "Secure By Default"? (Score:3, Interesting)

    by EXTomar ( 78739 ) on Friday March 18, 2005 @11:58AM (#11975925)
    Doesn't OpenBSD still install 'ftpd' by default? Although it is not turned 'on', the fact is it's still on the file system, ready for exploit, and requires rigorous patching unless you take steps to remove it. Doesn't this seem like a dubious definition?

    I'm all for making special install kernels and distros "out of the box" as hardened as possible. I would love to see many distros do a "paranoid" configuration. There are plenty of things OpenBSD does right, but that does not excuse OpenBSD. Just like Linux and every other operating system out there, they can still strive to do better.
    • Are you on drugs? (Score:2, Insightful)

      by Anonymous Coward
      Why is the word 'on' in quotes? Yes, ftpd is part of the system. No, it is not running. No, it is not ready for exploit since, as mentioned, it's not running; and also, what vulnerabilities does it have? That's like saying OpenBSD is bad because it ships with popa3d. It's right there waiting to be exploited, if you are root, and start it up, and someone finds an exploit for it.
    • by ArbitraryConstant ( 763964 ) on Friday March 18, 2005 @01:22PM (#11976950) Homepage
      Please explain how the ftpd binary (not suid) can be used to exploit the system in ways that the user otherwise could not.

      Taking away the ftpd binary wouldn't stop the user from doing exactly the same thing by some other means. For example, they could simply download the source and compile a new one by themselves. Or use Perl. Or compile a binary somewhere else and download that.
  • by Anonymous Coward on Friday March 18, 2005 @11:59AM (#11975935)
    Unprivileged user can take down entire system by unplugging machine from power socket.
  • ... some birds fly south for the winter, my belly sometimes makes gurgling noises and jam tastes nice on toast.

    So what? Publish the vulnerabilities, patch them, move on. Sheesh..

  • by drsmack1 ( 698392 ) * on Friday March 18, 2005 @12:02PM (#11975967)
    Looks like everyone out there on slashdot thinks this is not really a problem. Remember when it was discovered that you could get into an XP installation locally with a Win 2000 boot CD? Oh, the howling that was heard.

    Here is an issue that can be done remotely with only a user account.
    • Here is an issue that can be done remotely with only a user account.

      This is a fork bomb (a DoS technique), not 100% access. With this, all your secret files remain safe.

    • Anything that requires you to have a valid user account on the machine you want to attack is, by definition, not remote.
  • by nullset ( 39850 ) on Friday March 18, 2005 @12:03PM (#11975979)
    I came up with the idea of a ping/fork DoS attack (mostly as a joke)

    In pseudocode:
    while (true) {
        ping(target)
        fork()
    }

    I seriously thought of posting this to a few script kiddie sites, so the kiddies could crash themselves long before the pinging does any damage :)

    --buddy
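    For the curious, a minimal C sketch of that pseudocode (a toy translation, not the poster's code; "target" is a placeholder host):

    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        for (;;) {
            system("ping -c 1 target");  /* lazy stand-in for a real ICMP ping */
            fork();                      /* each pass roughly doubles the number of pingers */
        }
    }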
  • If you don't upgrade your system sufficiently before giving out shell accounts, you're an idiot. If you are Joe Schmoe using it as a desktop - you're not giving out user accounts.

    Yes, it may be sad to find - but honestly people, local shell exploits exist 'out of the box' - period. It's *pretty much* unavoidable even after proper sandboxes and restrictions have been configured.

    And, as a Debian user - I am both insulted and disgusted that it was arbitrarily singled out, I assume this was because of i
  • by XO ( 250276 ) <blade,eric&gmail,com> on Friday March 18, 2005 @12:06PM (#11976008) Homepage Journal
    while(1) { malloc(1); }
    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Friday March 18, 2005 @12:17PM (#11976150)
      while(1) { malloc(1); }

      That won't work on modern systems, or systems with a lot of virtual memory available (lots of RAM or large swap).

      A modern OS will not actually commit memory until it is actually used, and while malloc() involves some bookkeeping, the overhead is very small. It's quite likely you'll run out of process address space (2GB or 3GB, depending on settings, on a 32-bit machine) before the system starts to strain. On Linux, recent kernels will kill processes that start hogging RAM when free memory falls below the low-water mark. And each malloc() really allocates 8/16/32 bytes of RAM even for a 1-byte allocation.
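      A rough illustration of the difference (a hypothetical sketch, not something to run on a box you care about): touching the memory forces the kernel to actually commit pages, so this strains the system far sooner than an untouched malloc(1) loop ever would:

      #include <stdlib.h>
      #include <string.h>

      int main(void) {
          for (;;) {
              char *p = malloc(1 << 20);   /* 1 MiB at a time */
              if (p == NULL)
                  return 0;                /* allocation refused; stop rather than spin */
              memset(p, 1, 1 << 20);       /* touch every page so it is really committed */
          }
      }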
  • by aendeuryu ( 844048 ) on Friday March 18, 2005 @12:07PM (#11976021)
    It's funny, isn't it, that on the same day we have a story about Linux distros being insecure by default, EXCEPT Debian, we have another story where Debian is being criticized for not releasing updates more often.

    Maybe, and here's a thought, just maybe, it's wise to take a decent, stable distro and perfect it, instead of taking a distro and submerging it in a state of perpetual flux with constant updates.

    Just a thought. I might be biased because it's a Debian-based distro that finally put a working Linux on my laptop. But you know what? Every now and then the bias is there for a reason...
  • Silly exploit (Score:5, Insightful)

    by SmallFurryCreature ( 593017 ) on Friday March 18, 2005 @12:08PM (#11976031) Journal
    As others have already commented this has little to do with security.

    Most linux systems are used as desktops; if you use them as a server, you don't use the defaults. Now, a user being able to crash his own system is nothing new. It ain't nice, but as long as it is the user doing it, then no problem. Now if this fork could be used to make apache explode and bring down the system, THAT would be a boo boo.

    Ideally yes, the system should not do things that bring it crashing down, but this is close to blaming a car for allowing me to plow into a wall. Not sure if I want a car/computer telling me what I can and cannot do.

    As to how to set the limits on the number of forks: maybe I got this completely wrong, but could it be that this depends entirely on your hardware? Perhaps the latest IBM mainframe can handle a few more than an ancient 386? How the hell is the distro supposed to know what I got?

    Security is other people doing stuff on my computer that I don't want and or know about. Me screwing stuff up is my business.

    BSD is very solid, this is known. It is also known that BSD has been around since long before Linux, but has been sucking its exhaust fumes ever since Linux arrived. For every story about how much more secure BSD is, there are a dozen stories about Linux actually making a mark on the world. So good. Your BSD survived a forkbomb. But why exactly was the author running a Linux desktop, then, if BSD is so much better?

    Another non-story on /. Is the internet going the way of TV?

    • "As to how to set the limits on the number of forks. Maybe I got this completly wrong but could it be that this depends entirely on your hardware? Perhaps the latest IBM mainframe can handle a few more then an ancient 386? How the hell is the distro supposed to know what I got?"

      man 2 setrlimit

      "RLIMIT_NPROC
      The maximum number of processes that can be created for the real
      user ID of the calling process. Upon encountering this limit,
      fork() fails with the error EAGAIN."

      It's part of POSIX. It should work th
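      A minimal C sketch of setting that limit programmatically (the cap of 1000 is just an illustrative value):

      #include <stdio.h>
      #include <sys/resource.h>

      int main(void) {
          /* cap the number of processes for this real user ID */
          struct rlimit rl = { .rlim_cur = 1000, .rlim_max = 1000 };
          if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
              perror("setrlimit");
              return 1;
          }
          /* from here on, fork() fails with EAGAIN once the cap is hit */
          return 0;
      }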

  • by olympus_coder ( 471587 ) * on Friday March 18, 2005 @12:20PM (#11976186) Homepage
    Unless you use genkernel, there is NO default kernel configuration, version, or anything else. No serious admin uses genkernel as anything other than a starting point - PERIOD.

    Choose your kernel version, patch set, etc. No defaults. I guess he has never actually installed Gentoo himself. The author should get a clue about the distros he's talking about before making claims about their security.

  • by cybrthng ( 22291 ) on Friday March 18, 2005 @12:25PM (#11976234) Homepage Journal
    What exactly was that point? I can "forkbomb" my car by filling it up with too much crap; I can "forkbomb" my airplane by exceeding its design limitations.

    I guess my point is that certain limits are by design, and it's up to operator training and guidance on how to work within those limits to make sure they aren't exceeded.
  • by olympus_coder ( 471587 ) * on Friday March 18, 2005 @12:28PM (#11976277) Homepage
    Security is a balance between making a computer immune to attacks and providing capabilities.

    I run several labs at a university. I don't even bother to lock the Linux side of the machines down much past the base install. My users have never tried to cause problems. I don't even use quotas.

    If someone ever does cause a problem, I'll take the lab down (causing a pretty good backlash from their fellow grad students) and fix it.

    In the meantime, I like the fact that when someone asks me "how much of X can I use?" I say: as much as you need, so long as it doesn't cause a problem. I'm never going to get mad if they run a large job that slows the machine down. I can always kill it and ask them to run it on one of the dedicated computers.

    Point is, why limit something that is only an issue if you are working against your users, instead of for them? In 99% of the installs that is the way it is (or should be).
  • by gosand ( 234100 ) on Friday March 18, 2005 @12:31PM (#11976308)
    I was working at my first job, and we had 10 people to a Sun server. I was writing shell scripts, and wrote one called fubar that was this:

    #!/bin/sh
    while [ 1 ]; do
        fubar &
    done

    Then I did chmod +x fubar, and typed "fubar &"

    oops.
    The system load started climbing and everyone else on the machine started bitching. I thought it would crash, and went over to the local admin and fessed up. Of course, we were all interested in what would happen. Nobody could get in to kill it, and the processes were spawning so fast that we couldn't catch it. It was taking forever just to log in. But the load leveled off, and it wasn't going to crash. The admin was going to reboot it, but then I said "wait a second!" I went back to my window that was open. Know what I typed?

    rm -f fubar

    I suppose you could make it nastier by having the file create a copy of itself named after the process id, so that you wouldn't be able to remove it so easily.

    cp $0 .$$
    ./.$$ &

    This will leave all the process files lying around, but you could code something to remove them. But this gets the point across.

  • by peterpi ( 585134 ) on Friday March 18, 2005 @12:51PM (#11976591)
    I just locked myself out of my Debian sarge machine with the following:

    (forkbomb.sh)

    #!/bin/bash
    while true; do
    ./forkbomb.sh
    done
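    For comparison, the classic variant backgrounds the recursive call with &, which multiplies processes exponentially instead of building a single chain of waiting parents (a sketch, not the poster's script; either way, don't run it on a machine you need):

    #!/bin/bash
    while true; do
        ./forkbomb.sh &
    done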

  • by wsanders ( 114993 ) on Friday March 18, 2005 @01:45PM (#11977214) Homepage
    In the past I've been awakened in the dead of night by my IDSes detecting a fork bomb, and it's always been self-inflicted - some dumbass^H^H^H^H^H^H^Hmisinformed programmer not understanding why it's important to check the return status of a fork() or set alarms to kill off hung sockets.

    I found some earlier kernels ignored RLIMIT_CPU, RLIMIT_RSS, and RLIMIT_NPROC, and setting the CPU and RSS limits in Apache was ineffective. This was in the Red Hat 9 / 2.4.20 kernel days. I have not researched this in a year or so. If all this stuff works now, let me know so I can insert a "ulimit -Hu 10" in future startup scripts as a "courtesy" to inattentive programmers.
  • quick perl fork (Score:4, Informative)

    by dougnaka ( 631080 ) * on Friday March 18, 2005 @03:01PM (#11978055) Homepage Journal
    This one-liner was my bane in CTF at DefCon 10..

    perl -e 'while(1){fork();}'

    Course we were running VMware, initially with their very insecure RedHat 5.2, I think it was..

    Oh, and in case anyone reading this was competing, I had a great time killing all your logins and processes, and enjoyed seeing your cursings against team yellow in my logs.. but the perl thing, along with a very small team, took us out completely..

  • by fatboy ( 6851 ) * on Friday March 18, 2005 @04:40PM (#11979226)
    OMG, rm -rf / still works as root too!!

    Why is Linux still vulnerable by default?

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...