Security Software Linux

Some Linux Distros Found Vulnerable By Default 541

TuringTest writes "Security Focus carries an article about a security compromise found on several major distros due to bad default settings in the Linux kernel. 'It's a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions,' says the writer. The attack was performed by spawning lots of processes from a normal user shell. It is interesting to note that Debian was not among the distros that fell to the attack. The writer also praises the OpenBSD policy of Secure by Default."
This discussion has been archived. No new comments can be posted.

  • How long? (Score:0, Insightful)

    by Anonymous Coward on Friday March 18, 2005 @11:53AM (#11975848)
    Let's see how long it will take before someone says the study is invalid...
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @11:53AM (#11975853) Homepage Journal
    Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug. Inability to turn that feature off would be a bug, but given that few modern Linux boxes are actually used as multi-user remote-login hosts, it's a completely unnecessary overhead.

    And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).
  • by woginuk ( 866628 ) on Friday March 18, 2005 @11:54AM (#11975864)
    That is no reason why the same should be true of Linux. Or any other OS for that matter.
  • Yawn. (Score:3, Insightful)

    by BJH ( 11355 ) on Friday March 18, 2005 @11:54AM (#11975869)
    So what? Anybody in their right mind would have locked down their box if they're letting third parties access it remotely.

    Running around screaming "FORKBOMB! FORKBOMB! The sky's falling in!" seems to be a common pattern every few years. If you know what you're doing, it's trivial to prevent and if you don't know what you're doing, why are you running a public box?
  • by argent ( 18001 ) <peter@slashdot.2 ... m ['.ta' in gap]> on Friday March 18, 2005 @11:56AM (#11975891) Homepage Journal
    A forkbomb is just a relatively simplistic way to mount a resource exhaustion attack. I would be extremely wary of anyone who claims that their UNIX class operating system is immune to resource exhaustion from a local user. There are just too many resources that can be commandeered, and to lock them all down would leave you with a system that's so restricted as to be nearly useless as a general computing platform.

    It must be a slow day on /. if they're reporting this as news.
  • by Anonymous Coward on Friday March 18, 2005 @11:57AM (#11975898)
    And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

    Hope you're not administrating any multi-user Linux boxes then, since in Linux, the quotas only deal with drive space ;)
  • by oscartheduck ( 866357 ) on Friday March 18, 2005 @11:58AM (#11975908)
    No, I understand the article. I just couldn't resist the jab. The fact is that GNU/Linux ought to be the best it can be in and of itself. That some distributions are screwing that up and making very poor defaults is not to be forgiven. Not at all. Especially when it isn't difficult to do better.
  • by n0dalus ( 807994 ) on Friday March 18, 2005 @11:58AM (#11975915) Journal
    On the 3 distros listed as vulnerable, the default settings would stop any remote person from having a chance of getting a shell open on the box to perform the fork attack in the first place.
    If a person has enough access to the machine to be able to "forkbomb" it, then there are plenty of other nasty things they could do to it.
  • Re:Yawn. (Score:1, Insightful)

    by Anonymous Coward on Friday March 18, 2005 @11:59AM (#11975931)
    So what? Anybody in their right mind would have locked down their box if they're letting third parties access it remotely.

    Interesting that your "solution" is perfectly OK when applied to Linux, but is entirely unacceptable for the SAME problem with Windows.

  • Wrong attitude. (Score:5, Insightful)

    by Anonymous Coward on Friday March 18, 2005 @12:00PM (#11975942)
    All my servers have multiple users. Those users are system accounts to run different software, and I do not want any of them to be able to cause a problem to the entire server. Reasonable limits should be in place by default, and those of us who actually need higher limits for certain users, can raise those limits.

    Even on a single user desktop machine, it's nice to have limits so shitty software can't take down my entire machine. With limits I can just log in on another terminal and kill the offending program; without limits you get to reboot and lose any work you were doing.
  • by drsmack1 ( 698392 ) * on Friday March 18, 2005 @12:02PM (#11975967)
    Looks like everyone out there on Slashdot thinks this is not really a problem. Remember when it was discovered that you could get into an XP installation locally with a Win 2000 boot CD? Oh, the howling that was heard.

    Here is an issue that can be exploited remotely with only a user account.
  • Are you on drugs? (Score:2, Insightful)

    by Anonymous Coward on Friday March 18, 2005 @12:04PM (#11975995)
    Why is the word "on" in quotes? Yes, ftpd is part of the system. No, it is not running. No, it is not ready for exploit since, as mentioned, it's not running; and also, what vulnerabilities does it have? That's like saying OpenBSD is bad because it ships with popa3d. It's right there waiting to be exploited, if you are root, and you start it up, and someone finds an exploit for it.
  • by CaymanIslandCarpedie ( 868408 ) on Friday March 18, 2005 @12:05PM (#11976003) Journal
    Sounds much like the same reasoning MS used to use for having defaults set to a "user-friendly" setting.

    Now that it's been found in Linux, it's a "feature" ;-)

    Come on, I love Linux but the hypocrisy is a bit much ;-) You can admit this default is bad, or you can admit MS's settings were OK, but you cannot do both.
  • by woginuk ( 866628 ) on Friday March 18, 2005 @12:07PM (#11976020)
    I don't know how it works nowadays. But when I was new to UNIX, I would write the following program:
    #include <unistd.h>

    int main(void) {
        while (1)
            fork();   /* keep spawning children until the process table is full */
        return 0;
    }
    Compiling and running it would hang the box. You could ping the system, but nothing else would work.

    Ultimately, I would have to switch the box off and on again. And I remember thinking that this was a bug.

    A user should be allowed to do whatever he/she wants. But if the system becomes unusable, surely it is a bug.
  • by aendeuryu ( 844048 ) on Friday March 18, 2005 @12:07PM (#11976021)
    It's funny, isn't it, that on the same day we have a story about Linux distros being insecure by default, EXCEPT Debian, we have another story where Debian is being criticized for not releasing updates more often.

    Maybe, and here's a thought, just maybe, it's wise to take a decent, stable distro and perfect it, instead of taking a distro and submerging it in a state of perpetual flux with constant updates.

    Just a thought. I might be biased because it's a Debian-based distro that finally put a working Linux on my laptop. But you know what? Every now and then the bias is there for a reason...
  • Re:Retarded (Score:2, Insightful)

    by 0123456 ( 636235 ) on Friday March 18, 2005 @12:07PM (#11976027)
    "It was a report on how a fork bomb can take down default Linux installs,"

    Yes, and? I don't care about fork bombs, since I don't run them on my PC... being able to run as many processes as I choose on that PC is a feature, not a flaw. I do care about having scumware remotely installed on my PC through security holes in applications and the operating system, which is a flaw, not a feature.

    Seriously, if you're letting people log onto your PC and run fork bombs, you have far greater problems than a lack of resource limits in the default install.
  • Silly exploit (Score:5, Insightful)

    by SmallFurryCreature ( 593017 ) on Friday March 18, 2005 @12:08PM (#11976031) Journal
    As others have already commented this has little to do with security.

    Most linux systems are used as desktops, if you use them as a server you don't use the defaults. Now a user being able to crash his own system is nothing new. It ain't nice but as long as it is the user doing it then no problem. Now if this fork could be used to make apache explode and bring down the system THAT would be a boo boo.

    Ideally, yes, the system should not do things that bring it crashing down, but this is close to blaming a car for allowing me to plow into a wall. Not sure if I want a car/computer telling me what I can and cannot do.

    As to how to set the limits on the number of forks: maybe I got this completely wrong, but could it be that this depends entirely on your hardware? Perhaps the latest IBM mainframe can handle a few more than an ancient 386? How the hell is the distro supposed to know what I got?

    Security is other people doing stuff on my computer that I don't want and or know about. Me screwing stuff up is my business.

    BSD is very solid, this is known. It is also known that BSD has been around long before Linux, but has been sucking its exhaust fumes ever since Linux arrived. For every story about how much more secure BSD is, there are a dozen stories about Linux actually making a mark on the world. So good. Your BSD survived a forkbomb. But why exactly was the author running a Linux desktop then, if BSD is so much better?

    Another non-story on /. Is the internet going the way of TV?

  • Re:Wrong attitude. (Score:3, Insightful)

    by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @12:08PM (#11976033) Homepage Journal
    Reasonable limits should be in place by default,
    But given that distribution/kernel vendors do not have the first idea of
    i) My hardware
    ii) How many users I want
    iii) What programs / services will be running,

    how in the name of crikey are they supposed to determine what a "Reasonable limit" would be?
  • by mOdQuArK! ( 87332 ) on Friday March 18, 2005 @12:10PM (#11976059)
    I would be extremely wary of anyone who claims that their UNIX class operating system is immune to resource exhaustion from a local user.

    Eh? Most modern UNIX systems let you put some hard limits on all the collective ways that users can consume resources, including # processes, disk space, real/virtual memory, CPU time, etc. Any administrator who is responsible for a multi-user system should have those set to "reasonable" values, and no individual user (except for the administrator of course) would be able to bring down the system.

    What kind of resource are you thinking of that any user can exhaust which would stop the system (through resource exhaustion)? Log file messages?
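    For reference, a minimal sketch of what those knobs look like from a bash shell; the flags assume bash's ulimit builtin, the numbers are only placeholders, and disk space is the one item handled separately (by the quota tools) rather than by ulimit:
    $ ulimit -u 1024    # max processes (RLIMIT_NPROC)
    $ ulimit -v 524288  # max virtual memory in KB (RLIMIT_AS)
    $ ulimit -t 600     # max CPU time in seconds (RLIMIT_CPU)
    $ ulimit -n 1024    # max open file descriptors (RLIMIT_NOFILE)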

  • by Anonymous Coward on Friday March 18, 2005 @12:10PM (#11976066)
    You can limit users to using less than 100% of your resources, and those users can still do things. It's still a very usable system. I have this even on my laptop where I am the only user, so poorly written software or random mistakes don't result in me having to reboot my machine. Just the other day I messed up a Pike script and it used up all my RAM. But my ulimit was set to 128MB of RAM, so Pike just got an out-of-memory error and exited. Without ulimit it would have sucked up all my RAM and swap and I would have had to reboot.

    This kind of uninformed and ignorant attitude seems quite common in the Linux world now that most users aren't experienced Unix admins. It would be a good idea to learn about something before claiming to know how it should be set up.
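    For what it's worth, a minimal sketch of the setup described above, assuming bash and that the cap lives in ~/.bashrc; the commenter doesn't say exactly which limit they used, but -v (the address-space limit, in KB, so 131072 is 128 MB) is the one that reliably produces out-of-memory errors on Linux:
    $ echo 'ulimit -v 131072' >> ~/.bashrc  # cap each login shell at 128 MB of address space
    $ ulimit -v                             # new shells then report the cap
    131072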
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @12:11PM (#11976073) Homepage Journal
    I love Linux but the hypocrisy is a bit much. You can admit this default is bad, or you can admit MS's settings were OK, but you cannot do both.
    There's no hypocrisy, at least not from me. I've never criticised MS software because local users are able to fill up disk space, spawn a zillion processes, suck up the processor and generally exhaust the resources... That's what users do, they use up your resources.
  • by Jeff DeMaagd ( 2015 ) on Friday March 18, 2005 @12:13PM (#11976104) Homepage Journal
    The biggest security problems come from the inside. Other employees can't be trusted just because they work for the same company.
  • Re:Wrong attitude. (Score:2, Insightful)

    by Anonymous Coward on Friday March 18, 2005 @12:15PM (#11976125)
    They aren't, you are. They are supposed to make a reasonable default, not a reasonable limit. You look at the reasonable defaults, and decide to change them on a per-user basis for the different users (mysql, apache, etc). This is how Unix has always been; Linux distros taking huge steps backwards isn't something to be accepted.

    Lots of distros ask whether you are running a server or desktop; is it really so tough to ask if that server is dedicated to one task or many jobs, and set sane defaults based on your answer? Then, say, if you know you want mysql to be able to use more RAM, you can let the mysql user have a higher limit. But you don't have to worry about any of your services bringing your machine down.
  • by Anonymous Coward on Friday March 18, 2005 @12:19PM (#11976167)
    You can always allocate a big chunk of memory and write one byte to each page. The machine will be swapping so hard it would take an hour to log in and figure out which job is the problem. Yes, it doesn't "crash" the system, but unusable is the same as down in my book.

    The grandparent post is right. Absolute security is absolutely unusable. You need to trust your users a little so they can do the work they need to do. If they step over the line, most want to know how to avoid doing it again.

  • by kfg ( 145172 ) on Friday March 18, 2005 @12:20PM (#11976179)
    Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug.

    Sorry, but the ability of a mail reader to automagically run as many programs as it likes is a feature, not a bug.

    The point being that while this may, in some rare cases, be desirable, it shouldn't be the default setting, but rather something that the administrator has to enable for that rare user for which it is deemed both necessary and desirable.

    "Able" and "capable of" are not the same thing.

    It shouldn't be the responsibility of the admin to turn on every possible security feature, but rather to turn off only those he deems get in the way of the functioning of his system.

    It's exactly this lackadaisical approach to security that has made Windows the hell hole that it is. It certainly puts money in my pocket trying to fix it all, over and over and over again, but I'd far rather be spending my time and earning my money doing something useful.

    Like computing.

    KFG
  • by cybrthng ( 22291 ) on Friday March 18, 2005 @12:25PM (#11976234) Homepage Journal
    What exactly was the point? I can "forkbomb" my car by filling it up with too much crap; I can "forkbomb" my airplane by exceeding its design limitations.

    I guess my point is that certain limits are there by design, and it's up to operator training and guidance on how to work within those limits to make sure they aren't exceeded.
  • by shish ( 588640 ) on Friday March 18, 2005 @12:25PM (#11976245) Homepage
    Here is an issue that can be exploited remotely with only a user account.

    This is a fork bomb (a DoS technique), not 100% access. With this, all your secret files remain safe.

  • by Vihai ( 668734 ) on Friday March 18, 2005 @12:27PM (#11976258) Homepage
    The TCP/IP stack, recently vulnerable again to the LAND attack, is not part of the kernel?
  • by olympus_coder ( 471587 ) * on Friday March 18, 2005 @12:28PM (#11976277) Homepage
    Security is a balance between making a computer immune to attacks and providing capabilities.

    I run several labs at a university. I don't even bother to lock the Linux side of the machines down much past the base install. My users have never tried to cause problems. I don't even use quota.

    If someone ever does cause a problem, I'll take the lab down (causing a pretty good backlash from their fellow grad students) and fix it.

    In the meantime, I like the fact that when someone asks me "how much of X can I use?" I say: as much as you need, so long as it doesn't cause a problem. I'm never going to get mad if they run a large job that slows the machine down. I can always kill it, and ask them to run it on one of the dedicated computers.

    Point is, why limit something that is only an issue if you are working against your users, instead of for them? In 99% of the installs that is the way it is (or should be).
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @12:30PM (#11976298) Homepage Journal
    You should not allow unprivileged users to fork as many processes as they want.
    You should not allow untrusted users to fork as many processes as they want. However, very few boxes have untrusted users. If yours does, the command you need is ulimit
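    A minimal sketch of that, assuming bash; the numbers are arbitrary, and the point is that a hard limit can only be lowered by the user, while a soft limit can be raised back up to the hard one:
    $ ulimit -H -u 512  # hard cap on processes; the user cannot raise this again
    $ ulimit -S -u 256  # soft cap; the working limit, adjustable up to the hard cap
    $ ulimit -u         # shows the current soft limit
    256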
  • by mOdQuArK! ( 87332 ) on Friday March 18, 2005 @12:35PM (#11976381)
    You can always allocate a big chunk of memory and write one byte to each page.

    That would be caught by the limits on virtual memory usage. As I said, what resource are you thinking of that a decent system administrator couldn't limit to prevent a normal user from exhausting resources?

  • Re:Retarded (Score:5, Insightful)

    by Alioth ( 221270 ) <no@spam> on Friday March 18, 2005 @12:38PM (#11976415) Journal
    *Any* local exploit is *also* a potential remote exploit (just like the IRC conversation shows). I had someone nearly pwn a box of mine by using an exploit in a buggy PHP script, then trying to elevate privileges through a local exploit.

    Had I not considered local exploits important, I'd have had one nicely hacked box.
  • by Mr. Underbridge ( 666784 ) on Friday March 18, 2005 @12:40PM (#11976465)
    Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug. Inability to turn that feature off would be a bug, but given that few modern Linux boxes are actually used as multi-user remote-login hosts, it's a completely unnecessary overhead.

    Right, it's a feature, but the question isn't whether it should ever be allowed, but what the default setting should be. I think the article made a pretty good case that default should be no.

    And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).

    First, I think a lot of Unix people would be shocked, as the writer was, to find that there is no limit by default. Second, it basically means that anyone who successfully hacks into a user account can take the machine down. That applies to your desktop machine, not just "old-style" Unix type servers. Third, you mention the relative scarcity of old-style servers these days - they're still more common than a user who needs to run an INFINITE number of programs. Even capping somewhere in the thousands would work, keeping anyone from being hampered in their work.

    Basically, this is a case of idea vs. reality. You want the IDEA that you can run as many programs as you want, though you'll never need to. So in REALITY, a sane cap never hurts you. However, a lack of a cap provides very REAL security problems, either from a user or from someone who manages to hack a user account. Again, you really don't want EVERY userland exploit to lead to a kernel takedown, do you?

  • by slavemowgli ( 585321 ) * on Friday March 18, 2005 @12:44PM (#11976508) Homepage
    Anything that requires you to have a valid user account on the machine you want to attack is, by definition, not remote.
  • by DemENtoR ( 582030 ) on Friday March 18, 2005 @12:49PM (#11976575)
    You'll get OOM-killed faster than you can read the line "welcome to linux".
  • Re:Retarded (Score:5, Insightful)

    by phasm42 ( 588479 ) on Friday March 18, 2005 @12:56PM (#11976660)
    Two things: 1. Just because you don't care doesn't mean other people won't care. A lot of people (especially in a business environment) do have more than one person logging in. 2. The article is trying to point out something that Linux installs could improve on. That is all.
  • Re:Retarded (Score:2, Insightful)

    by compass46 ( 259596 ) on Friday March 18, 2005 @12:58PM (#11976683)

    What about a poorly written program run by you with an unintentional bug that ends up causing this? What about a remote exploit which allows arbitrary code execution? Do you inspect every line of code in programs that you use to make sure this isn't going to happen?

    Userland programs should not be able to bring an entire system to its knees. Come on, when I first started with Linux, Slashdotters made fun of Win98 because a single program could crash the machine.

  • Re:Wrong attitude. (Score:2, Insightful)

    by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @01:00PM (#11976696) Homepage Journal
    What makes you think any given distro wouldn't set _reasonable_ default limits?
    Because what's "reasonable" varies from person to person, machine to machine, and usage pattern to usage pattern.
    How many processes do you really need?
    I've no idea. The trouble is, you've no idea how many processes I need, either. And neither do Bill Gates, Linus Torvalds, Bruce Perens, the Debian process-ulimit-policy-discuss mailing list, the Grinch or Eric Bakke. That makes it pretty hard to set a "sane" limit that works for everyone
    How much memory?
    All of it, and sometimes a little bit more.
    I mean, what would you rather have: a machine that's vulnerable by default to resource exhaustion, or a machine that's _not_ vulnerable by default?
    If only it were that simple. If I'm running a Linux mail server on a 386/66 with 8MB RAM, then it's not going to take a lot of processes to down the box. However, if I'm running a busy web server on a dual Xeon box, then Apache is going to want to spawn more processes than would kill my mail server...

    So, if you protect my box from resource exhaustion with a default hard limit, one of two things will happen.

    i) If the limit is low, my web server won't function very well, as it won't be allowed to spawn processes.
    ii) If the limit is high, my mail server could be killed by a fork bomb before the limits get hit.

    Simply put, you're treating process limits as a panacea, when they aren't.
  • by NewStarRising ( 580196 ) <NSRNO@SPAMmaddwarf.co.uk> on Friday March 18, 2005 @01:09PM (#11976796) Homepage
    "A user should be allowed to do whatever he/she wants"

    Then why not let them run as root?
    I thought the idea of User Accounts was to limit what they could do, at least as far as system-files, machine-security, inhibiting other users, etc goes.

    No, I don't consider this a 'bug'. I consider it an "inappropriate default setting".
  • by Stephen Samuel ( 106962 ) <samuel@bcgre e n . com> on Friday March 18, 2005 @01:14PM (#11976850) Homepage Journal
    Sorry, but the ability for a non-privileged user to run as many programs as they like is a feature, not a bug.

    Just how many regular users expect to run 20000 processes at once? (Or even 200?) When that happens it's almost always caused by a bug (or malicious activity). Right now, I have 50 user processes running. I'm a power user, but I'd probably never get blocked by a limit of 1000 unless I was doing something really weird -- and something that weird should come with instructions on modifying the kernel settings.

    Yes, it should always remain possible to set up your system so that you can run massive numbers of processes and/or threads, but the default should be to keep numbers to a dull roar in favour of system stability. People whose needs are such that they actually and legitimately want to fork massive numbers of processes are also the kinds of people who wouldn't have a hard time figuring out how to change the kernel settings to allow it.

    As such, the default should err on the side of security, but allow a knowledgeable user to do whatever the heck he wants.

    Thing is, though, that limits against local resource-exhaustion exploits are difficult to set. You want to allow a user relatively free rein -- even to the point of stressing the system -- but still allow enough of a reserve so that an admin can log in and shut down a user who's gone overboard. You also want to set a limit that will be reasonable 5 years down the road when processors are 10 times as fast (and/or 20-way SMP is standard issue).

    Something to note here in Linux's favour: even though the forkbomb brought the system to its knees, it stayed up. Although it might have taken half an hour to do, an admin may have actually been able to log in and kill the offending processes.

  • Re:Retarded (Score:3, Insightful)

    by Yaztromo ( 655250 ) on Friday March 18, 2005 @01:21PM (#11976939) Homepage Journal
    Yes, and? I don't care about fork bombs, since I don't run them on my PC... being able to run as many processes as I choose on that PC is a feature, not a flaw.

    A fork bomb doesn't necessarily have to be due to a purposeful attack. A software bug can easily cause a fork bomb by going into an endless loop launching new processes. Should this take your system completely down?

    Admittedly, you probably shouldn't be seeing such a serious bug in any released software. But what if you're a developer, and have code that is forking in a loop which, due to logic problems, never exits? Should you be forced to reboot your system?

    I can think of a lot of such instances where a runaway process might start forking -- and personally, I'd prefer to be able to kill the process instead of being forced to reboot. I doubt I'm ever going to purposefully spawn 5000 simultaneous processes. I think you'd have a valid complaint if the OS was limiting you to 50 processes, but there is a realistic upper limit on the number of processes a given hardware configuration is going to be able to reliably handle -- why shouldn't the kernel prevent the system from surpassing such a limit? Do you really want to be able to open enough processes to kill your system so badly the only way out is a reboot?

    Yaz.

  • by theMidniteMinstrel ( 866721 ) on Friday March 18, 2005 @01:22PM (#11976956) Homepage
    The moral of the story is that any system is only as good as the system administrator makes it. If an administrator leaves this problem unaddressed on a mission-critical system, or even a web server that sees heavy traffic, that administrator deserves to be fired. Come on, people, let's get over this "out of the box security" metric. It's worthless.
  • by Paradox ( 13555 ) on Friday March 18, 2005 @01:25PM (#11976988) Homepage Journal
    If you follow the link in the article to the original entry from Security Focus, [securityfocus.com] you'd see that a malicious remote user compromised a machine that was patched up to current.

    Seriously, if you're letting people log onto your PC and run fork bombs, you have far greater problems than a lack of resource limits in the default install.


    Look, you seriously misunderstand something here. Run a server long enough and it gets very likely that even with the latest patches, you will get attacked. If someone breaks into your box, exactly how much power do you want them to have?

    The ability to bring the machine to a screeching halt with an attack that dates back to the Land Before Time is not a feature! It is a security hole, and it's every bit as important to fix as your externally visible holes.

    Because, one of these days some cracker is going to get the drop on your box. You'd better hope your box is ready for that.
  • by Anonymous Coward on Friday March 18, 2005 @01:27PM (#11977010)
    We aren't saying that default limits will be perfect for everyone. We are saying that it's better to have to raise your limits IF YOU NEED TO, than to have your machine vulnerable to being completely taken down trivially, very possibly by remote users with no accounts, just from making your services work harder than you expected.

    If you are running a server that needs hundreds of Apache processes running, then you know that and can raise it. Someone who is new to Linux won't need that, and won't know how to set up limits for themselves. So you make the machine secure by default, and allow advanced users with advanced needs to tweak things as they need.

    The best thing I can think of to illustrate the point to you is your apache example. By default apache won't let you have more than 150 users connected. This is a sane default to protect from resource exhaustion. If you need more than that, you can set it yourself. People have some protection by default, but advanced users can customize the settings for their needs.

    I cannot believe in 2005 I am arguing with someone who thinks secure by default is a bad idea because it might inconvenience you.
  • by gowen ( 141411 ) <gwowen@gmail.com> on Friday March 18, 2005 @01:35PM (#11977079) Homepage Journal
    than to have your machine vulnerable to being completely taken down trivially,
    There is nothing trivial about obtaining a user account on my machine. If someone you don't trust can run a forkbomb on your machine, you've already been pwned, and it's too late to apply a band-aid.
  • by kfg ( 145172 ) on Friday March 18, 2005 @01:38PM (#11977117)
    There's a word for admins who trust the users:

    "Fired."

    I don't even trust my own mother on her own network. She loves me for it. Her system stays up.

    KFG
  • by binand ( 442977 ) on Friday March 18, 2005 @01:46PM (#11977220)
    OpenBSD does not install wu-ftpd [wuftpd.org]. Instead, they have their own ftpd, called ftpd-BSD (which has been ported to Linux as well, look it up on rpmfind or something). It is wu-ftpd that has a reputation of being buggy.
  • -5 Advocacy (Score:3, Insightful)

    by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Friday March 18, 2005 @01:46PM (#11977222) Homepage Journal
    Honestly, when will Slashdot stop posting trolls as stories? This is a clear case of "BSD is better than Linux because feature X has a DEFAULT that BSD people think is wrong". There's no security implication in the sense that most people would think of it (no remote root exploit, no remote exploit, no remote priv escalation, no remote DoS, no local root exploit, and no local priv escalation... just a local DoS... if that's what you were looking for, I can show you some OpenGL code you can throw at the display that will render your bus unusable on any display-acceleration-capable graphics card regardless of OS).

    Please, just stop posting this cruft as "news for nerds"; it's not news, and any self-respecting nerd knows bad advocacy when he/she sees it. I like BSD. I was a major BSD guy back in the day, from BSD 4.2 to Ultrix up to SunOS 4.x. BSD is great.

    Linux is also quite nice.

    They both have a ton of great features and a ton of other annoying features / arguably bugs. Linux has more features and more bugs because more people contribute to it, but that's both a blessing and a curse for the BSDs.

    Once again, carry on. Nothing to see here.
  • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Friday March 18, 2005 @01:48PM (#11977243) Homepage Journal
    $ grep foo /dev/zero
    grep: memory exhausted
    $ ulimit -a
    time(cpu-seconds) unlimited
    file(blocks) unlimited
    coredump(blocks) 0
    data(kbytes) unlimited
    stack(kbytes) 8192
    lockedmem(kbytes) unlimited
    memory(kbytes) 81920
    nofiles(descriptors) 1024
    processes 3071
    The default for memory is unlimited, which does indeed create a DoS "attack" for "grep bomb" and other inadvertent application bugs.

    This is a case of bad defaults, not a kernel problem. I recommend a memory limit of no more than 1/4 of physical memory, preferably less.

    AIX also has cruddy defaults, but ulimit -m limits physical RAM, not virtual RAM. That way, a single process with runaway memory use will just start swapping like crazy and let the rest of the system keep running. Of course, even then a dozen or so such processes will still bring the system to a crawl. I would like to see physical RAM limited by user id.
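    A sketch of how a saner default could be shipped, assuming the usual pam_limits setup that reads /etc/security/limits.conf; note that the "as" item is a per-process address-space cap in KB, not the per-user physical-RAM limit wished for above, and 262144 (roughly 1/4 of a hypothetical 1 GB box) is only an illustrative figure:
    # /etc/security/limits.conf -- format: <domain> <type> <item> <value>
    # cap each process's address space at ~256 MB (value is in KB)
    *       hard    as      262144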

  • Re:Yawn. (Score:4, Insightful)

    by Taxman415a ( 863020 ) on Friday March 18, 2005 @01:53PM (#11977281) Homepage Journal
    This level of fanboyness is unbelievable. Well, actually, I should not be surprised; blindness to Linux's faults is endemic here on Slashdot.

    The author is not "Running around screaming...", he is simply very surprised that a local user can exhaust a system so easily. Maybe every single admin should think of all n possible security problems every single time they take a box live, but people are human.

    Which one is worse: limits in place by default so that an admin needs to know how to raise them when necessary and the forkbomb would not work, or no limits in place and having to know to set them or else the box can be brought to its knees? Secure by default or free for all.

    I suppose you think every default install should also have telnetd enabled by default, because any admin with half a brain should know how to turn that off? Point is, admins are fallible; the default should be the lower total risk/cost option. I think it's clear which one that is.
  • by Anonymous Coward on Friday March 18, 2005 @02:10PM (#11977449)
    With Windows XP Starter Edition, trojans run you!
  • Re:Silly exploit (Score:3, Insightful)

    by ArbitraryConstant ( 763964 ) on Friday March 18, 2005 @02:11PM (#11977452) Homepage
    "As to how to set the limits on the number of forks. Maybe I got this completly wrong but could it be that this depends entirely on your hardware? Perhaps the latest IBM mainframe can handle a few more then an ancient 386? How the hell is the distro supposed to know what I got?"

    man 2 setrlimit
    "RLIMIT_NPROC
    The maximum number of processes that can be created for the real
    user ID of the calling process. Upon encountering this limit,
    fork() fails with the error EAGAIN."
    It's part of POSIX. It should work the same on any *nix on any hardware.

    "BSD is very solid, this is known. It is also known that BSD has been along long before linux and but has been sucking it exhaust fumes ever since it arrived. For every story about how much more secure BSD is there are a dozen stories about linux actually making a mark on the world. So good. Your BSD survived a forkbomb. But why exactly was the author running a linux desktop then if BSD is so much better?"

    You're clearly unaware of the various marks BSD has made. Essentially every OS out there today runs BSD code, including Linux and even Windows, and the BSDs continue to break new ground in ways that are frankly too numerous to go over here.

    Also, most of the "difference" Linux has made is actually software that will run on BSD as well.
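    A quick sketch of that same limit exercised from the shell, assuming bash and that the fork bomb quoted earlier in the thread has been compiled as ./forkbomb (a hypothetical name):
    $ ulimit -u 200   # bash's front end to RLIMIT_NPROC
    $ ./forkbomb      # the C program from the earlier comment
    # fork() starts returning EAGAIN once the user hits 200 processes;
    # the bomb keeps spinning, but the rest of the system stays usable and
    # root (or Ctrl-C on the whole process group) can still clean it up.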
  • Re:Wrong attitude. (Score:2, Insightful)

    by harrkev ( 623093 ) <kevin@harrelson.gmail@com> on Friday March 18, 2005 @02:23PM (#11977590) Homepage
    The trouble is, you've no idea how many processes I need, either. And neither do Bill Gates, Linus Torvalds, Bruce Perens, the Debian process-ulimit-policy-discuss mailing list, the Grinch or Eric Bakke. That makes it pretty hard to set a "sane" limit that works for everyone
    First of all, let's separate the world into two different types of people: normal users and power users. Normal users will run a web browser, office applications, an e-mail client, etc. Power users are the ones who always need more resources.

    Limits should be set substantially higher than what a normal user will ever need. A normal user should never even bump into the limits.

    Now, if you ARE a power user, then you should know how to change those limits. This gives you the best of all possible worlds. You get protection, but you can CHANGE THE DEFAULTS if you need to! Simple. If you are setting up apache, you should know enough about it to set reasonable limits.

    Let me put it another way. What would you think if I said "ZoneAlarm doesn't know what ports I need open. It should open up ALL ports by default. I will shut down the ones that I don't need." That would be a pretty useless firewall.
  • history (Score:1, Insightful)

    by Anonymous Coward on Friday March 18, 2005 @02:51PM (#11977939)
    All the people saying "so what" and arguing against OS limits are not surprising. And that these people are the same ones who will gloat about their Linux staying up when Windows blue-screens is to be expected from the usual /. suspects.

    But what those of us who have been around for some time are amazed by is that we had this exact same discussion back pre-1990, and we thought it had been settled then!

    Not only have people not learned since then, we are going in retrograde cycles, and revisiting the f**ing stuff we thought had been fixed.

    What do I know? I run almost no desktops, only servers, and I want limits on by default; and I want back all the lost time in my life that I've spent turning crap off and tightening up after I've installed.

    Every vendor Unix sucks, and Linuxes are now vendor Unixes, and they all suck for the same old, old, old reasons. And it appears every generation we have to fight the same battles all over again, against the same morons. At least the BSDs have a clue, and actually show improvement over time.

  • by Anonymous Coward on Friday March 18, 2005 @03:22PM (#11978325)
    Except this isn't a fault. "Safety researchers discovered that cars today can exceed the speed limit of 65 MPH. This is a serious breach of safety protocols that can potentially cause serious bodily injury and possibly death."

    I mean, really, "Linux lets me shoot myself in the foot". Good to know. DON'T SHOOT YOURSELF IN THE FOOT.
  • Oh, but wait.... (Score:2, Insightful)

    by Marthisdil ( 606679 ) <marthisdil@[ ]mail.com ['hot' in gap]> on Friday March 18, 2005 @03:34PM (#11978476)
    The Linux fanatics are trying to say that Linux is ready for the desktop and home use....Yeah, like some home user is gonna be able to squash security issues like these on their own.

    EXACTLY why it'll be a very long time before users will be comfortable.

    But wait....Linux is more secure than Windows....riiiiiiight..../sarcasm off
  • by Mad Merlin ( 837387 ) on Friday March 18, 2005 @04:02PM (#11978787) Homepage
    You can set hard limits on the amount of RAM a user may consume, in addition to how many processes they can spawn, as well as a number of other useful things, with a trivial amount of effort in Linux; have a look at /etc/security/limits.conf.
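    A minimal sketch of that file, with purely illustrative numbers; soft limits are what a session starts with, hard limits are the ceiling, and a service account like mysql (mentioned elsewhere in this discussion) can be given more headroom than ordinary users:
    # /etc/security/limits.conf
    # a modest default for everyone
    *        soft    nproc   256
    *        hard    nproc   1024
    # a service account that legitimately needs more
    mysql    hard    nproc   4096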
  • Re:Yawn. (Score:3, Insightful)

    by RedBear ( 207369 ) <redbear.redbearnet@com> on Friday March 18, 2005 @05:06PM (#11979521) Homepage
    Stability is supposed to be one of the selling points of Linux. What exactly is the benefit in having a default setting that allows any user on any Linux desktop computer to lock up their machine just by starting up some application or script that consumes too many resources? Thanks but no thanks. I want both my server and my personal desktop computer to be able to recover from such things without a hard reboot. Whether they are malicious or just a mistake in some shell script doesn't matter. If it takes down the box, it's bad, and should be disabled by default.

    Nobody in their right mind should be defending a lack of stability in Linux. Those who want to push the box to its absolute limits (and risk a hard lockup) can find out how to activate the options that allow this. It should NEVER be the default to have options active that compromise the stability of a machine so easily. Desktops are just as important as servers in this regard, as far as I'm concerned.

    Linux is supposed to be better than Windows, remember? Raising a valid objection to a default setting capable of causing (or allowing) real-world problems on everyday computer systems is not "running around screaming".

    Oh, and by the way, not everyone on Earth is a trained Linux system administrator, and being a Linux admin isn't a prerequisite to being in one's right mind. There are plenty of perfectly intelligent people who have no idea how to "lock down their box" and never will, just like most people have no idea how to break down a carburetor or build a nuclear reactor. Some people have other things to do. If Linux distros want to market to the general public, they should be safe to use by the general public. We're talking about DEFAULTS here that are being used by every person running Linux in general. Not just servers run by Linux gurus. Your statement comes across like saying people who aren't expert car mechanics shouldn't be allowed on public roadways. In other words, a little elitist, don't you think?

  • by dbIII ( 701233 ) on Friday March 18, 2005 @08:34PM (#11981241)
    I do not trust everyone with access to a win2000 boot CD.
    You can even get into a headless Solaris box with an install CD and a serial terminal. You can then blank the root password and boot the system normally - you then have full access. Any machine that people have full physical access to is vulnerable to those people - the most locked-down box is still vulnerable to someone pulling out the drive with its root partition, mounting it, and editing password files.

    Technology is no substitute for physical security, as Kevin Kline's character in "A Fish Called Wanda" showed by getting a gun past a metal detector in the common airport security situation of watching the luggage and not the person.

    If you can boot a machine with your own media or pull it apart and no-one notices, then you can have full control - whether it is win2k or whatever.

    As for network security, virtual machines with quotas have to be the way to go so long as there are loonies with root passwords that enable telnet, install a compiler and use "coffee" as a password.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...