Some Linux Distros Found Vulnerable By Default 541
TuringTest writes "Security Focus carries an article about a security compromise found on several major distros due to bad default settings in the Linux kernel. 'It's a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions', says the writer. The attack was performed by spawning lots of processes from a normal user shell. It is interesting to note that Debian was not among the distros that fell to the attack. The writer also praises the OpenBSD policy of Secure by Default."
How long? (Score:0, Insightful)
Sheesh, it's a fork bomb (Score:4, Insightful)
And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).
Re:In other news... (Score:2, Insightful)
Yawn. (Score:3, Insightful)
Running around screaming "FORKBOMB! FORKBOMB! The sky's falling in!" seems to be a common pattern every few years. If you know what you're doing, it's trivial to prevent and if you don't know what you're doing, why are you running a public box?
Not a vulnerability. (Score:5, Insightful)
It must be a slow day on
Re:Sheesh, it's a fork bomb (Score:3, Insightful)
Hope you're not administrating any multi-user Linux boxes then, since in Linux, the quotas only deal with drive space
Re:In other news... (Score:5, Insightful)
And of course, shell access is so easy to get (Score:5, Insightful)
If a person has enough access to the machine to be able to "forkbomb" it, then there's plenty of other nasty things you could do to it.
Re:Yawn. (Score:1, Insightful)
Interesting that your "solution" is perfectly OK when applied to Linux, but is entirely unacceptable for the SAME problem with Windows.
Wrong attitude. (Score:5, Insightful)
Even on a single-user desktop machine, it's nice to have limits so shitty software can't take down my entire machine. With limits I can just log in on another terminal and kill the offending program; without limits you get to reboot and lose any work you were doing.
My God, the hypocrisy! (Score:5, Insightful)
Here is an issue that can be exploited remotely with only a user account.
Are you on drugs? (Score:2, Insightful)
Re:Sheesh, it's a fork bomb (Score:4, Insightful)
Now that it's been found in Linux, it's a "feature"
Come on, I love Linux but the hypocrisy is a bit much
Re:Sheesh, it's a fork bomb (Score:3, Insightful)
#include <unistd.h>

int main(void) {
    while (1)
        fork();
    return 0;
}
Compiling and running it would hang the box. You could ping the system, but nothing else would work.
Ultimately, I would have to switch the box off and on again. And I remember thinking that this was a bug.
A user should be allowed to do whatever he/she wants. But if the system becomes unusable, surely it is a bug.
Isn't it friggin' ironic (Score:5, Insightful)
Maybe, and here's a thought, just maybe, it's wise to take a decent, stable distro and perfect it, instead of taking a distro and submerging it in a state of perpetual flux with constant updates.
Just a thought. I might be biased because it's a Debian-based distro that finally put a working Linux on my laptop. But you know what? Every now and then the bias is there for a reason...
Re:Retarded (Score:2, Insightful)
Yes, and? I don't care about fork bombs, since I don't run them on my PC... being able to run as many processes as I choose on that PC is a feature, not a flaw. I do care about having scumware remotely installed on my PC through security holes in applications and the operating system, which is a flaw, not a feature.
Seriously, if you're letting people log onto your PC and run fork bombs, you have far greater problems than a lack of resource limits in the default install.
Silly exploit (Score:5, Insightful)
Most Linux systems are used as desktops; if you use them as a server, you don't use the defaults. Now, a user being able to crash his own system is nothing new. It ain't nice, but as long as it is the user doing it, then no problem. Now if this fork bomb could be used to make Apache explode and bring down the system, THAT would be a boo-boo.
Ideally, yes, the system should not do things that bring it crashing down, but this is close to blaming a car for allowing me to plow into a wall. Not sure if I want a car/computer telling me what I can and cannot do.
As to how to set the limits on the number of forks: maybe I've got this completely wrong, but could it be that this depends entirely on your hardware? Perhaps the latest IBM mainframe can handle a few more than an ancient 386? How the hell is the distro supposed to know what I've got?
Security is other people doing stuff on my computer that I don't want and/or know about. Me screwing stuff up is my business.
BSD is very solid, this is known. It is also known that BSD has been around long before Linux but has been sucking its exhaust fumes ever since Linux arrived. For every story about how much more secure BSD is, there are a dozen stories about Linux actually making a mark on the world. So good, your BSD survived a forkbomb. But why exactly was the author running a Linux desktop, then, if BSD is so much better?
Another non-story on /. Is the internet going the way of TV?
Re:Wrong attitude. (Score:3, Insightful)
i) My hardware
ii) How many users I want
iii) What programs / services will be running,
how in the name of crikey are they supposed to determine what a "Reasonable limit" would be?
Re:Not a vulnerability. (Score:3, Insightful)
Eh? Most modern UNIX systems let you put hard limits on all the collective ways that users can consume resources, including # processes, disk space, real/virtual memory, CPU time, etc. Any administrator who is responsible for a multi-user system should have those set to "reasonable" values, and no individual user (except for the administrator, of course) would be able to bring down the system.
What kind of resource are you thinking of that any user can exhaust which would stop the system (through resource exhaustion)? Log file messages?
No, this is completely incorrect. (Score:2, Insightful)
This kind of uninformed and ignorant attitude seems quite common in the Linux world now that most users aren't experienced Unix admins. It would be a good idea to learn about something before claiming to know how it should be set up.
Re:Sheesh, it's a fork bomb (Score:2, Insightful)
Re:Not your usual vulnerability (Score:3, Insightful)
Re:Wrong attitude. (Score:2, Insightful)
Lots of distros ask whether you are running a server or a desktop; is it really so tough to ask whether that server is dedicated to one task or many jobs, and set sane defaults based on your answer? Then, say, if you know you want MySQL to be able to use more RAM, you can give the mysql user a higher limit. But you don't have to worry about any of your services bringing your machine down.
Re:Not a vulnerability. (Score:2, Insightful)
The grandparent post is right. Absolute security is absolutely unusable. You need to trust your users a little so they can do the work they need to do. If they step over the line most want to know how to avoid doing it again.
Re:Sheesh, it's a fork bomb (Score:4, Insightful)
Sorry, but the ability of a mail reader to automagically run as many programs as it likes is a feature, not a bug.
The point being that while this may, in some rare cases, be desirable, it shouldn't be the default setting, but rather something that the administrator has to enable for that rare user for which it is deemed both necessary and desirable.
"Able" and "capable of" are not the same thing.
It shouldn't be the responsibility of the admin to turn on every possible security feature, but rather to turn off only those ones he deems gets in the way of the functioning of his system.
It's exactly this lackadaisical approach to security that has made Windows the hellhole that it is. It certainly puts money in my pocket trying to fix it all, over and over and over again, but I'd far rather be spending my time and earning my money doing something useful.
Like computing.
KFG
I felt dumber for reading that article. (Score:3, Insightful)
I guess my point is that certain limits are by design, and it's up to operator training and guidance on how to work with those limits to make sure they don't exceed them.
Re:My God, the hypocracy! (Score:3, Insightful)
This is a fork bomb (a DoS technique), not 100% access. With this, all your secret files remain safe.
Re:In other news... (Score:3, Insightful)
He has missed the point... (Score:3, Insightful)
I run several labs at a university. I don't even bother to lock the Linux side of the machines down much past the base install. My users have never tried to cause problems. I don't even use quota.
If someone ever does cause a problem, I'll take the lab down (cause a pretty good backlash from their fellow grad students) and fix it.
In the meantime, I like the fact that when someone asks me "how much of X can I use?" I say: as much as you need, so long as it doesn't cause a problem. I'm never going to get mad if they run a large job that slows the machine down. I can always kill it and ask them to run it on one of the dedicated computers.
Point is, why limit something that is only an issue if you are working against your users, instead of for them? In 99% of the installs that is the way it is (or should be).
Re:Sheesh, it's a fork bomb (Score:2, Insightful)
Re:Not a vulnerability. (Score:3, Insightful)
That would be caught by the limits on virtual memory usage. As I said, what resource are you thinking of that a decent system administrator couldn't limit to prevent a normal user from exhausting resources?
Re:Retarded (Score:5, Insightful)
Had I not considered local exploits important, I'd have had one nicely hacked box.
Not worth the risk. (Score:5, Insightful)
Right, it's a feature, but the question isn't whether it should ever be allowed, but what the default setting should be. I think the article made a pretty good case that default should be no.
And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).
First, I think a lot of Unix people would be as shocked as the writer was to find that this is the default. Second, it basically means that anyone who successfully hacks into a user account can take the machine down. That applies to your desktop machine, not just "old-style" Unix-type servers. Third, you mention the relative scarcity of old-style servers these days; they're still more common than a user who needs to run an INFINITE number of programs. Even capping somewhere in the thousands would work, keeping anyone from being hampered in their work.
Basically, this is a case of idea vs. reality. You want the IDEA that you can run as many programs as you want, though you'll never need to. So in REALITY, a sane cap never hurts you. However, a lack of a cap provides very REAL security problems, either from a user or from someone who manages to hack a user account. Again, you really don't want EVERY userland exploit to lead to a kernel takedown, do you?
Re:My God, the hypocrisy! (Score:3, Insightful)
Re:Welcome to Linux (Score:2, Insightful)
Re:Retarded (Score:5, Insightful)
Re:Retarded (Score:2, Insightful)
What about a poorly written program run by you with an unintentional bug that ends up causing this? What about a remote exploit which allows arbitrary code execution? Do you inspect every line of code in the programs that you use to make sure this isn't going to happen?
User-land programs should not be able to bring an entire system to its knees. Come on, when I first started with Linux, Slashdotters made fun of Win98 because a single program could crash the machine.
Re:Wrong attitude. (Score:2, Insightful)
So, if you protect my box from resource exhaustion with a default hard limit, one of two things will happen.
i) If the limit is low, my web server won't function very well, as it won't be allowed to spawn processes.
ii) If the limit is high, my mail server could be killed by a fork bomb before the limits get hit.
Simply put, you're treating process limits as a panacea, which they aren't.
Re:Sheesh, it's a fork bomb (Score:1, Insightful)
Then why not let them run as root?
I thought the idea of User Accounts was to limit what they could do, at least as far as system-files, machine-security, inhibiting other users, etc goes.
No, I don't consider this a 'bug'. I consider it an "inappropriate default setting".
We're talking DEFAULTS (Score:5, Insightful)
Just how many regular users expect to run 20000 processes at once? (Or even 200?) When that happens it's almost always caused by a bug (or malicious activity). Right now, I have 50 user processes running. I'm a power user, but I'd probably never get blocked by a limit of 1000 unless I was doing something really weird -- and something that weird should come with instructions on modifying the kernel settings.
Yes, it should always remain possible to set up your system so that you can run massive numbers of processes and/or threads, but the default should be to keep numbers to a dull roar in favour of system stability. People whose needs are such that they actually and legitimately want to fork massive numbers of processes are also the kinds of people who wouldn't have a hard time figuring out how to change the kernel settings to allow it.
As such, the default should err on the side of security, but allow a knowledgeable user to do whatever the heck he wants.
Thing is, though, that local resource-exhaustion limits are difficult to set. You want to allow a user relatively free rein -- even to the point of stressing the system -- but still keep enough of a reserve so that an admin can log in and shut down a user who's gone overboard. You also want to set a limit that will be reasonable five years down the road when processors are ten times as fast (and/or 20-way SMP is standard issue).
Something to note here in Linux's favour: even though the forkbomb brought the system to its knees, it stayed up. Although it might have taken half an hour to do, an admin might actually have been able to log in and kill the offending processes.
Re:Retarded (Score:3, Insightful)
A fork bomb doesn't necessarily have to be due to a purposeful attack. A software bug can easily cause a fork bomb by going into an endless loop launching new processes. Should this take your system completely down?
Admittedly, you probably shouldn't be seeing such a serious bug in any released software. But what if you're a developer with code that is forking in a loop which, due to logic problems, never exits? Should you be forced to reboot your system?
I can think of a lot of such instances where a runaway process might start forking -- and personally, I'd prefer to be able to kill the process instead of being forced to reboot. I doubt if I'm ever going to purposefully spawn 5000 simultaneous processes. I think you'd have a valid complaint if the OS was limiting you to 50 processes, but there is a realistic upper limit of the number of processes a given hardware configuration is going to be able to reliably handle -- why shouldn't the kernel prevent the system from surpassing such a limit? Do you really want to be able to open enough processes to kill your system so badly the only way out is a reboot?
Yaz.
Re:In other news... (Score:2, Insightful)
YOU may not run them... (Score:4, Insightful)
Look, you seriously misunderstand something here. Run a server long enough and it becomes very likely that, even with the latest patches, you will get attacked. If someone breaks into your box, exactly how much power do you want them to have?
The ability to bring the machine to a screeching halt with an attack that dates back to the Land Before Time is not a feature! It is a security hole, and it's every bit as important to fix as your externally visible holes.
Because, one of these days some cracker is going to get the drop on your box. You'd better hope your box is ready for that.
No, you are treating it as a panacea. (Score:5, Insightful)
If you are running a server that needs hundreds of Apache processes running, then you know that and can raise the limit. Someone who is new to Linux won't need that, and won't know how to set up limits for themselves. So you make the machine secure by default, and allow advanced users with advanced needs to tweak things as they need.
The best thing I can think of to illustrate the point to you is your apache example. By default apache won't let you have more than 150 users connected. This is a sane default to protect from resource exhaustion. If you need more than that, you can set it yourself. People have some protection by default, but advanced users can customize the settings for their needs.
I cannot believe that in 2005 I am arguing with someone who thinks secure by default is a bad idea because it might inconvenience you.
Re:No, you are treating it as a panacea. (Score:2, Insightful)
Re:Sheesh, it's a fork bomb (Score:2, Insightful)
"Fired."
I don't even trust my own mother on her own network. She loves me for it. Her system stays up.
KFG
Re:"Secure By Default"? (Score:2, Insightful)
-5 Advocacy (Score:3, Insightful)
Please, just stop posting this cruft as "news for nerds"; it's not news, and any self-respecting nerd knows bad advocacy when he/she sees it. I like BSD. I was a major BSD guy back in the day, from BSD 4.2 to Ultrix up to SunOS 4.x. BSD is great.
Linux is also quite nice.
They both have a ton of great features and a ton of other annoying features / arguably bugs. Linux has more features and more bugs because more people contribute to it, but that's both a blessing and a curse for the BSDs.
Once again, carry on. Nothing to see here.
Problem with defaults, not kernel (Score:3, Insightful)
This is a case of bad defaults, not a kernel problem. I recommend a per-process memory cap of no more than 1/4 of physical memory, preferably less.
AIX also has cruddy defaults, but there ulimit -m limits physical RAM, not virtual RAM. That way, a single process with runaway memory use will just start swapping like crazy and let the rest of the system keep running. Of course, even then a dozen or so such processes will still bring the system to a crawl. I would like to see physical RAM limited by user ID.
Re:Yawn. (Score:4, Insightful)
The author is not "Running around screaming...", he is simply very surprised that a local user can exhaust a system so easily. Maybe every single admin should think of all n possible security problems every single time they take a box live, but people are human.
Which one is worse: limits in place by default so that an admin needs to know how to raise them when necessary and the forkbomb would not work, or no limits in place and having to know to set them or else the box can be brought to its knees? Secure by default or free for all.
I suppose you think every default install should also have telnetd enabled by default because any admin with half a brain should know how to turn that off? Point is admins are fallible, the default should be the lower total risk/cost option. I think which one that is is clear here.
Re:Thank god I use Windows (Score:1, Insightful)
Re:Silly exploit (Score:3, Insightful)
See man 2 setrlimit. It's part of POSIX. It should work the same on any *nix on any hardware.
"BSD is very solid, this is known. It is also known that BSD has been around long before Linux but has been sucking its exhaust fumes ever since Linux arrived. For every story about how much more secure BSD is, there are a dozen stories about Linux actually making a mark on the world. So good, your BSD survived a forkbomb. But why exactly was the author running a Linux desktop, then, if BSD is so much better?"
You're clearly unaware of the various marks BSD has made. Essentially every OS out there today runs BSD code, including Linux and even Windows, and the BSDs continue to break new ground in ways that are frankly too numerous to go over here.
Also, most of the "difference" Linux has made is actually software that will run on BSD as well.
Re:Wrong attitude. (Score:2, Insightful)
Limits should be set substantially higher than what a normal user will ever need. A normal user should never even bump into the limits.
Now, if you ARE a power user, then you should know how to change those limits. This gives you the best of all possible worlds. You get protection, but you can CHANGE THE DEFAULTS if you need to! Simple. If you are setting up apache, you should know enough about it to set reasonable limits.
Let me put it another way. What would you think if I said "ZoneAlarm doesn't know what ports I need open. It should open up ALL ports by default. I will shut down the ones that I don't need." That would be a pretty useless firewall.
history (Score:1, Insightful)
But what those of us who have been around for some time are amazed by is that we had this exact same discussion back pre-1990, and we thought it had been settled then!
Not only have people not learned since then, we are going in retrograde cycles, revisiting the f**ing stuff we thought had been fixed.
What do I know? I run almost no desktops, only servers, and I want limits on by default; and I want back all the lost time in my life that I've spent turning crap off and tightening up after I've installed.
Every vendor Unix sucks, and Linuxes are now vendor Unixes, and they all suck for the same old old old reasons. And it appears every generation we have to fight the same battles all over again, against the same morons. At least the BSDs have clue, and actually show improvement over time.
Re:Speaking of insecure.... (Score:1, Insightful)
I mean, really, "Linux lets me shoot myself in the foot". Good to know. DON'T SHOOT YOURSELF IN THE FOOT.
Oh, but wait.... (Score:2, Insightful)
EXACTLY why it'll be a very long time before users will be comfortable.
But wait....Linux is more secure than Windows....riiiiiiight..../sarcasm off
Re:Sheesh, it's a fork bomb (Score:5, Insightful)
Re:Yawn. (Score:3, Insightful)
Nobody in their right mind should be defending a lack of stability in Linux. Those who want to push the box to its absolute limits (and risk a hard lockup) can find out how to activate the options that allow this. It should NEVER be the default to have options active that compromise the stability of a machine so easily. Desktops are just as important as servers in this regard, as far as I'm concerned.
Linux is supposed to be better than Windows, remember? Raising a valid objection to a default setting capable of causing (or allowing) real-world problems on everyday computer systems is not "running around screaming".
Oh, and by the way, not everyone on Earth is a trained Linux system administrator, and being a Linux admin isn't a prerequisite to being in one's right mind. There are plenty of perfectly intelligent people who have no idea how to "lock down their box" and never will, just like most people have no idea how to break down a carburetor or build a nuclear reactor. Some people have other things to do. If Linux distros want to market to the general public, they should be safe for the general public to use. We're talking about DEFAULTS here, which apply to every person running Linux, not just servers run by Linux gurus. Your statement comes across like saying people who aren't expert car mechanics shouldn't be allowed on public roadways. In other words, a little elitist, don't you think?
Install media + physical access + clue = ownership (Score:3, Insightful)
Technology is no substitute for physical security, as Kevin Kline's character in "A Fish Called Wanda" showed by getting a gun past a metal detector in the common airport-security situation of watching the luggage and not the person.
If you can boot a machine with your own media or pull it apart and no-one notices, then you can have full control - whether it is win2k or whatever.
As for network security, virtual machines with quotas have to be the way to go so long as there are loonies with root passwords that enable telnet, install a compiler and use "coffee" as a password.