Some Linux Distros Found Vulnerable By Default
TuringTest writes "Security Focus carries an article about a security compromise found on several major distros due to bad default settings in the Linux kernel. 'It's a sad day when an ancient fork bomb attack can still take down most of the latest Linux distributions,' says the writer. The attack was performed by spawning lots of processes from a normal user shell. It is interesting to note that Debian was not among the distros that fell to the attack. The writer also praises the OpenBSD policy of Secure by Default."
Fork vulnerability (Score:5, Funny)
Thank god I use Windows (Score:5, Funny)
Re:Thank god I use Windows (Score:5, Funny)
Re:Thank god I use Windows (Score:5, Funny)
Re:Thank god I use Windows (Score:5, Funny)
Possible obvious responses:
Only 3 trojans? I'm a self-replicating-trojan author you insensitive clod.
So I can only run three instances of Internet Explorer at once?
Customer: Whenever I try to start a second program, it gives me an error...
Techie: Yeah, you can't run Gator, Precision Time, Weatherbug AND something else... you've gotta turn something off.
Customer: (incredulous) WHAT!!?? I NEED TO KNOW WHAT TIME IT IS, SAVE MY PASSWORDS, AND KNOW WHAT THE WEATHER IS LIKE OUTSIDE.
Techie: (mutes customer) "Fucking Chuck Norris, all those goddamn ninjas had to go after the pirates."
Re:Thank god I use Windows (Score:3, Interesting)
cmd
cmd
cmd
cmd
cmd
cmd
cmd
cmd
cmd
cmd
cmd
kill.bat
kill.bat
kill.bat
I can drop my Athlon 2000 512MB system in, say, oh, 10 seconds. Increase the number of cmd's and kill.bat's for added effect.
Re:Thank god I use Windows (Score:5, Informative)
Re:Thank god I use Windows (Score:3, Interesting)
start cmd
start cmd
start cmd
start cmd
start cmd
start cmd
start cmd
start cmd
start cmd
start kill.bat
start kill.bat
start kill.bat
start kill.bat
start kill.bat
start kill.bat
Try it! My machine froze in about 15 seconds.
Re:Thank god I use Windows (Score:4, Informative)
That won't work, because "cmd" runs the new process and then waits for it to complete. So you'll wind up with new CMDs every time you type "EXIT" but that's about it.
You want something like:
CMD /K KILL.BAT
KILL.BAT
Which, on Windows XP at least, also didn't work. I've got it running in the background right now, so if you see this comment, it failed to bring my system down.
Sheesh, it's a fork bomb (Score:4, Insightful)
And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).
Re:Sheesh, it's a fork bomb (Score:3, Insightful)
Hope you're not administrating any multi-user Linux boxes then, since in Linux, the quotas only deal with drive space
Re:Sheesh, it's a fork bomb (Score:4, Informative)
Re:Sheesh, it's a fork bomb (Score:3, Interesting)
Wrong attitude. (Score:5, Insightful)
Even on a single-user desktop machine, it's nice to have limits so shitty software can't take down my entire machine. With limits I can just log in on another terminal and kill the offending program; without limits you get to reboot and lose any work you were doing.
Re:Wrong attitude. (Score:3, Insightful)
i) My hardware
ii) How many users I want
iii) What programs / services will be running,
how in the name of crikey are they supposed to determine what a "Reasonable limit" would be?
No, you are treating it as a panacea. (Score:5, Insightful)
If you are running a server that needs hundreds of Apache processes running, then you know that and can raise the limit. Someone who is new to Linux won't need that, and won't know how to set up limits for themselves. So you make the machine secure by default, and allow advanced users with advanced needs to tweak things as they need.
The best thing I can think of to illustrate the point to you is your own Apache example. By default Apache won't let you have more than 150 clients connected. This is a sane default to protect against resource exhaustion. If you need more than that, you can raise it yourself. People have some protection by default, but advanced users can customize the settings for their needs.
I cannot believe that in 2005 I am arguing with someone who thinks secure by default is a bad idea because it might inconvenience you.
Re:Wrong attitude. (Score:3, Informative)
Re:Sheesh, it's a fork bomb (Score:4, Insightful)
Now that it's been found in Linux, it's a "feature."
Come on, I love Linux, but the hypocrisy is a bit much.
Re:Sheesh, it's a fork bomb (Score:5, Insightful)
Re:Sheesh, it's a fork bomb (Score:3, Insightful)
#include <unistd.h>

int main() {
    while (1)
        fork();
    return 0;
}
Compiling and running it would hang the box. You could ping the system, but nothing else would work.
Ultimately, I would have to switch the box off and on again. And I remember thinking that this was a bug.
A user should be allowed to do whatever he/she wants. But if the system becomes unusable, surely it is a bug.
Re:Sheesh, it's a fork bomb (Score:3, Funny)
int main() {
    while (1)
        fork();
    return 0;
}
On a modern Unix/Unix-like system, you often have Perl. Save yourself the effort of compiling:
perl -e 'while(1){fork()}'
One thing I always liked to do was run this for about a minute, hit Ctrl-C, and see how long it takes the kernel to reap all the child processes and return the system to normal. It usually takes anywhere from 30 seconds to a couple of minutes before the system becomes responsive again.
(It *is* a great way to get impressive lo
Re:Sheesh, it's a fork bomb (Score:3, Interesting)
Fail - The OS grinds to a halt and after waiting five minutes, there is no recourse other than cutting power and rebooting.
Pass (one/a) - The OS grinds, but the user can navigate to a feature (console, Control-ESC in KDE, etc) that allows the user to quickly kill the offending process(es).
Pass (one/b) - The OS grinds or not, but regardless automatically kills the offending process. I consider this to be a worse
Re:Sheesh, it's a fork bomb (Score:4, Insightful)
Sorry, but the ability of a mail reader to automagically run as many programs as it likes is a feature, not a bug.
The point being that while this may, in some rare cases, be desirable, it shouldn't be the default setting, but rather something that the administrator has to enable for that rare user for whom it is deemed both necessary and desirable.
"Able" and "capable of" are not the same thing.
It shouldn't be the responsibility of the admin to turn on every possible security feature, but rather to turn off only those ones he deems gets in the way of the functioning of his system.
It's exactly this lackadaisical approach to security that has made Windows the hellhole that it is. It certainly puts money in my pocket trying to fix it all, over and over and over again, but I'd far rather be spending my time and earning my money doing something useful.
Like computing.
KFG
Re:Sheesh, it's a fork bomb (Score:5, Interesting)
*shuffles nervously* So, out of curiosity, for my... err... desktop... how do I stop this exactly?
Re:Sheesh, it's a fork bomb (Score:5, Informative)
Specifically ulimit -H -u <number> in their startup file.
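The parent's suggestion can be sketched in a couple of bash lines; the 1024/4096 figures are illustrative assumptions of mine, not values from the article:

```shell
#!/bin/bash
# Cap the number of processes for this shell and everything it spawns.
# Set the soft limit first: it must never exceed the hard limit.
ulimit -S -u 1024   # soft limit: what fork() is actually checked against
ulimit -H -u 4096   # hard limit: the user cannot raise it back past this
echo "soft=$(ulimit -S -u) hard=$(ulimit -H -u)"
```

Dropped into ~/.bash_profile or /etc/profile, this takes effect at login; a runaway fork bomb then dies with "fork: Resource temporarily unavailable" instead of taking the box down.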
Re:Sheesh, it's a fork bomb (Score:3, Interesting)
How? I thought by default that you have to be root to change the ulimit (either up or down)
I tried the :(){ :|:& };: test on Slackware 10, and it failed - load rose for about 30 seconds, then went back to normal - th
Re:Sheesh, it's a fork bomb (Score:3, Funny)
Or you can switch back to MS-DOS. Just one process; ultimate security!
Not worth the risk. (Score:5, Insightful)
Right, it's a feature, but the question isn't whether it should ever be allowed, but what the default setting should be. I think the article made a pretty good case that default should be no.
And if you are administrating a true multi-user old-style-Unix type server, you should know enough to stop people fork bombing you (i.e. quotas).
First, I think a lot of Unix people would be shocked, as the writer was, to find that this is the default. Second, that basically means that anyone who successfully hacks into a user account can take the machine down. That applies to your desktop machine, not just "old-style" Unix-type servers. Third, you mention the relative scarcity of old-style servers these days - they're still more common than a user who needs to run an INFINITE number of programs. Even capping somewhere in the thousands would work, keeping anyone from being hampered in their work.
Basically, this is a case of idea vs. reality. You want the IDEA that you can run as many programs as you want, though you'll never need to. So in REALITY, a sane cap never hurts you. However, a lack of a cap provides very REAL security problems, either from a user or from someone who manages to hack a user account. Again, you really don't want EVERY userland exploit to lead to a kernel takedown, do you?
We're talking DEFAULTS (Score:5, Insightful)
Just how many regular users expect to run 20000 processes at once? (Or even 200?) When that happens it's almost always caused by a bug (or malicious activity). Right now, I have 50 user processes running. I'm a power user, but I'd probably never get blocked by a limit of 1000 unless I was doing something really weird -- and something that weird should come with instructions on modifying the kernel settings.
Yes, it should always remain possible to set up your system so that you can run massive numbers of processes and/or threads, but the default should be to keep numbers to a dull roar in favour of system stability. People whose needs are such that they actually and legitimately want to fork massive numbers of processes are also the kinds of people who wouldn't have a hard time figuring out how to change the kernel settings to allow it.
As such, the default should err on the side of security, but allow a knowledgable user to do whatever the heck he wants.
Thing is, though, that local resource-exhaustion limits are difficult to set. You want to allow a user relatively free rein -- even to the point of stressing the system -- but still keep enough of a reserve so that an admin can log in and shut down a user who's gone overboard. You also want to set a limit that will be reasonable 5 years down the road when processors are 10 times as fast (and/or 20-way SMP is standard issue).
Something to note here in Linux's favour: even though the forkbomb brought the system to its knees, it stayed up. Although it might have taken half an hour to do, an admin might actually have been able to log in and kill the offending process.
Grep Bomb (Score:5, Interesting)
So what would a good limit to the number of processes spawned be?
I mean, who can say what is good for everyone?
Saying that, if you think the fork bomb is good, grep bombs are more fun and particularly good for silencing the mass of Quake 3 players in an undergraduate lab:
'grep foo
Oh hang on, did I just discover a new exploit?
Re:Grep Bomb (try it in freebsd) (Score:5, Informative)
So, try this in FreeBSD and be amazed; now try it in any 2.4 or 2.6 Linux kernel and be disgusted.
Problem with defaults, not kernel (Score:3, Insightful)
The default for memory is unlimited, which does indeed create a DoS "attack" for the "grep bomb" and other inadvertent application bugs.
This is a case of bad defaults, not a kernel problem. I recommend a max physical memory of no more than 1/4 ph
Re:Grep Bomb (try it in freebsd) (Score:3, Interesting)
grep:
I don't think they're killed automatically. They seem to be running out of memory.
Of course, the same thing on my OpenBSD box doesn't run out of memory... Either the GNU grep has a memory leak or the BSD grep has a check for something (long lines?).
Re:Grep Bomb (Score:3, Informative)
This may be a Linux-only issue. FreeBSD's grep exits almost immediately, but SuSE certainly spins.
I find this interesting because BSD's grep claims to be GNU grep version 2.5.1, which is the same version as on my SuSE installation.
Perhaps it's a difference in the way
Yawn. (Score:3, Insightful)
Running around screaming "FORKBOMB! FORKBOMB! The sky's falling in!" seems to be a common pattern every few years. If you know what you're doing, it's trivial to prevent and if you don't know what you're doing, why are you running a public box?
Re:Yawn. (Score:4, Insightful)
The author is not "Running around screaming...", he is simply very surprised that a local user can exhaust a system so easily. Maybe every single admin should think of all n possible security problems every single time they take a box live, but people are human.
Which one is worse: limits in place by default so that an admin needs to know how to raise them when necessary and the forkbomb would not work, or no limits in place and having to know to set them or else the box can be brought to its knees? Secure by default or free for all.
I suppose you think every default install should also have telnetd enabled, because any admin with half a brain should know how to turn that off? Point is, admins are fallible; the default should be the lower total risk/cost option. I think which one that is is clear here.
Re:Yawn. (Score:3, Insightful)
Not your usual vulnerability (Score:5, Informative)
Re:Not your usual vulnerability (Score:2)
However, I could see some poorly written loops and the first experiments with fork() bringing down the house. But none of us would do that, would we....
Re:Not your usual vulnerability (Score:3, Insightful)
Retarded (Score:4, Interesting)
Sure, maybe if you're running a server that allows remote logins, you want to restrict how many processes a user can run. But as a single-user system, I want to be able to run as many processes as I choose, not be restricted by the distribution author's ideas of what's good for me.
Re:Retarded (Score:5, Informative)
Re:Retarded (Score:2, Insightful)
Yes, and? I don't care about fork bombs, since I don't run them on my PC... being able to run as many processes as I choose on that PC is a feature, not a flaw. I do care about having scumware remotely installed on my PC through security holes in applications and the operating system, which is a flaw, not a feature.
Seriously, if you're letting people log onto your PC and run fork bombs, you have far greater problems than a lack of r
Re:Retarded (Score:5, Insightful)
Re:Retarded (Score:3, Insightful)
A fork bomb doesn't necessarily have to be due to a purposeful attack. A software bug can easily cause a fork bomb by going into an endless loop launching new processes. Should this take your system completely down?
Admittedly, you probably shouldn't be seeing such a serious bug in any release software. But what if you're a developer, and have
YOU may not run them... (Score:4, Insightful)
Look, you seriously misunderstand something here. Run a server long enough and it becomes very likely that, even with the latest patches, you will get attacked. If someone breaks into your box, exactly how much power do you want them to have?
The ability to bring the machine to a screeching halt with an attack that dates back to the Land Before Time is not a feature! It is a security hole, and it's every bit as important to fix as your externally visible holes.
Because, one of these days some cracker is going to get the drop on your box. You'd better hope your box is ready for that.
-5 Advocacy (Score:3, Insightful)
Re:Retarded (Score:2)
Re:Retarded (Score:5, Insightful)
Had I not considered local exploits important, I'd have had one nicely hacked box.
Debian not vulnerable? (Score:5, Interesting)
In case anyone is interested, here's the obfuscated fork bomb:
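The bomb itself got eaten by the quoting here; judging from the fragments elsewhere in the thread, it is presumably the well-known bash one-liner `:(){ :|:& };:`. Below is the same thing defanged with a readable function name and the final invocation commented out - do not actually run it on a box without process limits:

```shell
#!/bin/bash
# The obfuscated original defines a function named ":" that pipes itself
# into itself in the background, then calls it. Readable version:
bomb() {
  bomb | bomb &   # each call spawns two more copies, backgrounded
}
# bomb            # <- the trailing ":" in the one-liner; left commented out
```

With a sane ulimit -u in place, the cascade simply stops with "fork: Resource temporarily unavailable" once the cap is reached.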
Running bash then :p (Score:4, Informative)
I recognise that one... which is always good
just don't leave your box unlocked and have some "funny" person drop it in your
Re:Debian not vulnerable? (Score:3, Informative)
And yes, my Debian box also fell to that the first time I ran it.
I put that on the wall of the CS computer lab here for fun, I don't know how many poor souls ran it.
Re:Debian not vulnerable? (Score:3, Funny)
mark@stewie:~$ w
11:11:04 up 216 days, 19:50, 2 users, load average: 258.41, 767.84, 339.94
Not a vulnerability. (Score:5, Insightful)
It must be a slow day on
Re:Not a vulnerability. (Score:3, Insightful)
Eh? Most modern UNIX systems let you put hard limits on all the collective ways that users can consume resources, including # of processes, disk space, real/virtual memory, CPU time, etc. Any administrator who is responsible for a multi-user system should have those set to "reasonable" values, and no individual user (except for the administrator, of course) would be abl
Re:Not a vulnerability. (Score:3, Insightful)
That would be caught by the limits on virtual memory usage. As I said, what resource are you thinking of that a decent system administrator couldn't limit to prevent a normal user from exhausting resources?
Re:Not a vulnerability. (Score:3, Interesting)
And of course, shell access is so easy to get (Score:5, Insightful)
If a person has enough access to the machine to be able to "forkbomb" it, then there are plenty of other nasty things they could do to it.
"Secure By Default"? (Score:3, Interesting)
I'm all for making special install kernels and distros "out of the box" as hardened as possible. I would love to see many distros do a "paranoid" configuration. There are plenty of things OpenBSD does right, but that does not excuse OpenBSD. Just like Linux and every other operating system out there, they can still strive to do better.
Are you on drugs? (Score:2, Insightful)
Re:"Secure By Default"? (Score:5, Interesting)
Taking away the ftpd binary wouldn't stop the user from doing exactly the same thing by some other means. For example, they could simply download the source and compile a new one by themselves. Or use Perl. Or compile a binary somewhere else and download that.
New Plug Vulnerability found! (Score:5, Funny)
And in other news... (Score:3, Funny)
So what? Publish the vulnerabilities, patch them, move on. Sheesh..
My God, the hypocrisy! (Score:5, Insightful)
Here is an issue that can be exploited remotely with only a user account.
Re:My God, the hypocrisy! (Score:3, Insightful)
This is a fork bomb (a DoS technique), not 100% access. With this, all your secret files remain safe.
Re:My God, the hypocrisy! (Score:3, Insightful)
Install media + physical access + clue = ownership (Score:3, Insightful)
You can even get into a headless Solaris box with an install CD and a serial terminal. You can then blank the root password and boot the system normally - you then have full access. Any machine that people have full physical access to is vulnerable to those people - the most locked-down box is still vulnerable to someone pulling out the drive with its root partition, mounting it, and editing password files.
Technology is no substitute for physical sec
Reminds me of DoS: Pingfork! (Score:5, Interesting)
In pseudocode:
while (true) {
ping(target)
fork()
}
I seriously thought of posting this to a few script kiddie sites, so the kiddies could crash themselves long before the pinging does any damage
--buddy
Re:Reminds me of DoS: Pingfork! (Score:5, Funny)
OMFG, not a user shell! (Score:2)
Yes, it may be sad to find - but honestly people, local shell exploits exist 'out of the box' - period. It's *pretty much* unavoidable even after proper sandboxes and restrictions have been configured.
And, as a Debian user - I am both insulted and disgusted that it was arbitrarily singled out, I assume this was because of i
another way to bring a system to its knees (Score:3, Informative)
Re:another way to bring a system to its knees (Score:5, Informative)
That won't work on modern systems, or systems with a lot of virtual memory available (lots of RAM or large swap).
A modern OS will not actually commit memory until it is actually used, and while malloc() involves some bookkeeping, the bookkeeping is minimal. It's quite likely you'll run out of process address space (2GB or 3GB, depending on settings, on a 32-bit machine) before the system starts to strain. On Linux, recent kernels will kill processes that start hogging RAM when free memory falls below the low-water mark. And each malloc() really allocates 8/16/32 bytes of RAM even for a 1-byte allocation.
Isn't it friggin' ironic (Score:5, Insightful)
Maybe, and here's a thought, just maybe, it's wise to take a decent, stable distro and perfect it, instead of taking a distro and submerging it in a state of perpetual flux with constant updates.
Just a thought. I might be biased because it's a Debian-based distro that finally put a working Linux on my laptop. But you know what? Every now and then the bias is there for a reason...
Silly exploit (Score:5, Insightful)
Most Linux systems are used as desktops; if you use them as a server, you don't use the defaults. Now, a user being able to crash his own system is nothing new. It ain't nice, but as long as it is the user doing it, then no problem. Now, if this fork bomb could be used to make Apache explode and bring down the system, THAT would be a boo-boo.
Ideally, yes, the system should not do things that bring it crashing down, but this is close to blaming a car for allowing me to plow into a wall. Not sure if I want a car/computer telling me what I can and cannot do.
As to how to set the limits on the number of forks: maybe I've got this completely wrong, but could it be that this depends entirely on your hardware? Perhaps the latest IBM mainframe can handle a few more than an ancient 386? How the hell is the distro supposed to know what I've got?
Security is other people doing stuff on my computer that I don't want and or know about. Me screwing stuff up is my business.
BSD is very solid; this is known. It is also known that BSD has been around since long before Linux, but it has been sucking Linux's exhaust fumes ever since Linux arrived. For every story about how much more secure BSD is, there are a dozen stories about Linux actually making a mark on the world. So good, your BSD survived a forkbomb. But why exactly was the author running a Linux desktop, then, if BSD is so much better?
Another non-story on /. Is the internet going the way of TV?
Re:Silly exploit (Score:3, Insightful)
man 2 setrlimit
It's part of POSIX. It should work th
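To illustrate the per-process nature of these limits: an rlimit lowered in a subshell is inherited by the subshell's children but never propagates back to the parent, so you can experiment safely (64 is an arbitrary example value of mine):

```shell
#!/bin/bash
# Lower the soft process limit inside a subshell only.
(
  ulimit -S -u 64                  # applies to this subshell and its children
  echo "inside: $(ulimit -S -u)"
)
echo "parent shell keeps its own limit: $(ulimit -S -u)"
```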
Default kernel in Gentoo? (Score:5, Informative)
Choose your kernel version, patch set, etc. No defaults. I guess he has never actually installed Gentoo himself. The author should get a clue about the distros he's talking about before making claims about their security.
I felt dumber for reading that article. (Score:3, Insightful)
I guess my point is that certain limits are by design, and it's up to operator training and guidance on how to work within those limits to make sure users don't exceed them.
He has missed the point... (Score:3, Insightful)
I run several labs at a university. I don't even bother to lock the Linux side of the machines down much past a base install. My users have never tried to cause problems. I don't even use quota.
If someone ever does cause a problem, I'll take the lab down (causing a pretty good backlash from their fellow grad students) and fix it.
In the meantime, I like the fact that when someone asks me "how much of X can I use?" I say: as much as you need, so long as it doesn't cause a problem. I'm never going to get mad if they run a large job that slows the machine down. I can always kill it and ask them to run it on one of the dedicated computers.
Point is, why limit something that is only an issue if you are working against your users, instead of for them? In 99% of the installs that is the way it is (or should be).
I remember forkbombing myself (Score:5, Interesting)
while [ 1 ]; do
fubar &
done
Then I did chmod +x fubar, and typed "fubar &"
oops.
The system load started climbing and everyone else on the machine started bitching. I thought it would crash, and went over to the local admin and fessed up. Of course, we were all interested in what would happen. Nobody could get in to kill it, and the processes were spawning so fast that we couldn't catch them. It was taking forever just to log in. But the load leveled off, and it wasn't going to crash. The admin was going to reboot it, but then I said "wait a second!" I went back to my window that was open. Know what I typed?
rm -f fubar
I suppose you could make it more nasty by making the file create a copy of itself named after its process ID, so that you wouldn't be able to remove it so easily.
cp $0 .$$
./.$$ &
This will leave all the process files lying around, but you could code something to remove them. But it gets the point across.
Debian Sarge is vulnerable (Score:4, Informative)
(forkbomb.sh)
#!/bin/bash
while true; do
./forkbomb.sh &
done
Linux ignored a lot of rlimits until recently? (Score:4, Interesting)
I found that some earlier kernels ignored RLIMIT_CPU, RLIMIT_RSS, and RLIMIT_NPROC, and that setting the CPU and RSS limits in Apache was ineffective. This was in the Red Hat 9 / 2.4.20 kernel days. I have not researched this in a year or so. If all this stuff works now, let me know so I can insert a "ulimit -Hu 10" in future startup scripts as a "courtesy" to inattentive programmers.
quick perl fork (Score:4, Informative)
perl -e 'while(1){fork();}'
Course we were running VMware, initially with their very insecure Red Hat 5.2, I think it was..
Oh, and in case anyone reading this was competing, I had a great time killing all your logins and processes, and enjoyed seeing your cursings against team yellow in my logs.. but the perl thing, along with a very small team, took us out completely..
OMG, rm -rf / still works as root too!! (Score:3, Funny)
Why is Linux still vulnerable by default?
Re:In other news... (Score:2, Insightful)
Re:In other news... (Score:2)
Re:In other news... (Score:5, Insightful)
Re:In other news... (Score:4, Informative)
Try ":(){
Re:In other news... (Score:4, Informative)
Re:In other news... (Score:3, Interesting)
Is there a ulimit equivalent for Win32?
Just as an afterthought, why don't people run public Windows servers and allow people to log in, like Unix shell servers?
Speaking of insecure.... (Score:5, Funny)
Re:In other news... (Score:3, Insightful)
Re:In other news... (Score:5, Informative)
If you had bothered to read the thread the article points to, the forkbomb vulnerability wasn't in the kernel per se, but in /etc/security/limits.conf, which on most distros has a bunch of example lines commented out by default.
The kernel can't/shouldn't implement limits that are commented out.
Edit the file(s) to your taste and reboot.
No kernel patching necessary.
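For reference, the uncommented entries end up looking something like this; the numbers are illustrative assumptions of mine, not values from the thread:

```
# /etc/security/limits.conf (read by pam_limits at login)
# <domain>   <type>   <item>   <value>
*            soft     nproc    1024
*            hard     nproc    4096
@students    hard     nproc    256
```

"soft" is what a session starts with; a user can raise his own limit with ulimit up to the "hard" ceiling, but no further.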
Re:How long? (Score:3, Funny)
The study is invalid!!!
Re:How long? (Score:3, Funny)
5 minutes.
Re:Wanna make a bet? (Score:2)
And your proof for this (proof is something you may learn about when you do go away to college) is exactly what? If you are going to take a shot at "comercial" (half decent spelling is another thing that you tend to start using in college) distributions, how about being specific rat
Re:Big Fuss (Score:3, Interesting)
Re:So- (Score:3, Informative)
Re:Thread, not process! (Score:3, Informative)
No, on Unix fork() creates a new process that, usually temporarily, shares a (usually copy-on-write) memory space, file descriptors, and other things with its parent. Threads are created via the pthread_create() call (or thr_create() on Solaris).
Now underneath, some popular OSes implement threads as full processes which happen to share a page table and system resources with their parent process, but you still don't create them with fork().
Re:Run this: (Score:3, Interesting)
3 seconds later, and it's over, with the last two bashes stalled, waiting for Ctrl-C. The same thing happens under SFU, but it displays fewer "Permission denied" messages. The rest of the machine didn't even notice.
That's what job limits [microsoft.com] are for. Shameless plug: get jobprc, a GPL'd command-line job creator (written by me) here [comcast.net].
Re:vulnerable because of ssh, not like cmd (Score:3, Interesting)