Is the Unix Community Worried About Worms?
jaliathus asks: "While the Microsoft side of the computer world works overtime these days to fight worms, virii and other popular afflictions of NT, we in the Linux camp shouldn't be resting *too* much. After all, a worm along the lines of Code Red or Nimda could just as easily strike Linux ... it's as easy as finding a known hole and writing a program that exploits it, scans for more hosts and repeats. The only thing stopping it these days is Linux's smaller marketshare. (Worm propagation is one of those n-squared problems.) Especially if our goals of taking over the computing world are realized, Linux can and will be a prime target for the worm writers. What are we doing about it? Of course, admins should always keep up on the latest patches, but can we do anything about worms in the abstract sense?" Despite the difficulties of starting a worm on a Unix clone, such a feat is still within the realm of possibility. Are there things that the Unix camp can be learning from Code Red and Nimda?
Linux has plenty of marketshare (Score:2, Informative)
What smaller marketshare? Check out the Netcraft [netcraft.com] survey if you don't believe me. I think better programming is the reason we aren't seeing any worms targeted at Linux web servers.
Linux has plenty of resistance (Score:2)
I sincerely doubt we'd see a very infectious worm like NIMDA even if Linux were a very common OS. A NIMDA-style worm that propagates via email clients and web servers faces a bigger uphill battle in the Linux world than in the IIS world. For starters, there are way more semi-incompatible Linux distributions floating around - it wouldn't be uncommon to find a RH 6.x server, would it? There's more variation in web servers, too: Apache, WN, thttpd and others all have a presence. That means that the web server vector has barriers to propagation; one buffer overflow won't cause every web server to become a propagation vector. One IIS buffer overflow caused the Code Red worm. There is more hardware variation: Linux runs on x86, SPARC, MIPS and Alpha CPUs. Shellcode to run on all 4 architectures would be difficult, if not impossible, to write. There are *vastly* more email clients in common use in the Linux world than in the Windows world: mailx, pine, elm, mutt, Netscape Communicator, balsa (?), etc etc. These various email clients don't share a common scripting language, address book, or even a common format for saved mail. Most if not all of them don't "launch" executable attachments. This would lend resistance to the Linux population.
In short, the monoculture of MSFT products (IIS, Outlook, Win32 and x86) is probably at fault for the Code Red, SirCam and NIMDA problem, not mere popularity.
Re:Linux has plenty of marketshare (Score:2)
http://www.netcraft.com/Survey/index-200007.htm
Linux has a slightly higher market share.
Learning from Code Red? (Score:4, Insightful)
THAT was the worm to learn from, not Code Red!
by Robert Morris (Score:2, Informative)
Re:Learning from Code Red? (Score:3, Informative)
1. People using *NIX systems are usually administering servers, or just love computers. The end result is that they're better (not necessarily great) at keeping their machines patched.
2. People using NT/2000 often don't even realize they have exposed ports. The worst of the Code Red/Nimda infections are coming from machines on Cable/DSL...home users who probably don't even know their machine is a server.
3. Maturity. Any given piece of software will mature in features and stability/security. Most often, growth in security is sacrificed for features in commercial software. When software is free there tend to be fewer people trying to add marketing-driven features to a product. Most features come as modules which you must choose to install. With the focus on security, the number of vulnerabilities shrinks until there are virtually none.
4. Development environment. This may not be immediately obvious as a cause, but it is very relevant. IIS is written in C++, and many people think that C++ is better than C. The real truth is that while C++ provides many benefits, it also can make auditing code more difficult. The language contains so many features that it becomes very difficult to trace a path of execution just by looking at some code.
I am sad to admit that every day I write code in C++, using MFC. My conclusion is that development is more difficult on Windows in C++ than on any other platform/language I have used. M$ has an idea of how an application should be laid out that very rarely fits my idea of how an application should be laid out.
Compare Apache with IIS. Apache has been around for quite some time now, it aims to be a decent general use webserver with a useful set of features. Things such as dynamic content and indexing are provided by various modules which communicate through a well-defined API. It's written in nice, linear easy-to-read C.
IIS has been around for a while, but the push is on features and integration with Windows. IIS integrates into many aspects of Windows, and it uses COM for its extensions. Because all COM objects are handled at an OS level, there is much potential for a bad module to blow up the system.
Of course, even the holes in M$ software have patches available long before they become a headline for the day.
Tarpit - May not work next time. (Score:2)
maybe that should be a standard service? add the ports exploited to tarpit.rc ..
of course that wouldn't solve much but it would be something to start with.
You are right - next time, the worm author might do something different just to make sure LaBrea isn't nearly as effective. For instance, by keeping track of how long it's taken to do its job, the worm may just abort the thread if it takes, say, 20 seconds to send over part one of the exploit. LaBrea becomes a small slowdown then.
There's no 'real' answer to stopping worms and the like, except for administrator vigilance. No matter what OS you use as a server platform (or a mix of things, like my network), ya gotta be quick with the patches and vigilant with security.
As for reversing attacks, etc. - there are some severe problems there. You are attacking someone else's hardware - even if the script kiddie is controlling it, they may be on someone else's machine doing it remotely. Screw up that person's box, and you might have a problem. (Of course, there are other ethical issues here - I'd really like to just view it all as 'self defense' when you throw an attack back at an attacker online. Unluckily, there's no real precedent for that, and I'm not sure there should be!)
something to remember (Score:2, Informative)
Re:something to remember (Score:3, Interesting)
You should know (remember?) that the first worm ever written infected many *NIX systems
The First worm ever written?
Well, let me see: the term worm was invented by John Brunner in his classic book "The Shockwave Rider".
And the guys at Xerox Parc wrote some network based programs... which they called worms after the John Brunner usage.
And WAY later, Robert Tappan Morris Jr. wrote the Internet worm.
So, no. The first worms didn't run on Unix.
Incidentally, at least one of the xerox worms got out of hand and crashed a lot of machines at PARC.
Z.
Re:something to remember (Score:2)
http://www.software.com.pl/newarchive/misc/Worm/d
Pretty well written article with more detail than the above post (same info though).
Holding back the worm (Score:4, Funny)
That, and the fact that MOST *nix users/admins tend to be a bunch of computer dorks, like us, and will be sure to stay up to date on security concerns, or at the very least, clean their system of the worm in a timely fashion.
Re:Holding back the worm (Score:2)
Monoculture (Score:3, Insightful)
Also, it's my experience that (for now) people who set up Linux to run on the net are a little bit more clueful than NT administrators. NT seems to encourage the idea that any moron can run it because it's point and click. This isn't true; it takes more work to effectively admin an NT box than a Linux box.
There have and will continue to be worms. Worms are most successful at any point of monoculture. (sendmail; bind; IIS) The solution, then, is not dominance... but diversity.
Apt and cron (Score:4, Informative)
Patch the holes that are inevitable. Patch them early.
Auto-update / auto-patching BAD!!! (Score:2)
Or any other form of auto-updater. Remember, Code Red and Nimda used holes that were patched months ago.
No way - this is a very bad solution for security. While at first this would seem to be an absolutely good idea, in reality there's a number of really nasty security problems here.
First, it encourages you to be lax about security. I mean, if the auto-updater is handling the job, you probably won't check it out too closely since it's not necessary. But with patches sometimes come new holes, and new procedures for properly securing a box. These are jobs that require human intervention.
Second, a new class of exploit comes along - using whatever procedure you can make work, upload a new patch to the ftp server with some less-than-obvious holes in it. Sure, someone is going to spot it - maybe in hours, maybe in a couple of days, but it WILL get spotted. As admin, will you know if your box was one that grabbed the bad stuff? (Note, I said upload it to the ftp server; that's not the only exploit - various redirection techniques could be used too.) If tons of people moved to the auto-update idea, there'd be the potential for a lot of exploited boxen quickly.
And third, there's the issue of reviewing patches / updates. Sure, lots of people have viewed them. If it's security related, you should be viewing them too, or at minimum the 'readme' or equivalent.
Fourth, what update interval are you planning? Once a month? Once a week? Daily? If it's less than daily, then you've got a problem - if you do grab a buggy version, that gives someone time to attack. And if it's a week before you check again, that means they've got plenty of time to use your machine as a base to launch more attacks from. Plus, once they have the machine, you may only THINK you are still doing updates ;-) (It's always better from the attacker's standpoint to make things seem just fine and dandy :-P )
I'm sure there's a lot more that could be added to this list - this is just the problems off the top of my head. But those problems alone are enough to really screw things up.
From earlier in the day... (Score:2)
This is a pretty pathetic ask/.
I'd like to see 'White Hat' worms... (Score:2)
Unfortunately, doing constructive work (i.e., fixing the security hole) is always more difficult than doing destructive work (e.g., rm -rf /). But worm/virus writers seem to have plenty of time on their hands...
Re:I'd like to see 'White Hat' worms... (Score:2)
Except for the uninstalling part :-), it's been done. Try a google search for "cheese worm".
Re:I'd like to see 'White Hat' worms... (Score:4, Funny)
Then you get black worms that exploit vulnerabilities in white worms, white worms that search for black worms and destroy them, black worms that hunt black-hunting white worms, grey worms that fix your security hole but extract a "payment" in the process, grey worms masquerading as white worms, black worms masquerading as white worms, white worms that inadvertently do damage while trying to do good, black worms that exploit new holes left by those white worms, and pretty soon you've lost track of what worms you thought you had, what worms the white worms told you you had, what the grey worms have taken, and what the black worms have done.
It's much better to fix your own security problems, and not depend on some worm that says it's white.
Re:I'd like to see 'White Hat' worms... (Score:2)
I think something like this may be inevitable. You may even get parasites on the worms. So long as they don't turn out like the viruses in Hyperion...
Re:I'd like to see 'White Hat' worms... (Score:2)
Of course. However, we all pay the price (direct, in network slowdowns, and indirect, in the threat of government regulation) for sites which do not fix their own security problems. How should we respond?
An instructive analogy: Suppose you notice that your neighbor's house is on fire. This is obviously a big problem for your neighbor, but it's also a big potential problem for you -- left uncontrolled, the fire could easily spread to your house. You try to alert your neighbor, but get no response. Does it make sense for you to call 911? Perhaps even use your own garden hose to try to control the fire? Of course; anyone would do this, and nobody would say you were doing anything wrong.
On an internet thriving with worms of all greyscale values, properly administered sites won't need to worry about them, and improperly administered sites will hopefully get dogpiled so quickly that they'll either be forcibly patched or crash in minutes. When the vast majority of sites are being properly administered, all flavors of worm will starve for lack of prey.
Re:I'd like to see 'White Hat' worms... (Score:2)
These color distinctions come not from skin tone, but from the color of hat these hackers wear.
Re:I'd like to see 'White Hat' worms... (Score:2)
And apparently, this factual informative comment "violated the postercomment compression filter.", whatever the fuck that is.
Smaller market share? (Score:2)
I thought Apache had a majority share of the web server market - a market that has been hit by worms, yet the worm writers usually choose IIS despite its smaller market share.
It could be because IIS has more exploits...
Wrong! (Score:2)
Many people run IIS without knowing it, so i think there are much more vulnerable machines out there than just the webservers.
Granted, IIS probably does have more exploits, but the real problem is that Windows users usually aren't on top of patching them. There are plenty of exploits out there that target Linux, but there aren't as many incidents, because admins patch regularly and because of the smaller market share.
Captain_Frisk
Re:Wrong! (Score:2)
We've been over this before. [slashdot.org] Windows 2000 Professional never installs IIS by default. It must be explicitly installed by the user. And it's not in an obvious place, either. So if the average user doesn't know where to look, it won't happen by accident.
Re:Smaller market share? (Score:2)
Think about it. Microsoft's entire appeal is based on ease-of-use; zero administration, wizards, automatically opening your attachments, and so on and so forth. This philosophy sells their servers and server software as well. So people who are used to MS products think they can set up a server, turn it on, and pretty much forget about it.
The MCSE graduates know better, of course, but they're so expensive to hire. Meanwhile, Linux and *nix in general almost always require a degree of problem-solving ability in order to set them up and get them working. This is part of the reason why they have a smaller market share. However, it also means that most people who take the time to install *nix and get it working on a network (not all, but most) are also going to be vigilant about keeping things patched and secure.
Maybe that's the nature of it, maybe not. But I'm convinced that there's something in the culture of *nix that drives its adopters to keep things patched, updated and secure, while the culture of MS users is to buy it, install it and let it do your work for you.
Re:Smaller market share? (Score:2)
You are right on and ought to be modded up.
Following your line further, the real danger is that as *nix attempts to become more popular by becoming "easier to use", it will succumb to some of the same pitfalls that plague MS.
I have to hope that we can prove the old adage wrong - you know the one - every programmer does - I forget who said it first
Re:Smaller market share? (Score:2)
Apache's market share includes the Apache installations running on Solaris, AIX and even Windows, not just Linux.
Re:Smaller market share? (Score:2)
Right. And "worms, virii and other popular afflictions of NT" is wrong too. Most of the worms and virii have been infecting Outlook and IIS, not Windows itself. So too on the Unix side. And the vast majority of Apache installations are running on Unix flavors.
My comment is a fair one, even if you do have a lower uid than mine.
Ignorant Question: (Score:3)
It seems like every time you get input from the outside, you would only accept it in segments of a known length, and whatever was longer would just wait for the next "get" or whatever. At least this is the case in my (obviously limited) socket programming experience. So when some program is hit with a buffer overflow error, does the team of programmers smack their collective head and say "d'oh"?
Re:Ignorant Question: (Score:5, Informative)
Instead of using gets(), you use fgets(). Use strncpy() instead of strcpy(). And so forth. The only real difference between these calls is that the "safer" one lets you specify a maximum number of bytes to copy. So you know you can't copy a string that's larger than your destination buffer (and you use sizeof() or #define's to ensure you have the proper buffer size) and thus start overwriting executable code.
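A minimal sketch of the difference (the buffer sizes and names here are just for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[64];
    char copy[16];

    /* gets(line) has no idea how big 'line' is;
       fgets() stops after sizeof line - 1 bytes and NUL-terminates. */
    if (fgets(line, sizeof line, stdin) == NULL)
        return 1;

    /* strcpy(copy, line) would overflow if line is long;
       strncpy() copies at most sizeof copy - 1 bytes here, but note
       it doesn't terminate on truncation, so do that by hand. */
    strncpy(copy, line, sizeof copy - 1);
    copy[sizeof copy - 1] = '\0';

    printf("%s", copy);
    return 0;
}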
This is all high-school-level programming. Anyone who still gets it wrong deserves to be strung up for professional negligence. As many others point out, one of the first large distributed cases of a buffer overrun exploit was 13 years ago. So it's not like this is a new thing.
And yes, there are probably some Unix programs running around with buffer overrun exploits in them. They've been largely weeded out over time though and, to some extent, Unix's permission scheme avoids most serious issues, at least when services are installed properly.
The real key difference between Unix and Windows though is very, very deep assumptions. Unix assumes that the user cannot be trusted (thou shalt not run as root), nor can any external input. Windows assumes that everyone will play nice. Since the reality of the world is that there is a significant fraction of people who will NOT "play nice" it invalidates coding under that assumption. Thus the repeated security exploits using Microsoft tools and services - which weren't designed from the ground up to distrust the input given to them.
The plus side of "play nice" is that it's faster to code and you can put in features which would never, ever fly otherwise, like automagic remote installation of software. Or executing email attachments automatically. All that stuff that users think is "wow cool nifty" until someone does something they don't like.
Re:Ignorant Question: (Score:2)
Re:Ignorant Question: (Score:2)
Re:Ignorant Question: (Score:2)
It may seem like a trivial problem, but it is actually very hard to solve in practice. The C string API is simply poorly designed -- it is way too easy to mess up. It's not a matter of negligence: people are human and make mistakes; thinking good programmers are exempt is pure hubris.
The real solution is to expect, and learn to live with buggy code.
Remotely accessible programs should run in chroot jails with the bare minimum of capabilities (a sketch follows below).
Languages should make it harder to screw up. Less error prone string handling in languages such as perl, Java and even C++, are helping. Java has even more potential with its untrusted code security model.
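On the chroot-jail point, here's a minimal sketch of what a network daemon can do at startup, assuming it is launched as root; the jail path and the unprivileged uid/gid (65534, i.e. nobody on many systems) are only placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *jail = "/var/empty";   /* assumed, mostly-empty directory */

    /* Confine the process to the jail (needs root), then make sure
       the working directory is inside it. */
    if (chroot(jail) != 0 || chdir("/") != 0) {
        perror("chroot");
        return EXIT_FAILURE;
    }

    /* Shed root for good: group first, then user. */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return EXIT_FAILURE;
    }

    /* From here on, a compromised process sees only the jail and
       holds only the unprivileged user's rights. */
    return EXIT_SUCCESS;
}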
And yes, there are probably some Unix programs running around with buffer overrun exploits in them.
Undoubtedly. Many more than you'd think. And the vast majority won't ever be found or fixed, because the program is not suid or remotely available.
Unix assumes that the user cannot be trusted
This assumption is broken by suid programs. They say: the user is trusted to run me, but only to do something safe. That makes the implicit assumption that an suid program will only do what it was written to do. Secure systems must ensure that when these programs are inevitably compromised, the damage is contained.
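One way to contain that damage is for the suid program to hand the extra privileges back the moment the single privileged task is done. A rough sketch (for a setuid-root binary; non-root suid binaries need a little more care with the saved set-user-ID):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* ... do the one thing that actually needed the elevated ID here,
       e.g. open a protected file, keeping just the descriptor ... */

    /* Revert to the invoking user's real IDs, group first, then user.
       For a setuid-root binary this drop is permanent. */
    if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
        perror("dropping privileges");
        return EXIT_FAILURE;
    }

    /* From here on, a buffer overrun only buys an attacker the
       invoking user's own rights, not the suid owner's. */
    return EXIT_SUCCESS;
}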
Re:Ignorant Question: (Score:2)
Yes.
My question: isn't it sort of a bug that gets() and strcpy() are still there in the standard C library? I would like, at a minimum, to see these cause a compile-time warning. It will be a long time before we can expunge all calls to these functions, but it might go quicker if we can get the compilers to complain about them.
Has anyone looked at doing this?
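For what it's worth, gcc can already be coaxed into complaining. A minimal sketch, assuming a reasonably recent gcc (this is a gcc extension, not standard C):

#include <stdio.h>
#include <string.h>

/* After the system headers, poison the calls we never want to see;
   any later use of these names is a hard compile-time error. */
#pragma GCC poison gets strcpy sprintf

int main(void)
{
    char buf[16];

    if (fgets(buf, sizeof buf, stdin))    /* fine: bounded */
        fputs(buf, stdout);

    /* strcpy(buf, "boom");   <- would no longer compile: poisoned */
    return 0;
}

If you'd rather have a warning than a hard error, wrapping the old calls in functions marked __attribute__((deprecated)) gets you that on newer gcc, and glibc already emits a link-time warning whenever gets() is used.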
steveha
Re:Ignorant Question: (Score:2)
Sadly, nowadays things are different and we must deal with tiresome security problems all the time. But it was easy to get into the habit of programming in a non-security conscious way, because for many years it really wasn't a problem at all.
The C programming language was very much a part of that ethos. It was simply not designed to consider the buffer overflow problem. The size of buffers was almost never checked in early C programs.
And there are many cases other than the input of text where buffer overflows can occur. For instance, sprintf is a common function used to build up a string from smaller pieces. You use it by saying:
sprintf(destination_string, format, args);
The format determines the way the arguments are put together to create the string. If you have a destination string of 1,000 characters, and the string being built up contains 1,200 characters, you have an overflow.
The solution is to use snprintf, which is the same but includes a limit on the number of characters that are added to the string. But that means that every time you want to build up a string, you have to remember to use snprintf and add the count. If you've been programming "the old way" for a long period of time, it's easy to forget to do this.
The way I work around this problem is by building my own sprintf(), which automatically uses snprintf to build up a string with the maximum buffer size I normally use. So I can program "carelessly" but be protected at the same time.
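A sketch of what such a wrapper might look like; the name and the 1,000-byte limit are just the sort of thing I mean, not a canonical implementation:

#include <stdarg.h>
#include <stdio.h>

#define MAX_STR 1000   /* the "maximum buffer size I normally use" */

/* Like sprintf(), but never writes more than MAX_STR bytes
   (including the terminating NUL) into dest. The caller still has
   to promise that dest really is at least MAX_STR bytes long. */
int bounded_sprintf(char *dest, const char *format, ...)
{
    va_list args;
    int n;

    va_start(args, format);
    n = vsnprintf(dest, MAX_STR, format, args);
    va_end(args);
    return n;
}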
As you can see, it's not just the size of the input string, it's how it is combined with other strings using functions like sprintf() that's the problem. And because it's a big pain to calculate all this out, it's no wonder programmers tended not to do it - until they got persuaded by tiresome security issues, that is.
Hope that helps.
D
Re:Ignorant Question: (Score:2, Interesting)
The problem lies not in the realm of receiving the information, but actually processing it. What do you think happens after you've received all the necessary data chunks for the requested URL? They're put together and treated like a string, then parsed out for various pieces of data (the path to the file being requested, the type of file based on MIME types, any data parameters (passed from a form, for instance), and any other interesting information your server may be looking for). Now, with insecure coding practices, it's very easy to get a buffer overflow simply by doing something as innocuous as a call to sprintf() (because sprintf doesn't do any bounds checking). The really dangerous part, however, is when the target string is on the stack. Now, when that buffer overflows, a carefully constructed overflow string can easily put executable code into the stack and change the return address on the stack to point to the beginning of that executable code. This is sometimes referred to as "smashing the stack". If instead you're dealing with heap-allocated buffers, it's harder to get code executed, but you can still just as easily cause an access violation and kill the server anyway.
I'm not trying to pick on sprintf directly, because there are a ton of other potentially unsafe (any unbounded string operation, for instance) or always unsafe (gets, or any function that expects the input to be formatted in a certain way, etc.) functions that are commonly used. In fact, too many people use these functions without even knowing that they're opening themselves up to major problems.
One way to mitigate the possibility of having a buffer overflow in your application is by always using bounded string ops (snprintf, strncpy, etc.) (note that strncat is a special case, in that the 'n' refers to the number of chars to be appended, not the size of the target buffer). Another way is to simply not use the completely unsafe functions, like *gets(). These won't guarantee that you'll be safe, but it's a start. There are plenty of resources out there [google.com], so if you're interested, I suggest you do some reading.
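To make that strncat() caveat concrete, a tiny sketch (names made up):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[16] = "worm";
    const char *tail = " resistant and then some";

    /* WRONG: strncat(buf, tail, sizeof buf) can still overflow,
       because the 'n' is not "room left in buf". */

    /* Right: pass the space actually remaining, minus 1 for the NUL. */
    strncat(buf, tail, sizeof buf - strlen(buf) - 1);

    printf("%s\n", buf);   /* safely truncated to fit the 16 bytes */
    return 0;
}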
Re:Ignorant Question: (Score:2)
Subtle bug? (Score:2)
Just a quick hunch...
Re:Subtle bug? (Score:2)
Find a *root* identitied server. (Score:2)
Now, this doesn't alleviate all the problems of course, because even with "normal user" access a person can still do some damage. The web pages are probably owned by that normal user, so with normal access a person could alter your content. The normal user could set up cron jobs for himself such that he attacks other machines later, and thus you can still get propagation without root. So this still leaves open the possibility of having DDoS attacks (since being a part of the attack doesn't require root privileges; just any user will do). But it doesn't really leave any way to mess up the target machine permanently. You couldn't alter the httpd program, for example, since it isn't owned by the same user as the user ID it runs under.
At worst, you lose the web pages themselves, but most likely you have those copied over from some other location as part of your "I'm going to edit in a scratch area and then install these changes for real after I try them out" technique.
Re:Find a *root* identitied server. (Score:2, Insightful)
For a moment, this didn't ring true. Why? Because the capacity of a local user to utilize a local root exploit (and thus render your argument invalid) is high.
But then, I realized something. Open Source software encourages diversity. Apache may be running on Windows, Debian GNU/Linux, Redhat, OpenBSD, FreeBSD, etc... etc... And the root exploits are all different. Who are you going to pick on? All of them?
The worms we're seeing floating around the MS community are exploiting lots of known bugs in one fell swoop. Virtually all Windows installations, except those secured by some smart users and some smart admins, are vulnerable to one of these attacks. Thus, once again, the Open Source world could have a worm that used a collection of exploits to root many kinds of boxes, right?
Wrong. The memory footprint and coding skill this would take would make the worm look a lot more like "Microsoft Office for Every Platform" than the Morris Worm. That's because the vulnerabilities taken advantage of are most often in a variety of particular programs rather than some standard API or a few known awful (*cough*Outlook*cough*) offenders. If a kernel version or the last few X11 versions had some huge flaws, or maybe Gnome or KDE, then we would have something to worry about. But you know what? The only one of those that Apache is involved in at all is the kernel. Server machines often do not have X11, let alone Gnome or KDE, etc. etc.
So my extremely longwinded point is: we aren't immune, but the kind of attack that we're seeing on Windows right now is hard to pull off against Open Source software. Infinite Diversity in Infinite Combinations.
Re:Find a *root* identitied server. (Score:3, Informative)
Theoretically, if your system is shipshape, then only root, or someone with root access, can REALLY fuxor it up. However, there are many levels of fuxored below "REALLY fuxored", and no system is 100.0000% perfect. Unix is a security nightmare. Its security model is decrepit and is only being patched / kludged into anything resembling reasonable security. I fear that it is too established to be replaced with something completely different at this point (i.e. something that was still Unix, but fundamentally different in security model).
In general, I don't think it's a good idea to measure security success compared to the gimp of the security world (MS).
Re:Find a *root* identitied server. (Score:2)
Why do you say that? Certainly traditional Unix security is simple, so you can't do the fine-grained things that other systems allow (not really true anymore with capabilities, but those aren't widely used or entirely standard). But simple has its advantages - there are fewer ways to mess things up.
Re:IIS doesn't need to run as root (Score:2)
Microsoft vs Linux (Score:2)
Let's face it: Linux has long come with ipchains, and now iptables, for firewalling, and many other UNIX flavors have similar features. Linux and the UNIX community think about things like proxy/firewall combinations, whereas Windows is only now starting to think about this. Not until the release of XP (or its anticipated release, as it is not out yet) will Windows include a firewall by default.
People in the unix community also tend to be more aware of what is going on on their system. They have logs and there are tools to view them.
While I do not dismiss the possibility that if Linux / UNIX got to be as popular as Windows there would be more 'attempts', I think that because of the nature of Linux you would have a much harder time spreading a worm like Code Red.
A good UNIX administrator is going to spend time in configuring his web server and securing it. If they do not think about this then they are no good.
If you are wondering how secure your computer is, try these two sites. They'll help, but don't try this at work or you may piss off your admins. https://grc.com/x/ne.dll?bh0bkyd2 or http://scan.sygatetech.com/
What's the next step for UNIX security? (Score:2)
I wonder if there isn't a way of generalizing this to allow more sweeping, more generalized expressions of security rules. A UNIX install has soooo many little apps, and so many points of contact for everything, that it's sometimes hard to say "I want all apps that could access X to have permissions Y, or go through access point Z." TCP wrappers are a good example of the kind of thing I'm talking about -- they provide a single point of access and control for all things TCP, and they make it much easier to set up very broad rules that you know cover all possible cases.
Am I making any sense here? How might an OS take on this issue in the general case? It seems like one next logical step for UNIX security.
Re:What's the next step for UNIX security? (Score:3, Informative)
The other big move is to support ACLs - access control lists - so you can say "fred, george and harry can write this file, members of group foo are only able to read it, and members of group bar aren't able to do anything with it".
SELinux, the LSM project, and the like, are the sort of thing we're aiming at....
The real reason. (Score:2)
Consider how many Unix users would actually just open their emails and run attachments blindly. I would venture that there are a ton more Microsoft users that actually do just that!
difficulties? (Score:2)
what difficulties?
whenever an inexperienced user brings up a redhat 7.0 or lower box on our network, it is exploited within 12 hours. within 24 hours i have received email from admins on other networks informing me that the redhat box has been probing their network. 1 minute later i have informed yet another user that it takes more to do my job than booting off of cd and following instructions on the screen.
someone out there has already taken advantage of the various vulnerabilities found in older distros.
lessons learned? i am reminded of something my brother told me:
Having your own box appeals to the pioneer spirit: your own plot of land to develop as you please, fighting off the savages, protecting from the elements.
In other words, every time you run software which other people will somehow have access to (users running desktop software, server software connected to the internet, etc.) you will need to constantly monitor and upgrade that software.
The Morris Worm (Score:2)
Let's not forget that what was probably the first worm, the Morris Worm, was released on Unix machines. I don't remember the year, but it was in the early days of the Internet, when about all there was out there was Unix and VMS. The lesson that the Unix community took away from this and other incidents was that they needed to secure their machines and tighten up code. The point here is that no system is immune. When I first started out in the Internet field, almost all attacks were launched against Unix and VMS machines because that's about all that was hooked up to the Net on a constant basis. So, don't get smug just because Microsoft is victimized today. After MS dies a fiery death, something else will become the dominant system on the net, and that will be the most attacked system.
Brought down the entire USA (Score:2)
in 1988, during the Morris worm. But it was mostly just universities and a few military sites.
It could happen... (Score:2)
Fortunately the default installs of most of the mainstream distributions are getting more secure as time goes by. And while RedHat traditionally isn't quite as easy to set auto-updating up for as Debian is, it's still pretty easy to keep up with the security patches for it. I'd really like to see the package maintainers package at least some of the more traditionally insecure packages (*Cough*Bind*Cough*) in ultra-paranoid configurations, say, statically compiled and chrooted. It hasn't been enough of an irritation for me to go do it myself though.
We all pretty well know, though, that security is more what the user does with the OS rather than how inherently "secure" the OS is out of the box. FreeBSD is by reputation one of the most secure OSes available but I could take that thing and install a bunch of servers with holes in them and be no better off than if I was running Windows 2000 doing the same thing.
Re:It could happen... (Score:2)
(How hard can it be to figure out a way to generate some extra revenue from this? "And for only $5/month, we'll set you up with the the Head Patch Automated Reinsecuritator Mechanism.")
Bad Comparison (Score:2)
Now if Linux had Windows' market share, it would have to come pre-installed with a new PC and not require the user to do much more than just use the GUI. Which is fine as far as I'm concerned, but we can also assume a Linux-dominated universe would be full of unpatched servers too.
Maybe untreated Windows exploits are heading toward extinction. It's easy access to the internet that has created such a huge market for anti-virus software. Maybe we'll start seeing Windows shipping with an MS or a third-party patch manager in the near future. Or something like NAV with a patch checker: "No viruses found, but you are open to these attacks; please go to this URL to download the patches."
Hard to create a Unix worm??? (Score:2)
*nix Worms? Great Skill == More Maturity (Score:2)
Most people who have the skill to code worms for more secure and robust *nix platforms are probably mature/responsible enough through their experience not to do something so utterly foolish. However, if they do decide to do so, they end up trying to do a positive thing for the community! (Anyone remember those Linux worms that FIXED the exploits they took advantage of before moving on to the next box and cleaning themselves up?) Besides, look at the very few malicious worms we have seen for *nix platforms. They didn't last long. The OSS community has a VERY quick response time to big problems and the admins are generally more skilled and knowledgeable about applying patches.
I say, let's enjoy this while we can. It's kind of amusing to see MS admins scurry around, trying to stick fingers in all the leaks. It's risky to say "it serves them right", but that's what they get for weighing only mundane factors when deciding what platform to use. And for those companies that reject OSS products, well, they get what they deserve for thinking "stuff that doesn't come from a company mustn't have any quality". Pah. Worms on the scale of the NT ones aren't a concern for us. Let's parade this around as a reason to support and use open software.
Why don't people attack UNIX more? (Score:2)
So why don't people write more UNIX worms? I think the first big problem with a UNIX worm is portability: getting a worm that runs well on all of the different CPUs, UNIXes, Linux distros, etc. out there would require a pretty badassed coder. Anyone good enough to do so probably wouldn't waste his time on a worm, since he could get paid obscene amounts of money for coding something more productive.
On a more positive note, I think worms generally target Windows because computer users in general don't really like Windows. Jokes about Windows being unstable/buggy/insecure/slow have gone from being a subset of geek culture to a repetitive theme in popular culture. People run Windows mostly out of necessity, because it is the only desktop OS that provides access to a large variety of commercial software, and runs on cheap, non-proprietary hardware. People who use UNIX do so because they want to, and they like doing it; therefore they are less likely to produce something as random as a worm. (I am leaving crackers/s'kiddies out of this, as they have far different motivations.)
2 Reasons; Not Market Share (Score:4, Interesting)
While client market share for Windows is undisputed, Apache has close to 60% of the web server market. I haven't received a single readme.exe attachment.
Current Nimda stats are:
26900 attempts on 2 servers.
Apache (on *n*x, anyway) is not vulnerable to worms in the same way IIS is since it runs as notroot.somegroup. The only thing an Apache web server worm (on *n*x) could do is muck up the web server.
*n*x mail clients don't (at least yet) do a
if file this_attachment | grep -Eq 'ELF|a\.out'; then
    chmod +x this_attachment
    ./this_attachment
fi
This isn't to say *n*x is immune. Just why Win* is not. Not because of market share.
Well.. (Score:2)
As for our 'goals'.... whose goals are those? Who wants Linux everywhere? Use the right tool for the right job. If MS actually made something that was better for a job, I'd use it. (IF.. big IF)
Lesson One (Score:2)
The biggest obstacle, AFAICT, is making solid security Ease-Zee.
Certainly many commercial outfits haven't successfully solved this problem yet and there are still plenty of opportunities for spoofed trojans with fake internal certifications.
I mean, when I download a package, it usually contains its own references to valid signatures, etc. Or, the MD5 checksum is kept in another file, but on the same ftp server.
Better are package maintainers that digitally sign their products. I'd like to see more of that, maybe in conjunction with multiple certifying authorities that can verify the signatory's credentials. I don't need a system that compromises the anonymity of me or the package writer - just something that verifies that a package originated with a consistent, unique individual.
Do modern CD distros of GNU/Linux and other OS come with anything like a set of multiple certifying authorities where package writers can register signatures in multiple places to minimize the chances that a fake can be passed off on innocent downloaders?
Rest easy but not blindly (Score:2)
This worm was fixed about as quickly as possible. The only real problem was getting the fix out, as the worm had seriously disrupted the primary means of getting the patch out.
The time delay for Microsoft patches is a great deal longer, and is due to development delays, not distribution delays.
There is also a delay due to NT admins' fears that the patch may disrupt the system. I doubt this is a realistic fear, but I have heard it once or twice. I think this is more or less the end result of the ignorance Microsoft promotes among NT admins. That ignorance is probably responsible for more problems than the software itself.
In short, once a worm is created and the hole is known, it should be a short time before there's a bug fix.
But not blindly....
The reality is that worms are a low likelihood. You should stand ready for a whole range of issues; worms are just one item in the bag.
Viruses are even less likely, nearly impossible. However, IF we get paranoid about worms to the exclusion of all else... viruses, viruses, viruses... because we were looking the other way... won't you feel dumb.
Keep an eye out, do the maintenance, read the logs, read Slashdot, Bugtraq and so on... keep an eye on the issues related to your system.
Worms aren't the only problem. They are an issue. They aren't the only issue.
Just don't get caught with your shorts down.
And... don't wait for someone to fix it... yeah, it'll happen in 10 or 20 minutes (vs. the 10 to 20 days for Microsoft), but as we learned with the last Unix worm..
Min 1. You learn about defect
Min 2. You look for someone fixing it
Min 3. You find someone
Min 4. You wait
Min 5. You wait
Min 6. It's done.. you download
Min 7. You're still downloading
Min 8. Hmm, the network seems a bit slow... you're still downloading
Min 9. Why is the network slow?
Min 10. You're crashed... you got the worm before you got the patch... you lose, try again..
If someone fixes it first... hooray... if not... don't wait...
However, remember this stuff requires a major defect in the system to work. It'll only affect one platform and only one version of that platform.
(With Linux it'll hit many distributions, unless it's a distro screwup and not a real software defect.)
Re:Rest easy but not blindly (Score:2)
Worms in the Unix world are rare (Score:2)
With no guarantee of any given system calls, any given system libraries, any given applications, any given directory structure, any given TCP/IP stack, any given version of any given implementation of any given service, any given architecture or any given dialect of any given scripting language, worms have a limited scope to work with.
The "Original" Internet worm was so dangerous, because at that time there was less diversity. Certain standard daemons were virtually guaranteed to be running, for example, built from basically the same source.
Therein lies the danger for Unix - without diversity, a single virus or worm can cause untold damage. If it can affect one machine, it can affect many.
(Biologists have woken up to the same lesson. For years, it was preached that simple systems were more stable than complex ones, but it was learned the hard way that that was not the case. Biodiversity offers protection, because it inhibits the spread of hazards. By making it non-trivial for an infection to pass on, you could guarantee that real-world viruses were self-limiting in scope.)
Linux is relatively safe from virii and worms for that same reason. There is sufficient diversity to ensure that propagation is non-trivial. The very "irritation" that turns away so many is Linux' greatest shield. With Windows, it's trivial to infect a registry, because there is only one and there's a standard way to access it. Linux has many "registries", and much code that people use won't be registered anywhere at all.
Then, there's libraries. Windows 9x uses certain very standard libraries. If it's a 9x OS, you know what you can expect. For Linux, you've got elf & a.out formats, libc5, glibc 2.0/2.1/2.2, XFree 3/4, Bind 4/8/9 (or any number of alternative resolvers, including the one built-in to glibc), etc. You really don't know what to expect.
Scripting languages? There's no telling WHAT anyone'll have. The only thing you can be sure of is that there will be a
To stay resident, the virus or worm also has to find a place to stay. Not easy to do with Linux. With Windows, you've a choice of FAT16 or FAT32. Oh, and maybe NTFS, if you're using NT. With Linux, you could be using almost anything. Sure, people will probably use what's installed as standard, as FS migration is non-trivial, but that still leaves ext2, ext3, reiserfs or XFS, all of which one distribution or another uses.
Finally, there's security within Linux. But which security are you using this week? There's GRSecurity, LSM/SELinux, RSBAC, POSIX ACLs, various other ACL implementations, socket ACLs, and any combination of the above.
Oh, and that's not including intrusion detection software, honeypots, firewalls, and all sorts of other similar code.
In short, you can envisage a worm or virus which affects Red Hat 6.2 / Intel distributions that use the standard libraries and kernel. But you can't have a worm or virus which affects ALL running Red Hat Linux boxes - the variation is just too great. It gets much worse when you talk of all Linux boxes, and many orders of magnitude more absurd when you talk of all POSIX-compliant UNIX kernels.
To answer the original question of "is the Unix community worried about worms", the answer is "that depends on how homogeneous any person's network is". The "worry" level will probably be about the same as the homogeneity level.
As for the community at large, the answer is probably "no". The community at large has such a high level of diversity that there is no single threat which could affect every system (or even a significant fraction of them).
Re:Worms in the Unix world are rare (Score:2)
The one I was thinking of was capable of nuking most operating systems by injecting odd length packets (close to 64K in size).
There's more commonality than you might think in places.
Hopefully not (Score:2)
As far as the future goes, though, unless the various distributors become more and more security conscious (I believe that they are doing this), we may be at risk. Doing such things as running potentially vulnerable services under their own userids, turning off unneeded ones, and only opening ports for services that actually need to be reachable from the outside may seem like common sense to hopefully all of us, but these are things that distributions should automatically do for the newbie users.
Doesn't anybody remember the Lionshead Worm? (Score:2)
How quickly we forget that Linux too is vulnerable.
Re:Doesn't anybody remember the Lionshead Worm? (Score:2)
ipchains -A input -i eth0 -p tcp -d any/0 111 -j DENY
Yes, linux is vulnerable. Simple recipe for keeping it safe: if you don't need it, turn it off. If you do need it, study the security history and upgrade the daemon if necessary. If it's sendmail, install postfix or configure it as non-root. If it's WU-ftpd try Pure-FTPD.
On a side note, a default install of most Linux distros turns a lot of stuff on that shouldn't be running if it's world-accessible. So, like NT admins, Linux admins need to study their install and find out what's there that they don't want. Upgrades are sometimes needed; services need to be stopped. The good news is that all Linux worms to date are nothing more than automated script kiddies, so if you've kiddie-proofed your setup, chances are you're OK.
Comment removed (Score:3, Funny)
Realm of Possibility? (Score:2)
Ramen? 1i0n? Adore? Sound familiar? These are well past the "realm of possibility" - they've already been done. And these worms haven't been eliminated, either. I work in network security, and I see SunRPC scans and DNS scans, and a whole slew of different kinds of scans on my network *several times an hour*. Yes folks, *hour*.
The fact is, people are running unpatched systems. And yes, a good majority of these systems are running Linux. The fact that the scans aren't letting up says that administrators:
A) Are too ignorant to know there's a problem
B) Too ignorant to fix the problem
C) Don't give a shit.
The thing is, the Open Source community is quick to act on these security problems and crank out a fix. In the case of Microsoft, the worms are usually a lot more destructive, thus, they receive more attention.
It's quite sad when people can't patch a two-month old exploit, however.
How about a cross-platform worm? (Score:2)
About the only worry I have about worms is all the impact on the network as a whole and the PITA my job is whenever one gets out.
I'd be DAMNED worried (Score:3, Funny)
If some of you hardcore *nix users would take showers more often than major holidays this wouldn't be an issue.
Those of us who have to sit in stuffy cubicles within a 10' radius of you thank you for your consideration of this matter.
RTM - NOT the first worm... (Score:2)
In 1980 Xerox PARC published a paper called 'Notes on the "Worm" Programs -- Some Early Experience with a Distributed Computation' by John F. Shoch and Jon A. Hupp. This describes some worm programs that were written at Xerox PARC and used for useful things. Unfortunately an error in one of their programs caused a lot of dead machines.
I think that the BITNET christmas card "virus" of December 1987 predates the Morris Worm of 1988. This was more of a trojan than a worm, but when you ran the "card" it mailed itself to everyone it could.
Neither of these was Unix based.
Z.
A few points: (Score:2)
Re:A few points: (Score:2)
Re:A few points: (Score:2)
That was in 1971, I think. Unix has come a very long way since then, including many security patches. One advantage it has is that it's 10 years older than DOS/Windows, so more holes have been patched. Another is that it was on multi-user computers from the beginning, while I think MS's first OS for servers (Win NT) first came out in the 90's -- so the unices may have a 20 year lead in thinking about security. And finally, some unices are open sourced, and even the proprietary ones are far more open about the way things work than Windows -- so there have been more friendly eyes looking for holes.
There used to be mainframe OS's that were designed for security from the ground up. I wonder how those would stack up against the unices and Windoze where security was patched in after the original design was set? I think not so good anymore -- they haven't been exposed to decades of probing...
C came out of a similar environment at about the same time. Hence all the standard string functions that simply trust the users not to do something that overflows the buffers. Actually checking for overflow ate up too many cycles, so they trusted the users instead. But why are we still using these unsafe functions?
Is the Unix community Worried About Worms? (Score:2, Insightful)
hybrid vigor (Score:2)
MS OS's, on the other hand, install to almost exactly the same configuration every time, and users don't usually bother to change many options. And there are only a handful of MS OS's, compared to open-source land.
In the wild, hybrids seem to be more resistant to disease, more adaptable, and generally hardier. Linux/BSD are mutts.
Worm propagation is exponential (Score:2)
Error (Score:2)
Error: Unjustified statement. Requires backup evidence.
Ways to avoid the pitfalls (Score:5, Informative)
Following these steps, I think that distributions will be fairly safe from any discovered server vulnerabilities, and probably most client-side ones, as well.
Never a better time to be a girl (Score:2)
Re:Never a better time to be a girl (?) (Score:2)
No. (Score:2)
A worm that overpowers apache and executes code on my machine as user 'nobody' (The user my apache runs as) really doesn't concern me. I suppose it could delete most of my /tmp partition.
Marketshare doesn't matter (Score:2)
That's right: marketshare doesn't matter. And here, I'm taking "marketshare" to mean either (a) the number of servers sold or (b) the number of servers running.
The reason why marketshare doesn't matter: every server connected to a TCP/IP network is "touching" every other server connected to that network. Marketshare has no bearing on which servers can possibly infect which other servers in a population, only connectivity does. Essentially, the "population" of unix servers on the internet all "touch" one another, just like the population of all IIS servers "touch" one another.
That said, it hasn't really been a banner year for Linux/Unix/BSD worms. We've seen adore [sans.org], l1on [sans.org], cheese [cert.org], ramen [sans.org], sadmind/IIS [cert.org], lpdw0rm [lwn.net], and x.c [dartmouth.edu]. Absolutely none of these worms ripped through the Linux/Unix/Solaris/BSD population. This is indisputable. The question is why does one population have resistance, while the other doesn't? I think the answer is diversity on four levels:
Re:Grammar (Score:2)
Its "a need."
ahem....
-Restil
Re:keep your code clean? (Score:2, Informative)
Re:keep your code clean? (Score:2)
Re:keep your code clean? (Score:2)
The basic problem is that it's a very complex task to make things look and feel simple to the end user. Because of that, the Microsoft server is a great deal more complex than Apache. And it exposes more services, which for an Apache user would be installed on a case-by-case basis. Note that the problems we've seen in IIS are generally caused by auxiliary stuff like the Index Server. That exists to make things easy, yes. But it also increases complexity, and whenever complexity goes up, the possibility of there being holes goes up even more.
Hope that helps.
D
You can be lazy on any platform. (Score:4, Insightful)
Re:You can be lazy on any platform. (Score:2)
If someone doesn't patch their Windows systems why would they patch their Linux systems? Doesn't matter if the patch is out 2 seconds after the bug is revealed if the admin doesn't take notice and act.
if they are too lazy to patch their windows systems then they are probably too lazy to install linux. currently *nix attracts a different kind of user. this might change in the future, but right now i think your average linux user is a bit more informed and competent.
Re:You can be lazy on any platform. (Score:2)
I'm sure someone's already come up with a script that can automatically check the GPG signatures of downloaded packages.
Of course, I would never run automatic software updates on a production server. That might be fine for desktop machines though.
Re:same goes for virii.... (Score:2)
Re:Can't happen (Score:2)
No, they don't. But many, many daemons and other long-lived processes run as root.
A quick scan of the processes on my machine right now shows kdm, X, kppp, pppd, cupsd and a few others.
On our production servers at work, resin runs as root - I have been reliably informed that it has to (at least, I assume our systems team are reliable - they were rather upset when two of us demonstrated ftping a file onto the server that allowed arbitrary commands to be run).
Just because there's no one sitting at the machine, launching xterms and applications as root, doesn't mean that there isn't a whole bunch of stuff running as root. A single buffer overrun exploit in a network-aware daemon running as root, and your machine is wide open if you're not behind a firewall.
Cheers,
Tim
Re:darwinism at work in open source? (Score:2)
Re:darwinism at work in open source? (Score:2)
Re:Well... (Score:2)
Someone will write a worm that attacks not only Windows, but all variants of Unix as well. It will keep a database (or even download the information temporarily from a website) of exploits.
My point was that it would be a big (as in file size) worm, and then I added a little bit of humor at the end.
Re:better packaging = less vulnerability (Score:2)
A peer of mine is a sys admin for a group of Windows 2000 machines. Once a week AutoWindowsUpdate runs to automatically get all the security updates and install them. This is a check box for him. According to him, it will even download an update to IIS, stop the WebPublishing service, install the update, then restart the service -- all while he twiddles his thumbs and thinks about lunch. With this kind of automation, who knows what kind of holes and backdoors M$ is automatically installing for him, and who knows what data it's sending back? And how does he know that it's not installing a new worm? </fictional reprint> Oh wait, I forgot. It's different when Linux does something.
Re:Windows update isn't very good. (Score:2)
But you're generally right. Incremental updates to Debian are fairly small - I generally don't see more than 500k-1meg per session - more if I leave it longer (and MUCH larger in the above circumstance of both GNOME and KDE being upgraded!).