Is the Unix Community Worried About Worms?
jaliathus asks: "While the Microsoft side of the computer world works overtime these days to fight worms, viruses and other popular afflictions of NT, we in the Linux camp shouldn't be resting *too* much. After all, a worm similar to Code Red or Nimda could just as easily strike Linux ... it's as easy as finding a known hole and writing a program that exploits it, scans for more hosts and repeats. The only thing stopping it these days is Linux's smaller market share. (Worm propagation is one of those n-squared problems.) Especially if our goals of taking over the computing world are realized, Linux can and will be a prime target for worm writers. What are we doing about it? Of course, admins should always keep up on the latest patches, but can we do anything about worms in the abstract sense?" Despite the difficulties of starting a worm on a Unix clone, such a feat is still within the realm of possibility. Are there things the Unix camp can learn from Code Red and Nimda?
Linux has plenty of marketshare (Score:2, Informative)
What smaller market share? Check out the Netcraft [netcraft.com] survey if you don't believe me. I think better programming is the reason we aren't seeing any worms targeted at Linux web servers.
something to remember (Score:2, Informative)
Apt and cron (Score:4, Informative)
Patch the holes that are inevitable. Patch them early.
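For instance, on an apt-based distribution you can have cron fetch and apply security fixes on a schedule. The crontab entry below is only a sketch (the timing, and the choice to upgrade unattended rather than just download and review, are assumptions; plenty of admins prefer to apply upgrades by hand):

    # /etc/crontab sketch: refresh package lists and apply pending
    # upgrades every night at 4:30. Unattended upgrades are a trade-off:
    # an automated change can break a box almost as thoroughly as a worm
    # can own an unpatched one.
    30 4 * * *   root   apt-get update && apt-get -y upgrade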
by Robert Morris (Score:2, Informative)
Re:keep your code clean? (Score:2, Informative)
VERY Concerned (Score:1, Informative)
Re:Ignorant Question: (Score:5, Informative)
Instead of using gets(), you use fgets(). Use strncpy() instead of strcpy(). And so forth. The only real difference between these calls is that the "safer" one lets you specify a maximum number of bytes to copy. So you know you can't copy a string that's larger than your destination buffer (and you use sizeof() or #define's to ensure you pass the proper buffer size) and thus can't start overwriting adjacent memory, such as a return address on the stack.
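A minimal sketch of the difference (the buffer names are made up, and note one real-world wrinkle: strncpy() does not NUL-terminate the destination if the source fills it, so you must terminate it yourself):

    #include <stdio.h>
    #include <string.h>

    #define BUFSIZE 64

    int main(void)
    {
        char line[BUFSIZE];
        char copy[BUFSIZE];

        /* BAD: gets(line) has no idea how big line is and will happily
         * write past the end of it, trashing whatever sits next in memory. */

        /* BETTER: fgets() stops after sizeof(line) - 1 bytes. */
        if (fgets(line, sizeof(line), stdin) == NULL)
            return 1;

        /* BAD: strcpy(copy, line) copies until it finds a NUL, however far
         * away that is. BETTER: bound the copy and terminate it explicitly,
         * because strncpy() won't add the NUL when the source fills the
         * destination. */
        strncpy(copy, line, sizeof(copy) - 1);
        copy[sizeof(copy) - 1] = '\0';

        printf("read: %s", copy);
        return 0;
    }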
This is all high-school-level programming. Anyone who gets it wrong deserves to be strung up for professional negligence. As many others point out, one of the first large distributed buffer-overrun exploits (the Morris worm) was 13 years ago. So it's not like this is a new thing.
And yes, there are probably some Unix programs running around with buffer-overrun vulnerabilities in them. They've been largely weeded out over time, though, and to some extent Unix's permission scheme contains the most serious damage, at least when services are installed properly.
The real key difference between Unix and Windows, though, is very deep assumptions. Unix assumes that the user cannot be trusted (thou shalt not run as root), nor can any external input. Windows assumes that everyone will play nice. Since a significant fraction of people will NOT "play nice", coding under that assumption is invalid. Thus the repeated security exploits in Microsoft tools and services, which weren't designed from the ground up to distrust the input given to them.
The plus side of "play nice" is that it's faster to code and you can put in features which would never, ever fly otherwise, like automagic remote installation of software. Or executing email attachments automatically. All that stuff that users think is "wow cool nifty" until someone does something they don't like.
Re:What's the next step for UNIX security? (Score:3, Informative)
The other big move is to support ACLs - access control lists - so you can say "fred, george and harry can write this file, members of group foo can only read it, and members of group bar can't do anything with it".
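On a filesystem mounted with ACL support, that example looks roughly like this with the usual getfacl/setfacl tools (fred, foo, bar and the filename are placeholders):

    # fred, george and harry may write the file:
    setfacl -m u:fred:rw-,u:george:rw-,u:harry:rw- report.txt
    # members of group foo may only read it:
    setfacl -m g:foo:r-- report.txt
    # members of group bar get nothing at all:
    setfacl -m g:bar:--- report.txt
    # inspect the result:
    getfacl report.txt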
SELinux, the LSM project, and the like, are the sort of thing we're aiming at....
Re:Find a *root* identified server. (Score:3, Informative)
Theoretically, if your system is shipshape, then only root, or someone with root access, can REALLY fuxor it up. However, there are many levels of fuxored below "REALLY fuxored", and no system is 100.0000% perfect. Unix is a security nightmare. Its security model is decrepit and is only being patched / kludged into anything resembling reasonable security. I fear that it is too established to be replaced with something completely different at this point (i.e., something that was still Unix, but fundamentally different in its security model).
In general, I don't think it's a good idea to measure security success compared to the gimp of the security world (MS).
Re:Learning from Code Red? (Score:3, Informative)
1. People using *NIX systems are usually administering servers, or just love computers. The end result is that they're better (not necessarily great) at keeping their machines patched.
2. People using NT/2000 often don't even realize they have exposed ports. The worst of the Code Red/Nimda infections are coming from machines on Cable/DSL...home users who probably don't even know their machine is a server.
3. Maturity. Any given piece of software matures in features and in stability/security. In commercial software, security is most often sacrificed for feature growth. When software is free, there tend to be fewer people trying to add marketing-driven features to a product; most features come as modules which you must choose to install. With the focus on security, the number of vulnerabilities shrinks until there are virtually none left.
4. Development environment. This may not be immediately obvious as a cause, but it is very relevant. IIS is written in C++, and many people think C++ is better than C. The truth is that while C++ provides many benefits, it can also make auditing code more difficult. The language contains so many features that it becomes very hard to trace a path of execution just by looking at the code.
I am sad to admit that every day I write code in C++, using MFC. My conclusion is that development is more difficult on Windows in C++ than on any other platform/language I have used. M$ has an idea of how an application should be laid out, and it very rarely fits mine.
Compare Apache with IIS. Apache has been around for quite some time now; it aims to be a decent general-purpose webserver with a useful set of features. Things such as dynamic content and indexing are provided by modules which communicate through a well-defined API. It's written in nice, linear, easy-to-read C.
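For the curious, the skeleton of an Apache 1.3 module shows how small that API surface is. This is only a sketch - the "hello" names are invented, and every hook slot this module doesn't use is left to default to NULL:

    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"

    /* A content handler: receives the request, sends a response. */
    static int hello_handler(request_rec *r)
    {
        r->content_type = "text/plain";
        ap_send_http_header(r);
        if (!r->header_only)
            ap_rputs("Hello from a module\n", r);
        return OK;
    }

    /* Maps the handler name used in httpd.conf to the function. */
    static const handler_rec hello_handlers[] = {
        { "hello-handler", hello_handler },
        { NULL, NULL }
    };

    module MODULE_VAR_EXPORT hello_module = {
        STANDARD_MODULE_STUFF,
        NULL,            /* module initializer          */
        NULL,            /* per-directory config create */
        NULL,            /* per-directory config merge  */
        NULL,            /* per-server config create    */
        NULL,            /* per-server config merge     */
        NULL,            /* configuration directives    */
        hello_handlers   /* content handlers (remaining */
                         /* hooks default to NULL)      */
    };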
IIS has been around for a while, but the push is on features and integration with Windows. IIS hooks into many aspects of Windows, and it uses COM for its extensions. Because COM objects are managed at the OS level, there is a lot of potential for one bad module to blow up the whole system.
Of course, even the holes in M$ software have patches available long before they become a headline for the day.
Ways to avoid the pitfalls (Score:5, Informative)
Following these steps, I think that distributions will be fairly safe from any discovered server vulnerabilities, and probably most client-side ones, as well.