Linux Software

Is the Unix Community Worried About Worms? 516

jaliathus asks: "While the Microsoft side of the computer world works overtime these days to fight worms, virii and other popular afflictions of NT, we in the Linux camp shouldn't be resting *too* much. After all, the concept of a worm similar to Code Red or Nimda could just as easily strike Linux ... it's as easy as finding a known hole and writing a program that exploits it, scans for more hosts and repeats. The only thing stopping it these days is Linux's smaller marketshare. (Worm propagation is one of those n-squared problems.) Especially if our goals of taking over the computing world are realized, Linux can and will be a prime target for the worm writers. What are we doing about it? Of course, admins should always keep up on the latest patches, but can we do anything about worms in the abstract sense?" Despite the difficulties in starting a worm on a Unix clone, such a feat is still within the realm of possibility. Are there things the Unix camp can learn from Code Red and Nimda?
This discussion has been archived. No new comments can be posted.

  • by EllisDees ( 268037 ) on Friday September 21, 2001 @03:36PM (#2331396)
    The only thing stopping it these days is Linux's smaller marketshare.

    What smaller marketshare? Check out the Netcraft [netcraft.com] survey if you don't believe me. I think better programming is the reason we aren't seeing any worms targeted at Linux web servers.
  • by CoreyG ( 208821 ) on Friday September 21, 2001 @03:37PM (#2331403)
    Worms aren't just a Microsoft thing. You should know (remember?) that the first worm ever written infected many *NIX systems (and the net in general) quite badly.
  • Apt and cron (Score:4, Informative)

    by Anonymous Coward on Friday September 21, 2001 @03:37PM (#2331409)
    Or any other form of auto-updater. Remember, Code Red and Nimda used holes that were patched months ago.

    Holes are inevitable. Patch them, and patch them early.
  • by Robert Morris (Score:2, Informative)

    by maddogsparky ( 202296 ) on Friday September 21, 2001 @03:42PM (#2331461)
    Yeah. It was the classic example that we studied in my computer ethics class. It sounds a bit like the Nimda worm in that it had four different methods of spreading. The only thing that kept it from being even worse was a programming error that caused it to fill up memory and eventually crash the infected machine.
  • by GregK72 ( 43118 ) on Friday September 21, 2001 @03:53PM (#2331550)
    I think that people could probably find exploits in Apache, Sendmail, etc. a lot more easily, since they can scan the source code. From what I have read, though, most of these worms and viruses are not very complicated and use relatively easy-to-exploit holes in M$ products. Most of these holes exist because M$ tries to make life easier for the user by doing work behind the scenes (such as automatically calling an IE DLL to render an HTML email). As work continues on desktop environments such as GNOME and KDE, I think it is not unreasonable to expect to see exploits in those products being used. But since M$ products dominate the desktop market, I expect most people to keep writing worms and viruses for M$ environments.
  • VERY Concerned (Score:1, Informative)

    by MadCamel ( 193459 ) <spam@cosmic-cow.net> on Friday September 21, 2001 @03:55PM (#2331564) Homepage
    I am very concerned about UNIX/Linux worms. Not only is it possible, it is probable. As much as I dislike Microsoft, they DO release security fixes for their products, usually before a worm is written to exploit the vulnerabilities. The same goes for Linux, BSD, and any other actively maintained operating system. So why are these worms causing so much trouble? Because the average user has no idea how their OS works and no clue about security. With the recent advancements in user-friendliness, the same thing goes for Linux too. Take the statd worm family, for example, which rooted every insecure Red Hat machine in 24.*. In cases like this it is not the OS that matters; it is the user or admin of the OS being clueless about security. Until users learn how to apply security patches and keep up with the latest security news, these things will be commonplace. I sincerely hope that this recent outbreak of particularly nasty worms will get more users and admins interested in keeping their machines secure.
  • by Zathrus ( 232140 ) on Friday September 21, 2001 @03:57PM (#2331579) Homepage
    Yes, it's trivially simple to protect against buffer overflows. But it takes some regimented coding to do it properly instead of taking the easy way out.

    Instead of using gets(), you use fgets(). Use strncpy() instead of strcpy(). And so forth. The only real difference between these calls is that the "safer" one lets you specify a maximum number of bytes to copy. That way you know you can't copy a string that's larger than your destination buffer (using sizeof() or #define's to ensure you have the proper buffer size) and thus start overwriting executable code; see the sketch below.

    This is all high-school-level programming. Anyone who gets it wrong deserves to be strung up for professional negligence. As many others point out, one of the first large distributed buffer-overrun exploits was 13 years ago, so it's not like this is a new thing.

    And yes, there are probably some Unix programs running around with buffer overrun exploits in them. They've been largely weeded out over time though and, to some extent, Unix's permission scheme avoids most serious issues, at least when services are installed properly.

    The real key difference between Unix and Windows, though, lies in very deep assumptions. Unix assumes that the user cannot be trusted (thou shalt not run as root), nor can any external input. Windows assumes that everyone will play nice. Since the reality is that a significant fraction of people will NOT "play nice", coding under that assumption is invalid from the start. Hence the repeated security exploits in Microsoft tools and services, which weren't designed from the ground up to distrust the input given to them.

    The plus side of "play nice" is that it's faster to code and you can put in features which would never, ever fly otherwise, like automagic remote installation of software. Or executing email attachments automatically. All that stuff that users think is "wow cool nifty" until someone does something they don't like.
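
    A minimal sketch of the bounded-copy idiom described above; the buffer names and sizes are purely illustrative:

        #include <stdio.h>
        #include <string.h>

        #define NAME_LEN 64                 /* illustrative buffer size */

        int main(void)
        {
            char line[256];
            char name[NAME_LEN];

            /* fgets() reads at most sizeof(line) - 1 bytes; gets() would
               happily write past the end of the buffer. */
            if (fgets(line, sizeof(line), stdin) == NULL)
                return 1;

            /* strncpy() bounds the copy, but does not guarantee a trailing
               '\0' when the source fills the buffer, so terminate by hand. */
            strncpy(name, line, sizeof(name) - 1);
            name[sizeof(name) - 1] = '\0';

            printf("hello, %s", name);
            return 0;
        }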
  • by valdis ( 160799 ) on Friday September 21, 2001 @04:16PM (#2331717)
    The first step is POSIX 1003.1e 'capabilities', already partially supported in the current Linux kernel. Basically, it breaks the 'suser()' check for "are we running as root?" into lots of little checks: "are we allowed to open any file?", "are we allowed to use raw sockets?", "are we allowed to kill() other processes?", and so on. So instead of (for example) 'ping' being suid just so it can use a raw socket, it would have CAP_NET_RAW, and if subverted, the only thing the attacker gets is the ability to send raw packets (which may still be leveraged, but that's a LOT harder than just execve()'ing a root shell on the spot). See the sketch below.

    The other big move is to support ACLs - access control lists - so you can say "fred, george and harry can write this file, members of group foo can only read it, and members of group bar can't do anything with it".

    SELinux, the LSM project, and the like, are the sort of thing we're aiming at....
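
    A minimal sketch of dropping to a single capability via the libcap interface to the draft POSIX 1003.1e API. It assumes the process starts with the necessary privileges in the first place (e.g. a ping-like tool started as root) and that libcap is installed (link with -lcap):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/capability.h>     /* libcap */

        int main(void)
        {
            cap_t caps = cap_init();    /* start from an empty capability set */
            cap_value_t keep[] = { CAP_NET_RAW };

            /* Keep only the ability to open raw sockets. */
            if (cap_set_flag(caps, CAP_PERMITTED, 1, keep, CAP_SET) == -1 ||
                cap_set_flag(caps, CAP_EFFECTIVE, 1, keep, CAP_SET) == -1) {
                perror("cap_set_flag");
                exit(1);
            }

            if (cap_set_proc(caps) == -1) {   /* apply to this process */
                perror("cap_set_proc");
                exit(1);
            }
            cap_free(caps);

            /* ...open the raw socket and do the work here; even if this
               code is subverted, the attacker gets CAP_NET_RAW, not root. */
            return 0;
        }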
  • by ZanshinWedge ( 193324 ) on Friday September 21, 2001 @04:43PM (#2331868)
    This is fundamentally wrong.

    Theoretically, if your system is shipshape, then only root, or someone with root access, can REALLY fuxor it up. However, there are many levels of fuxored below "REALLY fuxored", and no system is 100.0000% perfect. Unix is a security nightmare. Its security model is decrepit and is only being patched and kludged into anything resembling reasonable security. I fear that it is too established at this point to be replaced with something completely different (i.e., something that was still Unix, but with a fundamentally different security model).


    In general, I don't think it's a good idea to measure security success against the gimp of the security world (MS).

  • by jd10131 ( 46301 ) <james@NoSPAm.emdata.net> on Friday September 21, 2001 @04:44PM (#2331875) Homepage
    While *NIX systems are not impervious to various forms of attack, they are less vulnerable for several reasons.

    1. People using *NIX systems are usually administering servers, or just love computers. The end result is that they're better (not necessarily great) at keeping their machines patched.

    2. People using NT/2000 often don't even realize they have exposed ports. The worst of the Code Red/Nimda infections are coming from machines on Cable/DSL...home users who probably don't even know their machine is a server.

    3. Maturity. Any given piece of software matures in features and in stability/security. In commercial software, security is most often sacrificed for feature growth. When software is free, there tend to be fewer people trying to add marketing-driven features to a product; most features come as modules which you must choose to install. With the focus on security, the number of vulnerabilities shrinks until there are virtually none.

    4. Development environment. This may not be immediately obvious as a cause, but it is very relevant. IIS is written in C++, and many people think that C++ is better than C. The truth is that while C++ provides many benefits, it can also make auditing code more difficult. The language contains so many features that it becomes very hard to trace a path of execution just by looking at the code.

    I am sad to admit that every day I write code in C++, using MFC. My conclusion is that development is more difficult on Windows in C++ than on any other platform/language I have used. M$ has an idea of how an application should be laid out that very rarely fits mine.

    Compare Apache with IIS. Apache has been around for quite some time now, and it aims to be a decent general-use web server with a useful set of features. Things such as dynamic content and indexing are provided by various modules which communicate through a well-defined API. It's written in nice, linear, easy-to-read C.

    IIS has been around for a while, but the push is on features and integration with Windows. IIS integrates into many aspects of Windows, and it uses COM for its extensions. Because all COM objects are handled at the OS level, there is a lot of potential for a bad module to blow up the system.

    Of course, even the holes in M$ software have patches available long before they become a headline for the day.
  • by Ulwarth ( 458420 ) on Friday September 21, 2001 @05:00PM (#2331987) Homepage
    You can't force users to stay up to date with security patches or even know anything at all about security. But there are things that OS and distribution maintainers can do to make their software more secure out of the box. I realize that many Linux distributions already do some of this stuff, but I don't think any do all of it. And, it applies to any OS, including those written by Microsoft.

    • By default, don't run any services! Windows 98 is more "secure" than Windows NT because it doesn't run services. A machine that is not explicitly set up by the admin to be a server has no business running web, ftp, or ssh access.
    • By default, firewall all incoming and outgoing traffic on the public interface. Leave ports open on private interfaces (192.168.* and 10.0.0.*) so that people can still share files and printers on their LAN without frustration. There's no reason to make firewalling optional. If someone wants to run an external server, they need to explicitly punch a hole in the firewall to the outside world. If they want to turn off the firewall completely, they can do so - but it should be hard enough that they have to know what they are doing.
    • Get rid of telnet and rsh. Install them, maybe, but never have them running by default. Instead, provide ssh as the remote login option, and make sure it is properly configured (no root logins, no blank-password logins).
    • Encourage users to use blank passwords for desktop use, and then make it possible to log in only from the console when a password is blank. This applies to root, too. Since it's convenient, people will do it - and if it's impossible to log in remotely when a user has a blank password, it's secure, too.
    • Authors of server software have to make security a priority from the beginning. All user input should be carefully verified with a single, highly paranoid function that clips length and filters out any characters that are not explicitly needed (see the sketch after this comment). Keep careful track of "trusted" versus "untrusted" values in the code, possibly going as far as to give them special names like untrusted_buf or trusted_url.
    • Distributions should GET RID of old, clunky, insecure programs such as sendmail (replace with postfix), wu-ftpd (replace with proftpd), inetd (replace with xinetd), etc.

    By following these steps, I think distributions will be fairly safe from most newly discovered server vulnerabilities, and probably most client-side ones as well.
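
    A minimal sketch of the kind of single paranoid input filter suggested above; the whitelist and the untrusted_buf/trusted_buf naming convention are purely illustrative:

        #include <ctype.h>
        #include <stddef.h>

        /* Copy untrusted input into a fixed-size buffer, clipping the
           length and dropping every character that is not explicitly
           allowed. The rest of the program only ever sees trusted_buf. */
        static void sanitize(char *trusted_buf, size_t buflen,
                             const char *untrusted_buf)
        {
            size_t out = 0;

            if (buflen == 0)
                return;

            for (size_t in = 0; untrusted_buf[in] != '\0' && out + 1 < buflen; in++) {
                unsigned char c = (unsigned char)untrusted_buf[in];

                /* Whitelist, don't blacklist: keep only what is needed. */
                if (isalnum(c) || c == '.' || c == '-' || c == '_' || c == ' ')
                    trusted_buf[out++] = (char)c;
            }
            trusted_buf[out] = '\0';
        }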
