ZDNet Admits Mistakes in Recent Security Test

drsparkly writes "Linux Today is running this story claiming that the recent ZDNet Linux vs NT security 'shootout' was biased against Linux. Apparently ZDNet had neglected to apply 21 available security fixes. They claim that 'enterprise businesses would not want to apply 21 individual fixes' and 'most large companies would prefer the one large, sweeping-in-scope fix'. Do they have a point?"
Comments Filter:
  • by Anonymous Coward
    If you look at the configuration that ZDNet published, you will see that they applied 4 service packs, moved dozens of files around and made over 90 configuration changes. This seems to me to be a rather intensive process.
  • ZD is full of crap. Look at the list of the 21 patches, and compare it to the services they were running on the web server. Now make a list of patches that actually affect the programs running on the server. How many of these services are affected by the patches? Maybe 3 or 4. Duh.
  • by Anonymous Coward
    I recently reinstalled Windows NT on my computer, and I would have been thrilled had I only had to install 21 patches. Let's count what I had to do to make the system as stable and secure as possible. Admittedly this is a Windows NT Workstation system and not a Windows NT Server system. (Windows NT Server would have had far more patches.)

    I started by booting off my Windows NT 4 Workstation CD. This put me into the base operating system install. It copied all the files and rebooted.

    After installing the base operating system, I had to apply Service Pack 5 which has some 600 fixes in it. After that I had to upgrade all of my drivers for SCSI card, my NIC, my Sound card, my video capture card, my video card, and my Zip drive.

    After that I had to install IE 4, because the copy of Windows NT I have only comes with IE 2, which cannot be used to download IE 5. After installing IE 4, I installed IE 5. After installing IE 5, I had to go to windowsupdate.microsoft.com and install half a dozen fixes beyond the initial IE 5 installation.

    Then we have the whole virus issue. I had to install Norton AntiVirus and upgrade that with another 4 or 5 MB download.

    Then, when this is done, there are a total of 17 post Windows NT Service Pack 5 hotfixes that have to be applied. These fix bugs ranging from file system corruption to dialup security.

    As I said, 21 RPM packages would have been far more enjoyable than installing Windows NT.

    Let's not even get into the myriad of patches and upgrades for the applications I have installed. (MS Office, MS Visual Studio, etc., etc.)
  • by Anonymous Coward
    Not only can you use ftp/rpm/apt to get security updates, but places like LSL put them all together on nice $1.95 CD's for you...see here [lsl.com]. Granted, they're probably a few updates behind, but the idea is sound. Don't know for sure, but perhaps places like Cheapbytes, the Linux Mall, LinuxCentral et al. have something similar.
  • by Anonymous Coward
    NT service packs are, I believe, cumulative, i.e. you only apply the latest one to get all the fixes of all the previous ones... I am not an NT expert, so please correct me if I am wrong.
  • The huge IIS hole which would have left the NT server hacked in 10 seconds is *NOT* included in Service Pack 5. If they can apply *special* fixes, then why not the Red Hat ones?



    Download the directory, and rpm with wildcards, how hard is that?
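    For the record, "download the directory and rpm with wildcards" really is only a couple of commands. A sketch, with a made-up mirror URL (not Red Hat's real one):

```shell
#!/bin/sh
# Placeholder mirror path -- substitute your distribution's real updates dir.
UPDATES=ftp://updates.example.com/redhat/6.0/i386
CMD="rpm -Fvh *.rpm"
# The real run would be roughly:
#   wget -r -nd "$UPDATES"   # grab every package in the directory
#   rpm -Fvh *.rpm           # -F/--freshen: only touches packages
#                            # that are already installed
echo "fetch $UPDATES, then run: $CMD"
```

    --freshen (as opposed to -U) means it's safe to point the wildcard at the whole updates directory: packages you never installed are simply skipped.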
  • by Anonymous Coward
    Considering there are close to 400 changes that need to be made to an NT 4 installation 'out of the box' in order to make it secure, somehow the 21 fixes needed for Linux don't seem like they'd be that big of a deal to apply...
  • by Anonymous Coward
    ZDNet should have installed any recommended patch and RPM from the distribution's web site or technical support channel (e.g. Red Hat)! However, RPMs and updates from other sources (including kernel.org, etc.) should not be included. You might as well start fooling around with your IBM mainframe. Enterprise servers should not be a playground for patches and updates. Expecting enterprise customers to follow mailing lists and install every single patch available is unreasonable.
  • by Anonymous Coward

    ...you can download and apply individual fixes if you want - or you can download a "patch cluster" that includes a collection of patches (making it easier to get a lot of patches installed in one hit). Sample clusters generally include:

    • Recommended - as the name suggests, recommended for everyone's use (this is what most people install)
    • Security - security issues only (no other bugs fixed or apps updated)
    • Y2K - duh

    If something like this were available, then just as they'd installed SP5 on NT, they would have been able to install the latest patch cluster onto Linux to ensure that all the latest patches were included - nice an' easy.

    Even Debian has it over Red Hat in this regard (fire up package management and say "install the latest stuff", which downloads the packages over the Internet and installs them - can't get much simpler).

  • by Anonymous Coward

    Just because it's from MS doesn't mean it's bad...

    What?! EVERYTHING that comes from Microsoft is EEEEEVILLL! Haven't you been reading your Linux users handbook? Practicality has no place in the computing world. Everything must be as difficult as possible to use in order to keep out the "stupid" people.

    DOWN WITH EASE!

    Yeah, this might be flamebait, but enough of you seem to think this way.

  • I've installed Red Hat 6 many times and just chose "install everything", and I don't ever remember seeing a photo CGI script in there. All I've ever seen is cachemgr.cgi. Why did they install that program anyway? Did NT have any other third-party CGI scripts installed with it? If they did that for no reason, I would say that the test was obviously tainted, because it was not just a plain install but had other services installed with it.
  • Just look at this [zdnet.com] as an example.

    A more poorly written article about two OSes can't be found...

    --------------------------
    Your Favorite OS Sucks.
    ^D

  • not always a service pack... NT5/W2K is gonna have about 10+ hotfixes before its first service pack... are people not going to install those because it's not just one update?

    I don't think so. They'll do more than one; any good admin will do whatever is necessary to secure his servers.
  • but as has been discussed in previous articles, it's not particularly valid. Not applying those patches (whether they come as a single bundle or a multitude) is sheer laziness and a poor excuse. I believe that if the same thing happened to the network I look after at work for the same reasons, I would be (justifiably) fired. If not the first time, then definitely if it happened again (i.e., I didn't learn from my mistake).
  • Good idea... kinda like the MS Windows Update website. Using IE, you can connect to this site, which will run some ActiveX program, check which MS software needs updating/patching, and let you choose which ones to update. Once you have made your choice, it goes off and does its thing and installs all the updates you chose. Actually pretty nifty and painless if you've tried it.
  • sure.. valid points. My point (sorry if I wasn't too clear) is that there's a central place for end users to easily update their files. It's all point and click. Sure, some of its implementation might be a bit flaky for now.. but I think it's a good idea and a step in the right direction.

    In fact, I believe there have been several pieces of software that do this already.. like Oil Change or something like that. Just because it's from MS doesn't mean it's bad...
  • Because between one month and the next, some cracker could have found a new exploit.
  • The cracker exploited two holes, one in the CGI script, the other having to do with cron. Red Hat had a security update for cron that would have plugged the hole that the cracker exploited.
  • Regardless of whether someone would want to apply 21 security-related fixes, this is not a valid point. The fact is that any even remotely professional system administrator will ensure, especially on a main web server, that any and all applicable security patches are installed. What all this really means is that the people responsible for this "contest" didn't feel like being professional administrators - which, not surprisingly, they aren't. I'm just wondering where they found somebody who was willing to deal with installing the 5 NT service packs. Talk about something I wouldn't want to do. But it's part of the job to keep systems up to date, whatever those systems might be.

    Now, back to practicality: Is it really that hard to do rpm -Uvh *.rpm? I just can't imagine this being difficult in any way whatever. Except for someone wishing to slant the outcome in a particular direction. Anyone who's ever been within 100 meters of a unix system knows better.

  • If you read his page carefully, you would have noticed he used a known exploit in the cron daemon - an exploit that was fixed by one of the RedHat updates.
    Everything below this line is a lie
  • The following is a drop-in replacement for the suexec.c that comes with Apache. It is a bit less tight about permissions (I want to be able to execute code under different UIDs), but executes the CGI within a chrooted environment (so that the UIDs cannot cause harm). Please have a look at the code and tell me what you think about it.


    /*
    * suexec.c -- "Wrapper" support program for suEXEC behaviour for Apache
    *
    *********************************************************************
    *
    * NOTE! : DO NOT edit this code!!! Unless you know what you are doing,
    * editing this code might open up your system in unexpected
    * ways to would-be crackers. Every precaution has been taken
    * to make this code as safe as possible; alter it at your own
    * risk.
    *
    *********************************************************************
    *
    *
    */

    #include "ap_config.h"
    /* The angle-bracket header names below were eaten by the HTML in
    * the original post; restored from the stock Apache 1.3 suexec.c. */
    #include <sys/param.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdarg.h>
    #include <time.h>

    #include <pwd.h>
    #include <grp.h>

    #include "suexec.h"
    #undef LOG_EXEC

    /*
    *********************************************************************
    * There is no initgroups() in QNX, so I believe this is safe :-)
    * Use cc -osuexec -3 -O -mf -DQNX suexec.c to compile.
    *
    * May 17, 1997.
    * Igor N. Kovalenko -- infoh@mail.wplus.net
    *********************************************************************
    */

    #if defined(NEED_INITGROUPS)
    int initgroups(const char *name, gid_t basegid)
    {
    /* QNX and MPE do not appear to support supplementary groups. */
    return 0;
    }
    #endif

    #if defined(PATH_MAX)
    #define AP_MAXPATH PATH_MAX
    #elif defined(MAXPATHLEN)
    #define AP_MAXPATH MAXPATHLEN
    #else
    #define AP_MAXPATH 8192
    #endif

    #define AP_ENVBUF 256

    extern char **environ;
    static FILE *log = NULL;

    char *safe_env_lst[] =
    {
    "AUTH_TYPE",
    "CONTENT_LENGTH",
    "CONTENT_TYPE",
    "DATE_GMT",
    "DATE_LOCAL",
    "DOCUMENT_NAME",
    "DOCUMENT_PATH_INFO",
    "DOCUMENT_ROOT",
    "DOCUMENT_URI",
    "FILEPATH_INFO",
    "GATEWAY_INTERFACE",
    "LAST_MODIFIED",
    "PATH_INFO",
    "PATH_TRANSLATED",
    "QUERY_STRING",
    "QUERY_STRING_UNESCAPED",
    "REMOTE_ADDR",
    "REMOTE_HOST",
    "REMOTE_IDENT",
    "REMOTE_PORT",
    "REMOTE_USER",
    "REDIRECT_QUERY_STRING",
    "REDIRECT_STATUS",
    "REDIRECT_URL",
    "REQUEST_METHOD",
    "REQUEST_URI",
    "SCRIPT_FILENAME",
    "SCRIPT_NAME",
    "SCRIPT_URI",
    "SCRIPT_URL",
    "SERVER_ADMIN",
    "SERVER_NAME",
    "SERVER_ADDR",
    "SERVER_PORT",
    "SERVER_PROTOCOL",
    "SERVER_SOFTWARE",
    "UNIQUE_ID",
    "USER_NAME",
    "TZ",
    NULL
    };


    static void err_output(const char *fmt, va_list ap)
    {
    #ifdef LOG_EXEC
    time_t timevar;
    struct tm *lt;

    if (!log) {
    if ((log = fopen(LOG_EXEC, "a")) == NULL) {
    fprintf(stderr, "failed to open log file\n");
    perror("fopen");
    exit(1);
    }
    }

    time(&timevar);
    lt = localtime(&timevar);

    fprintf(log, "[%d-%.2d-%.2d %.2d:%.2d:%.2d]: ",
    lt->tm_year + 1900, lt->tm_mon + 1, lt->tm_mday,
    lt->tm_hour, lt->tm_min, lt->tm_sec);

    vfprintf(log, fmt, ap);

    fflush(log);
    #endif /* LOG_EXEC */
    return;
    }

    static void log_err(const char *fmt,...)
    {
    #ifdef LOG_EXEC
    va_list ap;

    va_start(ap, fmt);
    err_output(fmt, ap);
    va_end(ap);
    #endif /* LOG_EXEC */
    return;
    }

    static void clean_env(char *cwd,int len)
    {
    char pathbuf[512];
    char stripbuf[1024];
    char **cleanenv;
    char **ep;
    int cidx = 0;
    int idx;


    if ((cleanenv = (char **) calloc(AP_ENVBUF, sizeof(char *))) == NULL) {
    log_err("failed to malloc memory for environment\n");
    exit(120);
    }

    sprintf(pathbuf, "PATH=%s", SAFE_PATH);
    cleanenv[cidx] = strdup(pathbuf);
    cidx++;

    for (ep = environ; *ep && cidx < AP_ENVBUF-1; ep++) {
    if (!strncmp(*ep, "HTTP_", 5)) {
    cleanenv[cidx] = *ep;
    cidx++;
    }
    else {
    for (idx = 0; safe_env_lst[idx]; idx++) {
    if (!strncmp(*ep, safe_env_lst[idx], strlen(safe_env_lst[idx]))) {
    cleanenv[cidx] = *ep;
    cidx++;
    break;
    }
    }
    }
    }

    cleanenv[cidx] = NULL;
    environ = cleanenv;
    }

    /*
    * The HTML in the original post swallowed everything between the
    * "cidx <" comparison above and the strstr() call below. The rest
    * of clean_env() was restored from the stock Apache suexec.c, but
    * the opening of main() (the argument checks and the getpwnam()/
    * getgrnam() lookups that set pw, uid, cmd and newroot) is lost.
    */
    p = strstr(newroot, "/.");
    if ( newroot[0] != '/' || p == NULL ) {
    log_err("$home (%s) has no /. for uid= %ld\n", pw->pw_dir, uid);
    exit(102);
    }
    *p = 0x00;

    if (getcwd(cwd, AP_MAXPATH) == NULL) {
    log_err("cannot get current working directory\n");
    exit(111);
    }

    uid = pw->pw_uid;
    gid = pw->pw_gid;
    actual_uname = strdup(pw->pw_name);
    target_homedir = strdup(pw->pw_dir);

    /*
    * Log the transaction here to be sure we have an open log
    * before we setuid().
    */
    log_err("uid: (%s/%s) gid: (%s/%s) cmd: %s\n",
    target_uname, actual_uname,
    target_gname, actual_gname,
    cmd);

    /*
    * Error out if attempt is made to execute as root or as
    * a UID less than UID_MIN. Tsk tsk.
    */
    if ((uid == 0) || (uid < UID_MIN)) {
    log_err("cannot run as forbidden uid (%d/%s)\n", uid, cmd);
    exit(107);
    }

    /*
    * Error out if attempt is made to execute as root group
    * or as a GID less than GID_MIN. Tsk tsk.
    */
    if ((gid == 0) || (gid < GID_MIN)) {
    log_err("cannot run as forbidden gid (%d/%s)\n", gid, cmd);
    exit(108);
    }

    /*
    * Change UID/GID here so that the following tests work over NFS.
    *
    * Initialize the group access list for the target user,
    * and setgid() to the target group. If unsuccessful, error out.
    */
    if (((setgid(gid)) != 0) || (initgroups(actual_uname, gid) != 0)) {
    log_err("failed to setgid (%ld: %s)\n", gid, cmd);
    exit(109);
    }

    /* now we chroot */
    if ( chdir(newroot)!=0 ) {
    log_err("cannot chdir to newroot directory %s\n",newroot);
    exit(112);
    }
    if ( chroot(newroot) != 0 ) {
    log_err("failed to chroot to %s\n",newroot);
    exit(113);
    }

    if ( strlen(cwd) < strlen(newroot) ) {
    fprintf(stderr,"chroot not below docroot cwd=%s [%d] newroot=%s [%d] \n!",cwd,(int)strlen(cwd),newroot,(int)strlen(newroot));
    exit(114);
    }

    if ( chdir(cwd+strlen(newroot)) != 0 ) {
    log_err("warning: cannot chdir after chroot %s | %s \n",cwd,newroot);
    }


    /*
    * setuid() to the target user. Error out on fail.
    */
    if ((setuid(uid)) != 0) {
    log_err("failed to setuid (%ld: %s)\n", uid, cmd);
    exit(110);
    }

    clean_env(cwd,strlen(newroot));

    /*
    * Be sure to close the log file so the CGI can't
    * mess with it. If the exec fails, it will be reopened
    * automatically when log_err is called. Note that the log
    * might not actually be open if LOG_EXEC isn't defined.
    * However, the "log" cell isn't ifdef'd so let's be defensive
    * and assume someone might have done something with it
    * outside an ifdef'd LOG_EXEC block.
    */
    if (log != NULL) {
    fclose(log);
    log = NULL;
    }

    /*
    * Execute the command, replacing our image with its own.
    */
    #ifdef NEED_HASHBANG_EMUL
    /* We need the #! emulation when we want to execute scripts */
    {
    extern char **environ;

    ap_execve(cmd, &argv[3], environ);
    }
    #else /*NEED_HASHBANG_EMUL*/
    execv(cmd, &argv[3]);
    #endif /*NEED_HASHBANG_EMUL*/

    /*
    * (I can't help myself...sorry.)
    *
    * Uh oh. Still here. Where's the kaboom? There was supposed to be an
    * EARTH-shattering kaboom!
    *
    * Oh well, log the failure and error out.
    */
    log_err("(%d)%s: exec failed (%s)\n", errno, strerror(errno), cmd);
    exit(255);
    }
  • Such a beast already exists. It's called (drum-roll please...) MandrakeUpdate!

    --Threed
  • by ftc ( 943 )
    I run debian. Slink (stable) on all the production machines, and potato (unstable) on two "testbed" ones.

    I like how I can run "apt-get update; apt-get upgrade" and have the latest security updates I need automatically downloaded, installed and configured on my system.
    Or, if I want to review the changes and decide for each package individually if I want to upgrade them or not, I run the "select" method in dselect first.

    I can even get told within minutes of a new critical patch being posted by subscribing to the debian-announce mailing list.

    There are a couple things that I really like about it:

    1) The advisories sent out to the mailing list contain enough information to know what problem the updates are fixing. The changelog files in the packages (which I *can* read before installing the package, if I unpack it somewhere else) contain a list of all changes. And if this is not enough for me, I can go and get the source package, and diff it to the previous version.

    2) Debian potato will contain the apt-zip package, a set of scripts that simplify the process of downloading updates to removable media (e.g. zip drives, though you could probably also write them to a CD-R if you needed or wanted to). I can apply them to as many machines as I want to by inserting the medium, mounting it and typing "dpkg -i /mountpoint/*.deb"

    3) dselect, console-apt and gnome-apt, as well as kpackage, are applications that give me a list (sortable by anything) of items I have installed, so I can check off the ones I want to uninstall.

    I think everyone agrees that individual patches would be better since it allows ultimate user control. And the way they are organized in the Debian system is really great.
  • "apt-get update ; apt-get upgrade". I've always got the latest security fixes, and they never render my system unstable or completely unusable.

    --
  • Try Debian, esp. Potato, if you really want all the latest updates. Slack doesn't really have a "proper" package management system. Debian is continuing to develop the 'apt' system, which not only provides for package management, but has a mechanism for fetching the latest updates from Debian's package archives. All you have to do is

    apt-get update ; apt-get upgrade

    then answer some questions to get everything updated to the latest (at least, everything that's installed as a package - and Debian has a package for most everything out there).

    If you really need a more stable system, go for Slink (aka Debian v2.1, Potato is being actively developed), but for all the latest updates, go with Potato.
  • What Linux needs is some infrastructure around RPMs (or DEBs): a way to access a repository of RPMs and automatically download (asking first would be a good idea) any dependencies. This would allow one to create an RPM with nothing in it but dependencies, so installing that one RPM would download and install all the other RPMs referred to in it.

    Sounds something like Debian. apt-get is your friend, and an ncurses frontend is being developed as well. (Don't know about the status of the gnome apt frontend tho.)
  • dselect is slowly being tossed away in favor of the new 'apt' (advanced package tool) system that's being developed. And that's just fine by me - I hated dselect. It screwed up my first-ever Debian (slink) install. I was brave though, and went back and used the (still infantile) apt system instead, and it worked MUCH better. apt-get makes updating an install easy as pie, and console-apt is developing nicely (it has some bugs, yes, but it's quite usable even so).
  • Huh? Have you ever applied an NT service pack? Just click on the .exe, reboot, and that's it.

    If it were only that simple...

    That works fine on your NT workstation that only has user applications installed. If you have SQL Server, Site Server, SMS, or any other server package that does anything useful, you have to install service packs in a special order or risk breaking all sorts of strange dependencies. And it's not just sweeping service packs that need to be installed; most SPs require a myriad of smaller fixes (MDAC etc.) in order to work without bringing things crashing down around you.

    To top it all off, you'll need to make several registry changes, IIS configuration changes (I don't believe there's ANY service pack as of yet that fixes the vulnerabilities in the .HTX script mapping in IIS), etc. etc. ad nauseam before your system is safe.

    Bottom line: without spending a decent amount of time and energy on either platform, you're not going to have a secure box. I completely agree that your average corporate group would fail to do this under either platform, since your average corporate machine is a festering bag of compromises waiting to happen.

    Why was a third-party script installed on the Linux box to begin with? It's not like they took advantage of anything intrinsic to Linux. It was a Perl script that just as easily could have lived on the NT box.

  • Of course you know what ZD will say don't you? If NT is so difficult to secure, then why didn't anyone break into it during our test?

    --

  • The Red Hat FTP install doesn't install updates, and there is no option to do so. It is reasonably easy to set up an FTP server so that it does, but it takes a bit of tweaking. See RedHat CD mini-HOWTO [linuxdoc.org]
    --
  • Actually, I have not. My Windows virus scanners seem to think that BO2k is a virus. Even so, I would suspect that the possibilities BO2k opens up are not nearly as comprehensive as what a systems administrator could do with root access and Perl.

    How scriptable is BO2k? Chances are it is nowhere near as scriptable as Linux is right out of the box.
  • Actually, most of the time that *nix is deployed as a desktop solution the employee does _not_ have root access to the machine. In other words, you probably have less power on your *nix box than your Windows machine.

    If you were using something like Debian Linux (or any distro with a decent packaging system) it would be pretty trivial to implement something very SMS-like. The administrator could _easily_ see what software was installed on your machine, what hardware you were running (I have seen software that makes very pretty text files of the hardware), who had logged on recently, etc.

    Heck, they could even archive all of the software that you had run, and other such esoterics like what websites you have visited.

    If you have root on someone's desktop Linux box, you _own_ them. This is not necessarily true of Windows machines.
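    A rough sketch of that SMS-like inventory idea - hostnames here are made up, and the remote query is stubbed out with an echo:

```shell
#!/bin/sh
HOSTS="desk1 desk2"   # placeholder desktop names

# Collect each machine's package list into a per-host report.
# The real version would be something like:
#   ssh "$h" 'dpkg -l' > "pkgs.$h"     (or 'rpm -qa' on Red Hat)
inventory() {
    for h in $HOSTS; do
        echo "collect package list from $h"
    done
}
inventory
```

    From there, diffing yesterday's report against today's tells you exactly what changed on every desktop.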
  • service packs are *easy* aren't they?

    so easy, you have to reinstall them if you add a component from the NT CD.

    And so easy when some application (usually MS) takes it upon itself to upgrade files that are also upgraded by a SP. Which version is the correct one? The one from the SP or the one the application installed? And maybe the app can't work with the version from the SP? So how do you install the SP? Or reinstall it if you've changed some vital config, eg changed the NIC?

    And this kind of thing has infinite permutations, leading to hours and hours of NT admin fun. And hey, if it wasn't for NT, admins would never be able to claim overtime! Damn those Unix boxes that just purr away for months and months without a glitch. How can you ever earn money from them?

    Yes, service packs... gotta love them. You really do. {God, Allah, preferred deity} bless NT!

  • Most enterprises want to put on fixes that solve a problem. Sure, it is nice to apply one patch set, but ultimately the idea with patches is to fix/improve/add something. A good sysadmin will always keep current on security patches as a priority, even if they have to be applied one at a time. This is especially true on systems that are attached directly to the internet.
  • PostgreSQL's database format changes only between some (usually major) version numbers.

    It would be very bad indeed if RH released an update package that broke data when applied. I haven't seen that yet. They're usually (probably for the reasons you state) very careful to warn you of any implications an update might have. Usually there are no implications except for the fix of the hole.
  • Ever tried updating more than a few machines? Well, that's why you don't want to click it. You could, of course. I'm sure that tools like gRPM (GNOME RPM) and others will, in time if not already, give you the option of running the equivalent of --freshen. You can already point-and-click upgrade packages, which is sufficient for upgrading packages you know need upgrading.

    I don't know about the 120 exploits you mention. If you look at the updates directory for RH6, there are far fewer than 120 packages to upgrade. So there might be 120 holes, but they're all fixed by applying a far smaller number of upgrades (so either 120 is a little optimistic on someone's part, or some packages just have a lot of holes - and I doubt that so many packages have so many holes).

    Really, Red Hat has the errata page, and you can already point'n'click upgrade. In the security updates that Red Hat releases, they even give you an entire command line you can just cut'n'paste into a root shell to have the upgrade retrieved from the 'net and applied. I'm having a hard time seeing what you think the problem is.
  • I can think of:
    *) ssh (using .shosts for root)
    *) rpm --freshen ftp://
    and eventually
    *) at

    There you have your shrink-wrapped enterprise management patch package distribution scheduling parallel system [feel free to add more buzzwords]

    Really, with rpm (and I'm sure with dpkg too) it's really _so_ easy. You need a very small amount of imagination, and then you have a management system that you can customize in any way you please (hell, you just wrote the main routine of the application yourself, even though it's a one-liner).
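    For instance, the whole "management system" might amount to something like this (hostnames and the mirror URL are made up, and the ssh calls are only echoed so you can eyeball the plan first):

```shell
#!/bin/sh
HOSTS="web1 web2 db1"                              # placeholder host list
MIRROR="ftp://updates.example.com/redhat/6.0/i386" # placeholder mirror

# Echo the plan instead of running it; drop the echo once the
# root .shosts trust mentioned above is actually in place.
push_patches() {
    for h in $HOSTS; do
        echo ssh "$h" "rpm -Fvh $MIRROR/*.rpm"
    done
}
push_patches
```

    Wrap the same loop in an at(1) job and you have the "scheduling" buzzword covered too.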

  • If they had bothered to maximize security, we could have found a NEW flaw, thereby actually accomplishing something.
  • This is because I really am still worried that Microsoft is databasing stuff about me and what is on my server. I would rather just "one way" FTP the stuff I need and then install it on my machine, much like the Red Hat errata site works right now.

    At work I have to use NT on my desktop. I ran the task manager and decided to try and kill some tasks. Heh, I could not kill the smss.exe. Gee, I wonder what that is for?

    I bet many companies fear the penguin because they will lose the ability to snoop on you, much the way WinozeUpdate probably does as well.

    Ken
  • This is looking less and less like a test and more and more like an ambush. Still, I like to keep in mind the much repeated advice "never ascribe to malice what can be adequately explained by incompetence".

    I found the page you linked to very informative. I had no idea security-conscious NT admins worked so hard.
    --

  • I'm too lazy to read the article. I did read the hacker's (yes, he is a hacker) how-I-did-it piece and it didn't tell me *why* that CGI script was there. What was it doing there? Why was it installed?
    --
  • Yes, it should matter - I've been around PROFESSIONAL sysadmins in a mixed Sun/Windows shop. These guys applied EVERY fix to the Sun OS as it came out, or as they installed a new system. PERIOD! These guys set up the community - AGAIN - and they used an application which indeed had holes in it. Probably much to the surprise of the guys that wrote the religious (holey) software. The OS should have been the final line of defense, with no known ways of gaining root privilege. The second part of this proposition was the sysadmin's responsibility. It just isn't THAT hard!
  • Autorpm has been around for a while, which can also check for updates and install them. The major differences with RH's (that I know of, I don't have it yet) are:

    * A priority FTP server for registered users
    * It comes with RH standard. Not everyone knows about autorpm.
  • I work on one of the largest Unix sites in my country - 200 machines - and I can tell you, if I had to apply 21 individual patches to 200 machines, I would be ready to punch someone.

    These machines are mission critical which means that the only time you are allowed to apply patches is outside business hours which for these boxes is between 9pm and 3am. That's a lot of late nights. Sure a single large patch still has to install the same amount, but you could start patching the system and then move onto another one which means you could do several in parallel. With individual patches, you would have to keep coming back to each system to start the next patch.

    On top of this, due to the mission-critical nature of the boxes (they are used nationwide), we have extensive change management controls. Any patch that we apply has to have a corresponding backout procedure. It is much easier to consider a patch as one big patch than as 21 individual patches. Sure, us tech people know that they are really one and the same. But try telling the change management people that.

    When you are dealing with a small site, individual patches are probably preferable - I would prefer them myself.

    But on an enterprise level of any decent size, there is no way I want to have to deal with individual patches.

    This is not intended as an insult to those who are contributing to this topic, but how many of you guys actually work in the enterprise area? Or are the majority of you making comments based on what you think happens in the enterprise arena?
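    The "start one box, then move on to the next" scheme the parent describes can be sketched with plain background jobs - hostnames here are placeholders and the actual patch step is stubbed out:

```shell
#!/bin/sh
# patch_host is a stub; the real body would be something like
#   ssh "$1" 'rpm -Fvh /patches/*.rpm'
patch_host() {
    echo "patching $1"
}

for h in app1 app2 app3 app4; do
    patch_host "$h" &    # kick them all off in parallel
done
wait                     # return only when every background job is done
```

    With the jobs running in parallel, the 9pm-3am maintenance window stops scaling linearly with the number of machines.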

  • Yes, I totally agree with the previous replies - I too would use scripts.

    In my haste to post my reply I overlooked the most obvious way to handle multiple patches - yes, I look stupid.

    I should have known better, because I just applied 5 patches to the machines two weeks ago - hence my post on this topic - and yes, I used scripts then.

    I do stand by my argument on red tape though.

  • Heh heh, regardless of whether you have one or multiple patches, applying them to WinNT is painful! :)

    I too have been in that situation. The GUI nirvana kinda falls down, doesn't it, when you have to push buttons a-l-l t-h-e t-i-m-e!! I sympathise with you.

    My post was more about comparing a single patch for Unix to multiple patches for unix.

  • If you think that "good" system administration is:

    1. Download a great big service pack
    2. Don't bother to read any release notes or even to think why you might be applying the fixes
    3. Install and reboot your server
    4. Place a Big Fat Tick (Check?) mark by the "I've installed the Service Pack on this server" item on your job sheet
    5. Repeat for another n servers

    Then they might have a point.

    If you think that good system administration involves: Understanding your system; Understanding the problem; Understanding the solution, then of course you don't want to blindly install hundreds of megabytes of new code...

    It really is a question of mindset. Given a handful of servers it is far easier to do

    ftp some site
    cd update directory
    mget *.rpm
    quit
    rpm -Uvh *.rpm

    And then telnet to another server and repeat the same. Without rebooting your machine.

    [That's if you really wanted to of course, and weren't that bothered in working out what the impact of each RPM is].

    Mmm.. You're missing something here when you say "..instead of having to track down 21 security fixes from different sources, which in some cases might require recompiling. That requires more work and a knowledgeable systems administrator."

    ZD were testing RedHat Linux. This is a distribution. This means that it is put together by (the evidence suggests) some knowledgeable people. So you DO have one trusted source, and one set of files. This is why it is worth paying RedHat for their distribution - because it relieves you of the burden (but not the responsibility) of continually monitoring and updating your system.

    It is far, far, far easier to maintain a few RH systems (especially remotely) than it is the same number of NT servers.

  • One of the strengths of UNIX is the availability of various scripting languages/facilities which can be used to automate things - a System Administrator would probably not type in the RPM command on all 2000 machines, but would automate it.

    Some keywords to help you: "perl", "cron", "bash", "at", "expect", "init".

    Try searching for these either using the web or the "man" command.
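    As one concrete (and purely illustrative - the paths are made up) example of the "cron" keyword, a single crontab entry can apply whatever update RPMs have been staged during the day, off-hours:

```
# crontab fragment: at 9pm each night, install any RPMs dropped
# into a staging directory, logging the result
0 21 * * *  cd /var/spool/updates && rpm -Uvh *.rpm >> /var/log/updates.log 2>&1
```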

  • If your source is correct, then this was a sysadmin test NOT a security test. If it were a security test the patches would have been applied.

    As to the "real world" conditions, this is BS. If they want to test real world conditions, get a statistically significant sample of sys admins, give them all the same hardware and software, and see how many boxes are secure in two weeks.

    Either the people who ran these tests had a preconceived result or they are complete idiots (or both).

  • I have, yet this is currently experimental. Also it is small enough to be distributed as a single piece, enabling you to read it without tools.

    Something like this would have been able to contain the ZDnet script in a tight environment, probably making the exploit much harder.
  • This is where the centralised method of distribution that FreeBSD et al use really wins. You just set up CVSup to run regularly and run "make world" when you need to actually install the patches. Strictly a hands off operation.
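    The hands-off cycle just described amounts to a couple of commands, shown here as a dry run (the supfile path is illustrative; drop the echo prefix to actually run it):

```shell
#!/bin/sh
# Sketch of the FreeBSD CVSup + "make world" cycle: sync the source
# tree, then rebuild. RUN=echo makes every step a dry run that just
# prints the command it would execute.
RUN=echo
$RUN cvsup -g -L 2 /usr/local/etc/stable-supfile
$RUN make -C /usr/src world
```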
  • Just click on the .exe, reboot, and that's it.

    Run dselect, select install, don't bother to reboot. Or, download all of the rpms, and run rpm over all of them at once. OR, download the latest service pack, and decide if you prefer a security hole in file sharing, or a broken print service, and who knows what else.

  • On top of this, due to the mission-critical nature of the boxes (they are used nationwide), we have extensive change management controls. Any patch that we apply would have to have a corresponding backout procedure. It is much easier to treat the fixes as one big patch than as 21 individual patches. Sure, we tech people know that they are really one and the same. But try telling the change management people that.
    I don't know how you do things in your neck of the woods, but to change management at my company, a new installation would be considered 1 change. i.e., New webserver with all errata packages applied. Now, during the production, you tend to get them in more manageable chunks - usually 1 at a time.

    Speaking of enterprise environments, though, I think it would be unfair to leave out Solaris 7. It has 22 security-related patches as listed here: ftp://sunsolve6.Sun.COM/pub/patches/Solaris7.PatchReport [sun.com] Do you run Solaris at your site? If so, did you install all of those? Here, we've got scripts that install those patches on the Solaris boxes. Of course, change management is involved, too.

    Sure, it would be nice if Red Hat paid more attention to security and quality control, but that's why I tend to stick with Debian & FreeBSD when feasible. :)

  • "enterprise businesses would not want to apply 21 individual fixes"

    The usual "manager vs. IT dude" problem, I suppose:

    The average enterprise manager could probably easily be persuaded to order their IT guys not to use Linux for that reason. They always scare easily about things that are not their area of competence.

    If the IT guy takes the OS decision himself, it probably doesn't matter whether it is one fix or many. If he already selected Linux, then he probably also likes the power and control it gives him.
  • No serious enterprise company should allow any automated tool to install any software without human intervention. While I am not acquainted with the security precautions in autorpm, if any, placing that much trust in a network-provided resource is the sort of error that gets system administrators fired for incompetence.
    That aside, I prefer having several small updates, which allows me a finer granularity of which patches I install. Take for example a Sun patch cluster. Each patch is in a subdirectory all its own, and the order in which they are to be installed is listed in a single text file. While the current recommended patches are available as a single tarfile, there is a fine level of control available.
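    Driving that order file from a script is trivial. A sketch (the cluster contents here are fabricated, patch IDs included, and the echo keeps patchadd a dry run):

```shell
#!/bin/sh
# Sketch of installing a Sun-style patch cluster in the sequence
# listed by its order file. A toy cluster is created so the loop
# has something to read; real use would point at the unpacked tar.
CLUSTER=$(mktemp -d)
printf '106541-08\n106327-09\n' > "$CLUSTER/patch_order"

CMDS=$(while read p; do
    echo "patchadd $CLUSTER/$p"       # dry run; drop echo to install
done < "$CLUSTER/patch_order")
echo "$CMDS"
```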
  • one 'trusted' source instead of having to track down 21 security fixes from different sources, which in some cases might require recompiling.

    Sigh. You don't have to track down security fixes from different sources, and you don't have to recompile anything. Just go to Red Hat's updates page, download everything and do rpm -Uv *.rpm

  • I guess the main reason why GNU/Linux systems ship numerous small updates, whereas NT has huge single service packs, is that any normal program (a package) under GNU/Linux consists of a well-defined set of files. None of which are *system* libraries (DLLs).

    On Windows a typical application ships its own version of some of the *system* DLLs, thereby rendering the whole platform insecure if one of its libraries has a flaw.

    Thus the need for a huge service pack on NT. You need to re-ship updated versions of all libraries, and you need to re-install the service pack after each installation of a (seemingly unrelated) program, because NT DLLs are touched by *applications*.

    Because of open source, we can re-compile an application that doesn't work with the system libraries we may have, thereby avoiding having to overwrite system libraries whenever we install an application. Therefore we can have small packages that update nothing but the problem. And therefore GNU/Linux will, unlike some other OS, have a massive share of the total server installations for many years to come.
  • That is, if rpm --freshen * is too hard to type, they shouldn't be running computers at all.

    Hire someone with a clue, and go back to writing articles.

    Seriously though, if you tried applying NT service packs, and tried rpm --freshen, you know who's got the lead (and for those who haven't tried, here's a hint: it's not the redmond guys).

    With NT, you apply one huge service-pack that (somewhat) fixes the problems known at the time of the release of the service pack. Whenever you install a new piece of software, you have to re-install the service pack if you want to be sure it's effective.

    With rpm you do the --freshen trick, once. If you install another piece of software, well fine, no worries. If another fix becomes available, just get them all and do --freshen, or get the one fix and --freshen. It's as simple as it gets.

    I think it's much too common for clueless people to assume that it's hard to maintain a system they don't know (and haven't even tried to grasp), and assuming that the system with the most aggressive PR backing is necessarily much easier.

    The only reason why we don't see more remote attacks on NT is because ``networking'' is somewhat alien to NT. Networking has always been an integral part of UN*X and Linux, so naturally a buggy networked application is almost bound to compromise the system in a cracker-friendly way.

    Consider the incredible amount of local attacks on NT being posted weekly (almost daily) on Bugtraq, and you see why NT people should be really happy that NT is not a network operating system.
  • True, most manager-people would probably prefer a single huge package that fixed everything. It'd be a misnomer, since huge fixes never fix everything (any more than lots of little fixes do, though those can achieve better granularity). But the people who do the work wouldn't need to apply 21 different updates, unless they were running all of the packages needing upgrades -- that's part of why upgrades published on a per-package basis works out well -- if you need to upgrade crond, the fix is about 500k and just fixes cron. If all you're running is crond, then 500k later it's fixed. A typical MS Service Pack is huge, and contains a ton of things which may or may not have needed replacing. Moreover, because the MS service packs are so wide-ranging, they require a greater quantity of more difficult testing to validate that it works. With an apache update, say, you know what to test.

    However, in deference to the long expertise of corporate IT managers, I hereby propose the following Industry Standard for Manageable Updates. Call it the RedHat Service Pack specification. I expect to see it hailed as a wonder of technological innovation and a great leap forward for the Linux community in providing security management:

    Packaging (this part is proprietary, you don't need to even see it. avert your eyes):

    ls *rpm | sed 's/^/rpm -Uvh /' > UPDATE.sh

    tar cvf RH-SP3.TAR UPDATE.sh *.rpm

    Installing:

    #!/bin/sh

    # install_servicepack.sh
    mkdir /tmp/sp-$0-$$
    tar -C /tmp/sp-$0-$$ -xf $0
    cd /tmp/sp-$0-$$
    [ -f UPDATE.sh ] && sh ./UPDATE.sh

    I expect news of this great manageability innovation to be trumpeted throughout the tech news industry. It should be referenced in the sales pages for Maximum RPM, but may require a separate publication of its own to explain this great technology to the world, especially the technology press.

  • Yes, there is -- they're calling it the RedHat Update Agent, and its main job seems to be to perform RPM upgrades automatically as they become available. It's hardly new, and if ZDnet had done any research (they read the HOWTOs, and Apache's security docs, and ignored the rest), they might have found it. AutoRPM [kaybee.org] has been in common usage for quite a while now -- it's a nifty little app that picks up the updates from FTP, NFS mount, etc., checks the PGP signatures, and installs the upgrades, then notifies you that it happened so you can check its work. Closes the vulnerability gap a bit.
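    The verify-then-install cycle a tool like that automates is simple to picture. A rough sketch - verify() is a stub standing in for a real signature check such as "rpm --checksig", so the control flow can run without a live package spool:

```shell
#!/bin/sh
# Sketch of an AutoRPM-style loop: only packages that pass
# verification get (dry-run) installed. The stub verify() just
# checks the filename; real use would check the PGP signature.
verify() {
    case $1 in *.rpm) return 0 ;; *) return 1 ;; esac
}

install_updates() {
    for p in "$@"; do
        if verify "$p"; then
            echo "rpm -Uvh $p"                # dry run
        else
            echo "skipped (unverified): $p"
        fi
    done
}

install_updates bind-8.2.2-1.i386.rpm README.txt
```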
  • Agreed -- recall ZDNet's stated rationale (or rationalization) for not installing any of the updates: "The hackpcweek.com test was not meant to be easy but was meant to be practical and to reflect the habits of corporate IT."

    Which presumably doesn't mean that they believe corporate IT to be a bunch of ignorant layabouts, but if I were a corporate IT person, and a reader of their publication, and also in the slightest bit competent with Linux, I'd be insulted. Perhaps they don't grasp the significance of a discrete package upgrade -- something MS has never really gone for. Root compromise hole in crond? Well, upgrade crond -- redhat publishes the bloody rpm -Uvh ... command to do that in every security advisory. It's a different methodology -- we usually have one upgrade package per main package -- and that, in the UNIX scheme of things, makes vastly more sense than clobbering all our package management systems (far superior to that offered by poor NT) in favor of what they call "[making] fixes available in a more manageable manner."

    ZD didn't do enough research while orchestrating this PR stunt, I suspect. Bring on the derision. ):

  • In principle, this sounds like a good thing. In practice, enabling Windows Update opens a big security hole:

    ActiveX controls can be marked "safe for scripting," meaning that a script on any HTML page can activate them without requesting permission or giving notification. And the controls turn out to have holes. So far, Microsoft has identified two buffer overruns and one case of improper filesystem access among Microsoft-supplied, marked-safe controls (Security Bulletins MS99-033, -037, and -040).

    ...

    For now, Microsoft recommends turning off ActiveScripting. Unfortunately, that breaks a good many Web sites, including most of Microsoft's. A less draconian solution suggested to me by a Microsoft developer is to deny permission to run "safe for scripting" controls. But even this breaks a lot of sites, including Windows Update, which is most Windows 98 users' best hope of installing security patches.

    (from a mail to the RISKS mailing list by Steve Wildstrom ).

    Debian's system doesn't rely on this sort of stuff - you have to actively ask for packages. However, it still relies on your trusting the FTP server you get them from. Official packages will be signed - but do you know that all Debian developers with the key will keep it safe?

  • OK, let's just see how difficult Linux's 21 separate updates are to install - (assuming you're stupid enough to want to wait for 21 updates to accumulate):

    $ rpm -Uvh ftp://ftp.mydistribution.com/pub/updates/*.rpm

    Now that was such a lot of work wasn't it?

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Because of open source, we can re-compile an application that doesn't work with the system libraries we may have, thereby avoiding having to overwrite system libraries whenever we install an application. Therefore we can have small packages that update nothing but the problem.

    Agreed, this is key. Perhaps even more important though is the ability to statically link, so that binary releases can be built, a la Netscape, with everything version-independent (except for kernel dependencies which are few & far between thanks to the efforts of people like Torvalds and Cox). So you can download the binary app and expect to have it work, as it nearly always does when built this way [ed note: and when declared stable ;-)].

    Another factor of crucial importance is for this linking process to be carried out by anyone who wants to do it, i.e., access to the source code is important just as you say, but not necessarily for the same reason. Also consider - it's possible to re-link a dynamically linked app into a statically linked app using a linkage editor... I don't know if Linux has such utilities because I'm a relative newcomer to these development tools. But if they're not there, we need them badly.

    And therefore GNU/Linux will, unlike some other OS, have a massive share of the total server installations for many years to come.

    (a) That and 1,000,000 other reasons

    (b) It already does. (Check the situation as of last spring [leb.net])
  • I'm too lazy to read the article. I did read the hacker's (yes, he is a hacker) how-I-did-it piece and it didn't tell me *why* that CGI script was there. What was it doing there? Why was it installed?

    In order to simulate a real web server, PC Week Labs had to have it exist for a reason. So they installed a Classified Ads application. And it had a hole.

    If you read the page [hackpcweek.com] where they described the configuration changes they made, you'll see that they made more changes to NT than they comparatively made to Linux. As in, it was biased a lot more than just not installing all the patches on Linux. They made registry changes. *By* hand, I presume. They moved some of the admin tools to a different location on NT, but didn't move the corresponding tools on Linux.

    They were comparing apples to oranges anyways. They used a CGI application on Linux and a scripted application (ASP) on NT. Come on, to be fair they should have used a scripted application on Linux also. They *know* what php is; they used it for the forums [hackpcweek.com].

    -Brent
    --
  • But as it turns out there is a way to get all-inclusive patches for Linux. Install a new release. They come out every few months, much more frequently than Microsoft service packs, and generally include all previous patches. The upgrade process is fairly similar in difficulty to applying an NT service pack. Interestingly this isn't mentioned.

    Six months for Red Hat to be specific. Probably a lot faster than MS releases service packs. That's basically what RH 6.1 is, a service pack in MS terms for 6.0. There is only one difference. Red Hat replaces their old version with the new version. If I buy a copy of NT today, would I still have to install SP5? I imagine so.

    Still though, I wouldn't want to have to wait until the next version was released to fix security holes. Not even on NT.

    -Brent
    --
  • > Do they have a point?

    No.
    Imagine you buy 21 different programs from 21 different vendors, but you buy them all in the same shop, with one single bill, maybe bundled in a single box.
    It's obvious that each vendor will fix only their own part and you'll get 21 different fixes.
    What you can expect from the shop is that they bundle the fixes in the same way they bundled the programs.
    And this is what Linux distributions already do (Debian at least).

    Cheers!
  • The difficulty of applying 21 security fixes may be a bit of an issue (not that I find anything difficult about "rpm -Uvh *.rpm"), but that sure as hell doesn't justify ZD's decision not to apply the fixes. Applying the vendor's fixes is not optional, no matter what system you're running.

    Do they think that if a business's several-thousand-user network were compromised, the execs would accept the excuse that there were just too many vendor-supplied patches to apply?!

    --
  • I think what they did may be a good way to test the ease of securing a server, as opposed to the true security of the server once everything has been properly secured. So perhaps, had they applied all of the necessary patches to Linux, they would have shown that Linux is more secure. Maybe companies with security issues need to hire people smart enough to secure their information, rather than people who can install NT (boy, that's tough) and run a few executables to apply some service packs and be done with it, without really having taken any steps to secure the box.

  • It's not the managers who are going to be doing the work, they're simply going to mandate "This will be secure!", if they know enough to mandate anything at all.

    Most admins out there may not like doing multiple patches, but there are advantages. Some patches can open other holes, and using one of NT's service packs isn't guaranteed to fix everything either. And having them separated out allows an admin to more closely monitor what's been patched, rather than NT's way of doing things.

    It's like the NT vs. *nix discussion itself: each has its pros and cons. What it all boils down to is the competency of the guy/gal running the box.
  • > if I had to apply 21 individual patches to 200 machines, I would be ready to punch someone.

    Just copy them to an upgrade directory, cd, and type rpm -Uhv *.rpm on each system. How does that compare to installing one NT service pack on each of those same 200 systems?


    > the only time you are allowed to apply patches is outside business hours which for these boxes is between 9pm and 3am. That's a lot of late nights.

    Per above, except have a cron job run at 9pm every night to -Uhv whatever files you put there during the day.


    Any patch that we apply would have to have a corresponding backout procedure

    Just re-install --force your prior version of the RPM for the same package.

    Would you rather back out (say) one of 21 RPMs with rpm --force, or back out an NT service patch? And even if they were the same amount of trouble, do you want to throw out everything the SP offers, just because one of the patches on it sucks? Some of the other patches in the SP might accidentally fix something without breaking something else.

    ZD doesn't have a case. Because they don't have a clue.


    --
    It's October 6th. Where's W2K? Over the horizon again, eh?
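    To make the backout point concrete, a sketch of "re-install --force your prior version" (the package name and backout directory are invented; --force and --oldpackage are standard rpm options, and the echo keeps it a dry run):

```shell
#!/bin/sh
# Sketch of an RPM backout: reinstall the previously shipped
# package over a misbehaving update. backout() only prints the
# command; drop the echo to actually run it.
backout() {
    echo "rpm -Uvh --force --oldpackage $1"
}

backout /var/spool/backout/wu-ftpd-2.4.2b18-2.i386.rpm
```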
  • > A security update from a vendor should be applied IMMEDIATELY

    Are you saying you don't like the MS timeline?

    Media reports the hole.

    MS Months 1-3 : Deny that the problem exists.

    Media reports an exploit of the hole.

    MS Months 4-6 : Admit that there is a problem that can be exploited by people with esoteric knowledge (who wouldn't consider doing such a thing!) under rare conditions, but that isn't a problem for ordinary users.

    Media reports a high-profile exploit of the hole.

    MS Months 6-9 : "We're working on it."

    Patch is delivered.

    Your Months 10-12 : Sysadmins either wait to see what happens to the suckers that apply it first, or else spend these months trying to repair the damage and lock out the new holes created by the 'patch'.

    Media reports the problems caused by the 'patch'.

    MS Months 13-15 : Deny the problem exists...

    Repeat until bankrupt. Season the above liberally with vaporware announcements about how the next new product is going to make all your troubles go away.

    Meanwhile, who's been reading your mail?

    --
    It's October 6th. Where's W2K? Over the horizon again, eh?
  • Call this flaimbait, hidden linux worship, sour grapes or whatever...

    But ZDNet (and Yahoo) lost much credibility with me when they couldn't figure out that Jesux was a joke.

    My .02
    Quux26
  • This wasn't even a remotely valid security test, so who the heck cares about the details?

    There's no way am I going to make a decision based on what happened in a test like this. I'm not even going to take it into consideration. It was entertaining, and I enjoyed it, I enjoyed reading about it, I hope the ZDNet people had fun doing it, and I hope the people who hacked it had some jollies.

    But the results are as meaningless as Bill Clinton's sworn testimony.
  • Yes, they would have. They probably would have prevented jfs from getting root. If he did manage to get root then he would have uncovered a new security hole. Unfortunately, due to ZD incompetence, we have learned absolutely nothing from this little exercise (except possibly the magnitude of ZD's stupidity).
  • This wasn't even a remotely valid security test, so who the heck cares about the details?

    The people who don't know that it is an invalid security test care about the details.

    Time and again, some magazine, company, or other shows NT's supposed improvements over Linux. Then somebody notices how the "test" was intentionally or unintentionally rigged. While this is great for the Slashdot community, this is the sort of stuff that needs to be seen by those who make the buy decisions.

    Now that you know, you can argue this where you work or learn; when somebody points to this test as a reason to install NT at your site, you have an effective counterargument--and URLs to back it up.

  • There's a utility out there somewhere called AutoRPM which does this and more, making the process of keeping your system updated completely transparent and automated.

    I do agree that the *BSD way is a very good one, though.

    ---

  • Just look up any `hacked page' archive which keeps track of the OS for the original website, and start counting. Keep in mind that Microsoft operating systems are actually less popular as a webserver platform than Linux, and Apache is far more popular than any MS offering (see The Internet Operating System Counter [leb.net] and netcraft [netcraft.com]). To make it easy on you, I did a count on some of the recent attrition archives [attrition.org] and came up with these results (I only listed Linux, NT, Solaris, FreeBSD and OpenBSD, so the totals will NOT match the sum of the individual OS's):

    year-month  total  Linux   NT  Solaris  FreeBSD  OpenBSD
    1999-10        53      4   29       14        8        0
    1999-09       259     72   82       62       12        0
    1999-08       318     68  106       77        9        0
    ----------  -----  -----  ---  -------  -------  -------
    total:        630    144  217      153       29        0

    (apologies for the funky formatting, it used to be a nice table but /. does not like tables, and does not support the tag...)

    According to this logic, Linux is clearly more secure than Windows NT, especially when you `weigh' the numbers with the popularity (or lack thereof) of the individual operating systems.
    Of course, the really interesting number is the 0 for OpenBSD. Pity, though, that I have no idea how many OpenBSD sites there are out there...
  • Well I don't know about enterprise settings but:

    I worked for my college's computer services this summer; my job mainly consisted of applying patches to NT for 3 months. Admittedly, we have many more computers than you (I'd estimate 800+ or so in public labs and administrative offices; we are extremely wired for 1500 students), but with 5 other students and the college's professional staff we were unable to apply service packs to all of them. Why? Because when installing that "one big easy install" not only do you have to kick the user of the machine off (they really don't like that) but you actually have to be there the whole time to click on those "friendly" buttons. NT's profiles (they are like home directories except they suck) aren't always updated correctly by the upgrade, so the users have to fix and reinstall their programs. Computers that ran NT SP3 w/o IE4 a little bit slow are now completely unusable with all of the "improvements" that were "necessary". Not to mention the differing hardware support between the different service packs; SP4 broke some computers I worked on because of incompatibilities with the BIOS on some Compaqs, which had no problems at all with earlier versions.

    In contrast, if we had been using Linux, even if I hadn't created a script, I could have opened up a sh*tload of telnet sessions from the cold room and, without the user knowing or caring, updated each and every machine at the same time with only the packages necessary.
  • 1. NT itself is a piece of crap to even maintain properly. SP2 and SP4 only proved that Microsoft does not properly test third-party products with their Service Packs. We waited until SP5, and ONLY after several rounds of serious tests to make sure that nothing got hammered.

    1a. Certain clients that used third-party messaging, web server, or application server products made by competitors such as Sun or Netscape had serious issues when SP4 was installed. So did Samba in one of our test cases. Leads me to believe that M$ wanted SP4 to push the M$ products over the competing products.

    2. The install of NT itself on a bare box is abysmal. It takes about 10 reboots to get everything installed right with the Hot Fixes and the Service Packs. Linux takes one with 6.1. By the way, the RH 6.0 install, even in graphical mode, is about 5x as fast as W2K's.

    2a. Plus, there's the monitoring of NTBUGTRAQ for the latest exploits. Sometimes they hit 5 a week. The MS people post fixes 2 weeks later.

    3. Linux, on the other hand, is mostly stable. Fixes are out within hours. I don't have these issues.

    4. Linux isn't tightly integrated with Apache.
    If I want to change web servers for reasons of security or such then I can. Can I do that easily with NT? The answer is no, unless you run Apache for NT. Then you still have the issues of the operating system.

    4a. IIS is the biggest security hole of a web server I have yet seen. The bugfixes hardly fix anything. Doubt me and think NT is god? Read NTBUGTRAQ or actually run an NT server connected to the Internet. Microsoft and their COM objects are causing a whole mess of havoc.

    5. Security hole in a Perl script on the hackpcweek site? I wonder why nobody tried to do the same with COM objects or the numerous buffer overflows on NT? Better yet, let's see how long it takes Redmond to come out with a fix! If anyone wanted to not follow the rules of that contest, I am sure something like that would easily take down the box.

    6. I hear too much from NT admins about "Wait until Windows 2000". Y'all can shut up about your vaporware. I interviewed two admins. One was a W2K freak. The other mentioned that MS should fix their products before releasing new ones. Guess which one got the offer? Shut up about how great MS is until I see stable shipping product or get out. Linux is right here, right now, and is constantly being updated. It's also open source and audited by thousands. Beat that, Redmond. Giving a closed source preview of a product doesn't make it like Linux. Open the source and show those APIs like WNetEnumCachedPasswords.

    6a. I have seen portions of that code, and it is MESSY. They probably won't release it out of embarrassment. I wouldn't.

    7. ZD is advertising-driven. Guess who buys most of their advertising? Microsoft. Do you HONESTLY think ZD is going to bite the hand that feeds them? I think not. They are Microsoft's bitch. Anyone who reads anything from ZD should realize that. It's a PHB magazine, meant for people who choose not to pay attention to what is going on in IT. Until Red Hat, VA, Sun, SGI, and other non-MS companies advertise, they will continue to be the puppets of Redmond.

    Until next time....

  • I've worked for quite a few IT managers in some rather large shops (1000+ servers). Not once have I had one not willing for us to install any number of patches, just so long as they have been tested in a test environment. I have to wonder where ZD is getting the idea that enterprise businesses CARE how many patches are being installed (or what OS they're running, for that matter). Most companies simply want a stable platform to run their applications on.
  • by wct ( 45593 )
    I think the main complaint is an absence of parity between the two platforms. On one hand, NT had the five service packs applied, which are IMHO fraught with more difficulties to install than rpm'ing 21 patches. MS's service packs are renowned for breaking other things from previous packs, and are usually released a long time after the bugs they fix are identified.

    I really wouldn't have a problem with this at all, if ZDNet hadn't made the blanket conclusion that NT was easier to secure. That's an overwhelmingly ignorant statement to make.

  • Simplistically speaking that's right, but practically speaking it's not.

    Before applying SPs I wait at least a few weeks to see what people report as breaking under the new SP. There's usually something, and all too frequently (two NT4 SPs out of five!) applying an SP has a detrimental impact on system stability.

    On top of that you may have to reapply SPs after installing new packages (particularly those from Microsoft) and you want to create a new emergency repair disk. These things are not necessary under Linux.

    IMO, having administered both systems (and a bunch of others) for years, I much prefer the small patch approach where I can pick what I want to apply according to my needs: e.g. if I'm not running ftp I don't really need to apply an ftp patch.

    But as it turns out there is a way to get all-inclusive patches for Linux. Install a new release. They come out every few months, much more frequently than Microsoft service packs, and generally include all previous patches. The upgrade process is fairly similar in difficulty to applying an NT service pack. Interestingly this isn't mentioned.

    Interestingly, ZD says "Imagine the work involved in integrating 21 separate fixes into a change process to be deployed across an enterprise." Actually that doesn't have to be a lot of work. You can set up a master system and use rdist to propagate patched software to everything all at once. This kind of environment is easy to set up (the software is stock) and allows the software to do the grunt work of upgrading systems. You need to buy extra software to do this kind of mass upgrade on NT.
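    For those who haven't used rdist, a Distfile for the master-system setup just described looks something like this (host names and paths invented for illustration):

```
# Illustrative rdist Distfile: push the patched tree from the
# master host out to the rest of the servers.
HOSTS = ( web1 web2 web3 )
FILES = ( /usr/local/apache /etc/inetd.conf )

${FILES} -> ${HOSTS}
        install ;
        notify root ;
```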

  • "Of course new users are still left to install all 21."

    I'm not arguing that small, isolated patches are infinitely superior to mega-packs including both fixes and features.

    However, if a company like RedHat wants to provide support that people would buy, then making a patch or script available to fix all known security problems since the last release might be a worthwhile product that new users would appreciate, especially those switching from Windows.

    If you want to get into ease of use features, something with the functionality of Windows Update could also be popular. It should be done Unix style though. The update site sends the information about what is available to the local computer on request, which then compares it to what is installed and offers the user an opportunity to select packages to update or install. From this a script is generated locally that will download and install the required software. Category filters for "Security", "Bug", and "Feature" would also be nice.
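
    The comparison step described above can be sketched with standard shell tools. The package names and versions here are made up for illustration; a real tool would query the RPM database and the update site instead of using static lists.

```shell
# Build sorted lists of "installed" and "available" package versions
# (hypothetical names), then diff them: anything that appears only in
# the available list is a candidate update to offer the user.
printf '%s\n' bind-8.2-1 cron-3.0-1 wu-ftpd-2.5-1 | sort > installed.txt
printf '%s\n' bind-8.2-2 cron-3.0-2 wu-ftpd-2.5-1 | sort > available.txt

# comm -13 suppresses lines unique to installed.txt and lines common
# to both files, leaving only lines unique to available.txt:
comm -13 installed.txt available.txt > updates.txt
cat updates.txt    # bind-8.2-2 and cron-3.0-2
```

    From updates.txt the tool could then generate the download-and-install script locally, which is the "Unix style" flow described above.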

    Perhaps their new online update support in 6.1 addresses this. Can anyone describe it for me?
  • by Hynman ( 67328 )
    Not after the Red Hat updater dingus in RH 6.1!!!
    As I understand it, it's automatic? Is this correct? I have not had a chance to check it out.
  • ncftp updates.redhat.com/pub/updates/6.1/RPMS/
    > bin
    > get *rpm
    > bye
    rpm -Uvh *rpm

    Now really, how hard is that? This "enterprise" crap is making me sick. Are these enterprises hiring people who have peanuts for brains? They would much rather go to Microsoft's website, find the latest patch, download it, sit through the update, reboot the computer, AND do the update-and-reboot process again after they install a new application (this is recommended by almost all NT service packs). How many steps is that?

    Anybody who can use ftp will tell you that it will take less time and effort to update the Linux machine. Now the "ENTERPRISE" IT guys, they just have a small problem.

    They have never heard of ftp.

    But they are perfectly capable of maintaining the company mainframe. And a whole lot of them work at eBay and ZDNet, too.

  • by Gleef ( 86 ) on Tuesday October 05, 1999 @03:45AM (#1638932) Homepage
    The Debian distribution has been set up to do pretty much exactly what you're asking for, for a long time now (right down to the distribution of ISO 9660 images for offline machines). In addition, the updates and fixes are better tested and more independent of each other than the corresponding ones in Windows, resulting in a more stable overall environment. It also refrains from adding the security holes that Windows Update gives you.

    Personally, I prefer RedHat, because it gives me more individual control, but Debian sounds like it would be far better for you, and get you away from the nasty broken Service Packs.

    ----
  • by Greg Hewgill ( 618 ) <greg@hewgill.com> on Tuesday October 05, 1999 @03:26AM (#1638933) Homepage
    I'm surprised nobody has mentioned FreeBSD and its cvsup system. After mucking around with Linux for a couple of years and never really getting comfortable with maintaining a system with RPMs etc., I discovered FreeBSD not too long ago.

    I now have a completely up to date 3.3-STABLE FreeBSD installation on my trusty old P90 that used to run a crufty old RedHat 4.2 install. By watching the FreeBSD mailing lists, I can tell if there's something new I need. If so...

    cvsup stable-supfile
    make world [1]
    make install
    make kernel
    mergemaster
    reboot

    Presto! Completely up to date system. Why isn't it this easy with anything else? Why are binary distributions/updates/patches/etc so popular?

    [1] Okay, this step takes seven hours on a P90.

  • by Tack ( 4642 ) on Monday October 04, 1999 @08:48PM (#1638934) Homepage
    I maintain that it is better to install isolated patches as opposed to one huge monolithic upgrade (as in service packs).

    I don't mind upgrading an FTP or bind (or whatever) RPM on my servers, but I absolutely will not install an NT service pack on a production server until I've waited at least a month to see what kind of problems arise. I made the horrible mistake of installing SP4 on one of our NT servers. Never again.

    Jason.
  • by Cally ( 10873 ) on Monday October 04, 1999 @08:55PM (#1638935) Homepage
    I've just rebuilt my NT Workstation; this time I decided to get really anal about security: auditing everything, applying all available patches, hotfixes, etc. Microsoft releases 'Service Packs' that aggregate all the available fixes and patches; NT 4.0 is now on SP5. However, after installing that there are merely the... twenty? thirty? other patches and fixes to apply to NT alone. There are multiple patches for Office and Internet Explorer, too, and the holes they're patching are mostly things that could leave root (Administrator) access vulnerable. There were 13 NT security alerts & patches in September /alone/.

    So "most large companies would prefer the one large, sweeping-in-scope, fix", huh? Quite right. Our corporate MIS has banned the application of hotfixes, patches or service packs beyond SP3 because... wait for it... it makes NT too unstable.

  • by Quikah ( 14419 ) on Monday October 04, 1999 @09:47PM (#1638936)
    Why are they complaining about having to install 21 patches? They needed to install 4 with the config they were using: the cron, kernel, net-tools, and dev updates. None of the other services were installed, thus they did not need updating. Maybe update X if they actually installed it, plus libtermcap (that one is a fix for a local exploit, but better safe than sorry). So a maximum of 6 updates.

    On NT they installed SP5, IE 4.01, option pack 4 and SQL server SP1. That is 4 updates.

    gee, strikingly similar...
    The Red Hat fixes would have limited the scope of the intrusion, but the bottom line is that the guy got a shell at all because the 3rd-party CGI was buggy. This will be a problem whether you're using NT or Linux or Tru64.

    I'm torn on these kinds of tests. On the one hand, the test is attempting to prove the security of an operating system distribution, so that's really all that should be running. On the other hand, you are going to want to do something with that machine. Certainly a stand-alone Linux box with nothing else on it is not much of a real-world test.

    In the end we're just serving to prove an old truism of security: you put a firewall in to keep out the 13-year-olds, but to stop the determined crackers who are targeting your site in particular, you need to audit every piece of source you run. A very tall order, and always painful. It comes down to risk analysis and trade-offs.
  • by ckm ( 87462 ) on Monday October 04, 1999 @09:46PM (#1638938) Homepage
    [QUOTE]

    All I have to say about
    http://www.zdnet.com/pcweek/stories/news/0,4153,2346293,00.html
    is that you all are idiots.

    I rarely write about things, but this is an outrage. Anyone who thinks that
    MS distributes all its fixes in one large patch is a fool. I should know;
    I was engineering lead on www.starbucks.com, one of MS's most prominent sites.

    In order to deploy a server, we would apply the latest service pack and then
    between 30-60 hot-fixes. And that was just for the default software. Other
    packages, like SQLServer, had at least two dozen hot-fixes.

    A lot of times, these would conflict with each other in strange ways, and
    uncover other bugs, which made it very difficult to deploy any fixes at all.
    I would often try them out on my desktop (an NT Server) first so as not to
    endanger the development environment. We even had one case where a hot-fix
    wiped out our SourceSafe DB....

    In contrast, the two Un*x OSs I use on a regular basis, Solaris and Linux,
    have no such problems. Packages and RPMs are small, well-defined fixes to
    particular problems, not some uber-thing that itself has to be patched.

    I don't know where you get your writers from, but I sure am glad I don't
    read any of your publications. And with information like this (i.e. totally
    useless and factually incorrect), it's doubtful that I ever would.

    Chris Maresca
    Project Engineer, Organic Online, Inc.
    ckm@organic.com

    [/QUOTE]
  • by JoeShmoe ( 90109 ) <askjoeshmoe@hotmail.com> on Monday October 04, 1999 @09:31PM (#1638939)
    I like how I go to one website, and it automatically tells me what I do or do not have installed. Then I get presented with a list of new patches, arranged neatly into ranks like Critical, Highly Recommended, Fun and Games, even Beta Testing. I can even get told within minutes of a new critical patch being posted by installing Microsoft's Critical Update Notifier. Each patch includes a description of the component involved so I can choose if it is right for that computer. Then, after checkmarking all the items I want, I click a button to download and install the patches automatically.

    This is, in my opinion, a good system and I compliment Microsoft for adopting it. I only wish that the *nix community would be willing to host similar update servers, particularly for the popular distributions.

    There are just a couple things that I think should be changed:

    1) Link to knowledge base and security alerts. When I see an item listed, I want more than just a one- or two-line blurb. And vice versa... if I get a security alert on a mailing list, or find a reason why I'm getting a certain bug, I want to click a link and see the fix added to my download queue.

    2) Make it easier for it to work with secure or offline servers. I should be able to download an ISO image that contains an entire copy of the update website. So, all I have to do is pull down the ISO, burn it, pop it into the CD-ROM of the secure or offline server and PRESTO! I can browse a local copy of the same update site.

    3) Download histories with option to uninstall. Right now my Windows Updates are buried under a half dozen items in some Add/Remove Programs control panel. I'd rather be able to see a list (sorted by date) of items I have installed so I can check off the one I want to uninstall. So, if I SWEAR it's a patch that is causing my problem (even if tech support doesn't agree with me) I don't have to reinstall to get rid of it.

    Service Packs stink because I get a whole bunch of stuff I DON'T want just to get the one or two things I DO want. The only reason I install Service Pack 3 on stand-alone machines is so I can install MSIE... and the only reason I install Service Pack 5 on those same machines is so I can use 17GB hard drives. Sure, I could probably abort the install after it decompresses the files and just install the new ATAPI.SYS file... but then I'm skating on "unsupported territory". So I have to cross my fingers and pray that this isn't another Service Pack 2 or Service Pack 4, or lose my support options.

    I think everyone agrees that individual patches would be better since they allow ultimate user control. The only problem has been keeping track of where they are, what they do, and which have been installed. So, let's get them all organized... how about it?

    - JoeShmoe

    -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  • by Coda ( 22101 ) on Monday October 04, 1999 @09:46PM (#1638940) Homepage
    It's a little-known fact, but ZDNet recently held a car security test. They left two cars equipped with different security systems on the streets of LA, to see which ones real-life crooks could steal. The first car, equipped with MS MySafeCar, was locked, secured, and parked next to the second car, which was a convertible with the top down, keys in, and Linux Carsec turned off. The second car was stolen, prompting ZDNet employees to rejoice and marvel at the advertising budget for, er, security miracle that is MS MySafeCar.

    When Carsec proponents noted the discrepancy between the two cars, ZDNet replied that "the average car user would not want to lock 2 to 4 individual doors."

    ZDNet, in response to the information that Carsec comes with power locks, stuck their fingers in their ears and started humming "Ol' MacDonald."

    Do they have a point? Yes, atop their heads.
  • by Lucius Lucanius ( 61758 ) on Monday October 04, 1999 @09:41PM (#1638941)
    In an update to the story, an anonymous source at ZDNet admitted that they used a genuine IT manager during the tests. "The decision not to apply the fixes came about due to our adherence to realistic simulations. We feel most IT managers are clueless, so we used a representative sample from our own labs. He made the decision," said the source, speaking under conditions of anonymity. "We feel this better represents the real world scenario."

    In unrelated news, seismologists reported a strange disturbance, which they claimed was caused by thousands of sysadmins nodding their heads in agreement at the same time. The phenomenon has tentatively been titled "the Slashdot Effect".
  • by jem ( 78392 ) on Monday October 04, 1999 @10:03PM (#1638942) Homepage
    <RANT>
    Having been an NT admin for a while... It is not just a question of installing five huge service packs. And I'm not talking about hotfixes either.

    There are a number of pieces of software from Microsoft that require the service packs to be applied in differing orders:

    The place I used to work at used Site Server (an extension to IIS). For the personalisation feature to work on this, a completely bizarre sequence had to be followed:

    Install (approximate - I think this was more complicated):
    Service Pack 3
    Internet Explorer 4
    Option Pack 4
    (some crucial DLLs have now been deleted/overwritten with incompatible versions)
    Service Pack 3
    Option Pack 4
    Site Server 3

    You can now install Service Pack 4 & 5 if you want more things to break or you can cut your losses and stick to things that you know work (even if they aren't secure).

    The problem with this process is that it is badly documented, denied on Microsoft's site and unknown to most MS users. We got this process from someone who spent days installing and uninstalling the software until it worked. Therefore it takes *days* to install a "decent" version of NT.

    This is not the worst bit. The worst thing is that we bought Site Server for all of those built-in features (many of which simply didn't work). It wasn't cheap, and we ended up just writing our own stuff due to the poor quality of the documentation, lack of speed (on a dual Pentium Pro with 128MB RAM) and general flakiness.

    The problem with all this software is that Microsoft doesn't write applications anymore. Everything has hooks into the O/S, which means that departments within MS end up writing software that messes with everything. Incompatibilities arise and no-one is willing to tell you how to fix them without charging you huge consultancy fees.

    My new web server boxes run Linux. When fixes come in, thousands of users are willing to help you out with any problems you have. They actually know. The applications do not send tentacles into the O/S, choking functionality out of other applications. My sites run fast. I never need to write ASP in my life ever again. I'm happy again.

    Another example? To get a certain feature of MS Visual InterDev running on her machine, a friend of mine had to remove Service Packs 5 & 4 from her machine (then re-install SP3). Only then would database diagrams re-appear as a feature...

    I sense that many people here have not actually experienced the joys of NT first hand. It is much more of a nightmare than you think. And good NT admins simply don't seem to exist. I'm sure there are some out there. Maybe. The recent joys of the Windows 2000 machine that MS couldn't keep up, due to it running out of disk space, etc., indicate that there simply aren't any. Even at MS.

    I also know of a well-known major UK hosting provider which is withdrawing its NT dedicated-server hosting. Too many problems. Too many security holes. Really bad remote management tools. End of story.
    </RANT>
