
Windows Servers Beat Linux Servers

RobbeR49 writes "Windows Server 2003 was recently compared against Linux and Unix variants in a survey by the Yankee Group, with Windows having a higher annual uptime than Linux. Unix was the big winner, however, beating both Windows and Linux in annual uptime. From the article: 'Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation.' Yankee Group is claiming no bias in the survey as they were not sponsored by any particular OS vendor."
This discussion has been archived. No new comments can be posted.

  • Same as last year. (Score:4, Insightful)

    by Whiney Mac Fanboy ( 963289 ) * <whineymacfanboy@gmail.com> on Wednesday June 07, 2006 @12:10PM (#15487921) Homepage Journal
    Let's look at last year's survey being debunked in a BusinessWeek analysis. [businessweek.com] ('cause I'm sure not a damn thing's changed since last year's study).

    The biggest criticism of the study is this:

    Only people running W2K3 AND Linux were allowed to respond. Hmmmmmn, so how many MS shops with an evaluation Linux server (installed by their clueless MCSE) were included in this "survey"?

    The Yankee Group can claim no bias all they like, but I am sick of Laura DiDio [wikipedia.org] FUD being posted here (oh, she of 'SCO's claims are justified after looking at the source' fame).

    Call this ad-hominem if you like, but if someone pushes a POV year in, year out, you tend to dismiss them.
    • by grub ( 11606 ) <slashdot@grub.net> on Wednesday June 07, 2006 @12:12PM (#15487951) Homepage Journal

      It was by Laura DiDio. They may as well have had Steve Ballmer make the judgement.
    • by semifamous ( 231316 ) on Wednesday June 07, 2006 @12:15PM (#15487977)
      Another tech site has an editorial article on this report [neowin.net].

      From the editorial:
      I administrate both Windows and Linux servers and was interested to see this report. However, reading into the article a bit more makes me question the validity of their assessment.

      The Yankee Group states that Windows 2003 Server led Red Hat Enterprise Linux with nearly 20% more annual up time.

      I had to do a double take when I saw that. 20% more!? Assume for a moment that you have two servers, one running Windows Server 2003 and one running Red Hat Enterprise Linux 4. Assume that your Windows box ran non-stop, without rebooting (which means you probably are not loading any Microsoft security updates) for 365 days. For your Linux box to have 20% more downtime it'd have to only be up for 292 days. If that is the case, your machine is no longer a server and is nothing more than a space heater.
      • Math Nitpick (Score:4, Informative)

        by colinrichardday ( 768814 ) <colin.day.6@hotmail.com> on Wednesday June 07, 2006 @12:23PM (#15488053)
        That would be about 304 days, as 20% of 304 is 60.8 (304 + 60.8 = 364.8). The 20% must be taken as 20% of the Red Hat uptime, not the Windows uptime.

        But yeah, that's way too low for RedHat.
      • by MarkLewis ( 593646 ) on Wednesday June 07, 2006 @12:35PM (#15488176)
        Your math is wrong. 20% more downtime means 1.2 times as much downtime as the Windows box, not 20% of the year.

        So if the Windows box is down for 10 hours per year, the Linux box is down for 12 according to the study.
      • by peragrin ( 659227 ) on Wednesday June 07, 2006 @12:36PM (#15488179)
        Go back and carefully read the study. Windows doesn't have 20% more uptime; Windows has increased its uptime by 20%, while Linux increased by (insert some random number here).

        So if Windows servers were available 90% of the time, they have now hit 95%, but the Linux servers were already at 97-99% uptime, so they could only increase by a small margin.

        Whenever DiDio writes, you have to learn to read between the sentences. She throws FUD around (finding Linux documentation online, when you could simply call Red Hat and ask? Especially for RHEL 4).

        What she wrote, while technically true, was so twisted as to be a lie. Notice how she refuses to post hard numbers, or other hard data, so you can judge for yourself.
        • Windows has increased its uptime by 20% [...] So if Windows servers were available 90% of the time they have now hit 95% [...]

          This doesn't seem correct to me - if Windows "increased its uptime by 20%" from an original uptime of 90% then it would have 90% + (.2 * .9) = 108% uptime (or read a different way, 110% uptime). Clearly, you didn't mean either of these. But even if we were to read the statement as "decreased its downtime by 20%" we would still have 10% downtime - (20% of original downtime, or 2%) = 8% downtime, i.e. 92% uptime rather than the 95% you quoted.

          • by a_nonamiss ( 743253 ) on Wednesday June 07, 2006 @03:05PM (#15489374)
            Boy, the maths in this post seem to be getting screwed up pretty badly, but I'll put in my 2 cents to see if that sheds any more light on things.

            Let's use hours. There are 8760 hours in a typical year. (365 x 24)

            Let's say your Windows server is down for 30 hours in a particular year. That means it has an uptime of 8730/8760, or 99.66%. Your Linux server has 20% more downtime. That's 36 hours per year (30 x 1.2), and therefore 99.59% uptime. Is anyone really going to notice a 6-hour-per-year, or 0.07%, difference in uptime? (Remember, we're not talking specific outages here, just a mathematical statistic - not like "Yeah, if that 6 hours was during our peak time".)

            Maybe I got that all wrong, but that's how I read the statistic.
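
            To make the arithmetic concrete, here's a quick check you can paste into a shell (the 30-hour Windows figure is just the assumption from above, not survey data):

                awk 'BEGIN {
                    hours = 365 * 24            # 8760 hours in a year
                    win_down = 30               # assumed Windows downtime, hours/year
                    lin_down = win_down * 1.2   # the "20% more downtime" reading
                    printf "Windows uptime: %.2f%%\n", 100 * (hours - win_down) / hours
                    printf "Linux uptime:   %.2f%%\n", 100 * (hours - lin_down) / hours
                    # the literal "20% more uptime" reading, for comparison:
                    printf "Literal reading: Linux up only %.1f of 365 days\n", 365 / 1.2
                }'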
        • by sharkey ( 16670 ) on Wednesday June 07, 2006 @01:16PM (#15488509)

          Windows doesn't have 20% more uptime; Windows has increased its uptime by 20%, while Linux increased by (insert some random number here).

          Well, the article states "Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."

          That certainly sounds like a claim that Windows has 20 percent more annual uptime than RHEL, especially since the article doesn't state anywhere that the 20 percent figure was an increase over last year. The only improvement statement made was that "...the major server operating systems all have a 'high degree of reliability,' and have showed marked improvement in the last 3 to 5 years."

          • by FireFury03 ( 653718 ) <slashdot@NoSPAm.nexusuk.org> on Wednesday June 07, 2006 @02:45PM (#15489209) Homepage
            That certainly sounds like a claim that Windows has 20 percent more annual uptime than RHEL, especially since the article doesn't state anywhere that the 20 percent figure was an increase over last year.

            The article is rather contradictory because after they say Windows has 20% more uptime than Linux they then say:

            On average, individual enterprise Windows, Linux, and Unix servers experienced 3 to 5 failures per server per year in 2005, generating 10 to 19.5 hours of annual downtime for each server.

            So, let's assume (for the sake of argument) worst-case figures for Linux - 19.5 hours of downtime a year - and let's make it 20 hours for ease of calculation. And best-case figures for Windows of no downtime.

            1 year = 365 days = 8760 hours
            So for Linux that's 8760-20 = 8740 hours of uptime per year.

            Windows is allegedly 20% better than this, so we get 8740*1.2 = 10488 hours of uptime. Which is 437 days.

            So to summarise, they've said that Linux gets just over 364 days of uptime per 365 days whilst Windows gets 437 days of uptime per 365 days. I want one of those Windows servers that can accumulate well over a year's worth of uptime in a year.
            • Believe it or not, Linux can do this also. Just get a multi-socket server and run Debian woody on it. You'll get uptime*sockets...
              I ran this and wondered how I could have 28 days of uptime in one week... *g*

              Cheers
              Alienn
            • This is easy to explain. The Windows server is installed on Mars. This allows 437 days of uptime and still plenty of time for patch installations.
            • by jackspenn ( 682188 ) on Wednesday June 07, 2006 @07:12PM (#15490995)
              It is 20% better than the Linux downtime. So that would mean Windows is down about 16 hours a year, using the 20 for Linux in your example.

              A couple of points, as an RHCE who does both Windows and Linux: the more I am called to fix Linux machines as an outside consultant, the more it pisses me off that each system is configured to the personality of the admin who built it and left, rather than to a proven and tested standard. That adds to the amount of time it takes to get a system fixed, because there are various SMTP, POP and IMAP servers, and various ways to do things, that could be the issue with e-mail on a Linux machine; that leads to longer discovery time and in turn a longer time to final resolution. Contrast that with Exchange 2003, which has published best practices and in most cases one or two ways to do something. This should be common sense to most /.ers. Exchange will not send mail? OK, check the mail queues and the services, look in the event logs. A Linux system will not send mail? OK, first figure out what SMTP daemon they are using: is it sendmail or not, and if not, what? Is it running? Oh, there are two sendmail services; one was installed by RPM, the other was compiled from source, but the previous root user removed those configuration files. And on and on it goes sometimes.

              Personal experience has taught me two things:

              1) Just because I like Linux doesn't mean it is perfect. Support issues like undocumented server settings, admins who delete or move the source configs they used in building a package, and admins who do things "just to be different" hurt Linux uptime. Also, when a company has a Linux server and Windows techs, they will let the Windows techs beat on it like monkeys before calling an outside consultant who costs money; that leads to a large part of that 20%.

              2) Your post about 437 days was retarded. No doubt retards everywhere will mod you up as insightful and informative, but that makes your comment no less annoying to people with brains.

              - Eric
              • by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Thursday June 08, 2006 @05:37AM (#15493218) Homepage
                In the case of Red Hat, you can use the standard mail systems shipped with the OS... In fact, you should never install things manually, because then you won't be able to update them using the system's package manager.

                The same problem can occur with windows, people could be running any one of many mail servers on it, and they won't all be centrally updated.

                I have encountered the same problems you describe with multiple systems: a consultant sets up the machine and then leaves. It happens with Windows too, but less often, and there it's much harder to fix when they've made all kinds of weird registry tweaks; usually the fix is to reinstall, leaving the same problems for someone else in the future.

                There really is no excuse for leaving multiple copies of sendmail installed, some from source and some from rpm... But quite often it's necessary to do manual tweaks to any system to make it behave in the way you want... There's also no excuse for not installing your packages through whatever package management system exists, so you can keep track of them and update them more easily.
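
                As a rough sketch, on an RPM-based box you can at least find out what you're dealing with before touching anything (the sendmail path is the typical Red Hat location, used here as an example):

                    rpm -qf /usr/sbin/sendmail   # which package, if any, owns this binary?
                    rpm -qa | grep -i sendmail   # which sendmail packages are installed?
                    rpm -V sendmail              # have the packaged files been modified?
                    # anything rpm doesn't know about was probably compiled from source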
      • by Ryan Amos ( 16972 ) on Wednesday June 07, 2006 @01:33PM (#15488633)
        I administer both Linux and Windows servers as well. Windows servers (2003 here, specifically, but the same applies to other versions as well) actually work OK and are probably as stable as Linux as long as you don't change anything meaningful on them. Adding users, changing settings, etc. is all OK, but don't you dare install anything on a working Windows server without a full, bootable drive copy or a SAN snapshot. That's where Windows servers lose their reliability in my book.

        Blanket statements like "Windows is more/less reliable than Linux!" are flat-out wrong (or at the very least, misguided) anyway. What were these machines doing? Were they sitting there just passing packets and not reconfigured once, or were they being constantly tweaked and redeployed? How many people were using them?

        Uptime is also usually measured in percentages in the business world. I'm willing to bet the author of this FUD saw "99% uptime for Linux, 99.2% uptime for Windows... That's 20% more!"
        • Blanket statements like "Windows is more/less reliable than Linux!" are flat-out wrong (or at the very least, misguided) anyway. What were these machines doing? Were they sitting there just passing packets and not reconfigured once, or were they being constantly tweaked and redeployed? How many people were using them?

          Also, who was administering them? Did they have dedicated Linux admins or were they expecting the office MCSE to handle the machines?
      • The longer uptime is due to the time that Windows spends booting up and shutting down. Whereas Linux will shut down quickly and cleanly, Windows is still chugging along with prompts saying "Are you sure you want to close this program?". That is where its longer uptime comes from.
    • by mrchaotica ( 681592 ) * on Wednesday June 07, 2006 @12:15PM (#15487978)
      Call this ad-hominem if you like, but if someone pushes a POV year in, year out, you tend to dismiss them.

      You shouldn't dismiss them just because they're consistent; they could in fact be consistently right (e.g. RMS).

      Did you perhaps mean that if someone continues to push a POV after their reasoning has already been shown to be flawed once, you tend to dismiss them, because the situation (and their flawed reasoning) is not likely to have changed?

    • Only people running W2K3 AND Linux were allowed to respond.

      Shame they didn't ask me. While my Win2K3 server is up and has been for a while, that's a far cry from saying it's trouble-free. More than that, my Linux boxes have been up without complaint for far longer AND are more trouble-free AND are running apps that don't run on Windows.

      So, were they to ask me, the headlines might have read something like, "Linux more versatile and trouble-free than Windows counterpart".

      I'll grant you, the win2k3 server is
    • by Crashmarik ( 635988 ) on Wednesday June 07, 2006 @12:20PM (#15488024)
      Documentation for Linux is bad. There's no arguing the point.

      I just switched a box from Fedora Core 4 to Core 5 and was real pleased nobody had bothered to document the changes to the default install of Apache. I also can't count the times I have looked for things on the LDP or the HOWTOs and found that yes, this is a very good HOWTO, but the distribution is entirely freaking different.

      Now I'm not saying Microsoft's documentation is any better, but they make up for it with consistency in the setup. Pretty much once things are set with M$ they stay there. For example, you may not like the registry, but it's pretty consistent in how it works from Win95 to Win2003.

      That said, once a server is set up and in production, why the heck would a lack of documentation bring it down? I have had Novell servers up for 4+ years at customer sites, and they don't even get the docs.
      • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday June 07, 2006 @12:39PM (#15488203)
        I just switched a box from fedora core 4 to core 5 and was real pleased nobody had bothered to document the changes to the default install of Apache. I also can't count the times I have looked for things on the LDP or the HOWTO's and found yes this is a very good howto but the distribution is entirely freaking different.
        100% agreement. Which is why I prefer Debian (although I'm migrating to Ubuntu).

        I can easily clone a production server and walk it through the upgrade process ... over and over and over and over ... and submit bug reports for any and all problems. All during the "beta" phase of the next distribution. I did that prior to migrating my servers to Sarge last year.

        apt-get dist-upgrade

        It is truly awesome. You can test and re-test the entire process every time they release a bug fix for any of the packages you'll be using. (Yeah, you can do it with Gentoo, also.)
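
        In rough outline, the rehearsal loop looks something like this (hostname and paths are made up, and the clone must be the same architecture as production):

            rsync -aHx root@prod:/ /srv/clones/prod/   # snapshot the production filesystem
            chroot /srv/clones/prod /bin/bash          # work inside the copy
            apt-get update
            apt-get dist-upgrade                       # rehearse the upgrade, note every breakage
            exit                                       # file the bug reports, refresh the clone, repeat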
      • by Cl1mh4224rd ( 265427 ) on Wednesday June 07, 2006 @12:51PM (#15488299)
        That said once a server is setup and in production why the heck will a lack of documentation bring it down ?
        It won't bring it down, but it might keep it down.
      • by Illbay ( 700081 ) on Wednesday June 07, 2006 @01:06PM (#15488440) Journal
        Isn't comparing Fedora to Red Hat Enterprise inappropriate here?

        Fedora is "bleeding edge." Major changes are incoporated from one release to the other, with the time between releases only six or nine months.

        RHEL is extremely stable and well-tested, and the time between major releases is long. Therefore, documentation for RHEL will be "true" for a long time.

        Not the case with Fedora (I use Fedora, btw).

      • by FireFury03 ( 653718 ) <slashdot@NoSPAm.nexusuk.org> on Wednesday June 07, 2006 @03:04PM (#15489362) Homepage
        Documentation for linux is bad. Theres no arguing the point

        I just switched a box from fedora core 4 to core 5 and was real pleased nobody had bothered to document the changes to the default install of Apache.


        Whilst I love Fedora Core, I have many years of Linux experience under my belt. I think it is worth pointing out that Fedora Core is really intended as a testing ground for Red Hat and not as an enterprise-grade system. If you want things to Just Work and be documented, you need to switch to something like RHEL - what you're doing is equivalent to playing with a bleeding-edge beta version of Windows and complaining that Microsoft didn't bother to document some brand new feature.
    • by moeinvt ( 851793 ) on Wednesday June 07, 2006 @12:31PM (#15488131)
      What the hell kind of shops/businesses/people are they surveying? People that have their servers running for a couple of days a year??

      "According to the Yankee Group's annual server reliability survey . . . Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."

      I would think that most businesses want to have their servers up 24/7/365, minus a few hours of scheduled reboots and upgrades, unless something breaks or crashes. So, assume a Windows 2003 server had a PERFECT uptime record for the year.

      365/1.2 = 304.17. So, in order for Windows to beat Linux with 20% more uptime, they're trying to say that a server running RHEL is down more than SIXTY DAYS a year? My BS meter just crashed.

    • Only people running W2K3 AND Linux were allowed to respond. Hmmmmmn, so how many MS shops with an evaluation Linux server (installed by their clueless MCSE) were included in this "survey"?

      Err, that works both ways, doesn't it? Think of all the Linux shops with one little Windows server they had to have because some app they needed didn't run on *nix. And IME *nix admins will happily reboot a Windows box claiming "it's the only solution" rather than spend 30 minutes actually learning something about how wind
  • 20% more UPTIME? (Score:3, Informative)

    by Anonymous Coward on Wednesday June 07, 2006 @12:11PM (#15487929)
    From the article:
    Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.
    That means that Red Hat Linux has to have at least 1,461 hours of annual downtime, which is about 61 days. (This is so that it would then have no more than 7,305 hours of annual uptime, in order to allow 20% more than that to fit into one year of 365.25 days.)

    I don't think so.

    I hate writers who don't understand math.
  • by waif69 ( 322360 ) on Wednesday June 07, 2006 @12:11PM (#15487932) Journal
    I have run both Windows servers and Linux servers over the last 10 years, and my experience is higher uptime with Linux servers. Windows machines deal poorly with memory-leaking apps and need rebooting for every service pack or required update. I only need to restart specific processes on Linux when there is a justified upgrade.
    • I, on the other hand, see just the opposite.

      For years the Linux mantra was that Windows cannot do enterprise, wasn't secure, and on and on... however, with a good, well-trained administrator behind the console of ANY operating system, it can be made secure, it can do enterprise.

      Here, because of the "shoot first, ask questions later" attitudes of the Linux support team, in the Linux environment (limited to some Web server farms, SMTP servers and a few Samba servers) the uptime is around 99.0%. The Windows
  • by Bazman ( 4849 ) on Wednesday June 07, 2006 @12:11PM (#15487939) Journal
    Our Windows 2003 TS servers have a much longer uptime than our Linux servers that are accessed from our lab. Simply because fewer people choose to use the Windows service....

  • by TripMaster Monkey ( 862126 ) * on Wednesday June 07, 2006 @12:11PM (#15487940)


    Why does Slashdot continue to even acknowledge 'studies' performed by the Yankee Group? You'd think we would have learned [slashdot.org] our lesson [slashdot.org] by now...

    Hard evidence of collusion may be lacking, but it's still patently obvious that Laura DiDio [wikipedia.org] is a Microsoft shill [groklaw.net].

    Past experience should be enough to show this, but just in case it's not clear enough yet, here's a snippet of TFA:
    But standard Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation.


    Translation: "We don't know how to support Linux, so it's Linux's fault."

    Also from TFA:
    The Yankee Group made a point of stressing that the survey was not sponsored or supported by any server OS maker.


    I'll bet they did...when you turn out such a ridiculously skewed 'study', you pretty much have to make certain everyone knows how 'unbiased' it is.
  • We'll see lots of defensiveness over this study in the comments, although if the conclusions were different, it would be cheered. Why not accept it and fix the documentation issue?
    • Re:Defensiveness (Score:3, Insightful)

      by digidave ( 259925 )
      The defensiveness comes from the fact that Yankee analyst Laura DiDio repeatedly makes ridiculous claims against Linux. She's the one that said Linux definitely stole SCO's code.

      I don't have access to the full report, but I wonder how the "lack of documentation" came into play. Was a certified admin working each system? Did the admin call vendor support for help resolving any of the incidents? Was the particular problem experienced by each server the same? Hardware or software problems? Were all the servers
    • Re:Defensiveness (Score:3, Informative)

      by vidarlo ( 134906 )
      We'll see lots of defensiveness over this study in the comments, although if the conclusions were different, it would be cheered. Why not accept it and fix the documentation issue?

      Because there are no documentation problems. Can you find an OS with a better-documented API than Linux? More documentation than Gentoo has? The problem is that they have not studied what I'd dare say are the serious users; they've studied those without in-house competence on Linux.

      *NIX-admins are probably more expensive than

    • Re:Defensiveness (Score:5, Informative)

      by morgan_greywolf ( 835522 ) on Wednesday June 07, 2006 @12:28PM (#15488102) Homepage Journal
      What documentation issue?

      There are boatloads of documentation available. Ever hear of The Linux Documentation Project [tldp.org]? Plus, most distributions offer lots of very good documentation. Why, there was a Slashdot story [slashdot.org] just two days ago about the excellent Ubuntu documentation. There are no fewer than 600 books about Red Hat distros [amazon.com] available for sale on Amazon. Not to mention that Red Hat Enterprise Linux itself includes lots and lots of documentation, and most of it is available on the Web gratis [redhat.com]. Plus the hundreds of open source apps that include very good documentation with their packages. Have you actually read the documentation and free books available on the Samba website [samba.org]? It's darned good!

      Any perceived documentation issue is in Laura DiDiot's head.

    • It's probably more a case of MCSEs that don't grok the concepts of Linux and how it is documented. The survey was supposedly limited to just shops that run both Windows and Linux. That means you are likely dealing with a bunch of MCSEs that have been working with Windows for over a decade and have only in the past couple years been given Linux to also administer. If such a survey were limited to shops that had been running both systems for an equal period of time and have people on staff who are speciali

    • Agreed (Score:3, Insightful)

      by phorm ( 591458 )
      For the record, I've been using linux on servers and for my desktop for years.

      Documentation for big projects (Apache, Squid, etc.) is usually easy to find. However, when you start running between versions and other issues, suddenly the waters become a bit murky. Google is often friendly, but lately I've been lucky to find docs in English, let alone for the version(s) of the software I'm using.

      I've also been taking my LPI (my employer's idea). It's a freaking linux certification/exam and has no official docum
  • by yagu ( 721525 ) * <yayagu@[ ]il.com ['gma' in gap]> on Wednesday June 07, 2006 @12:13PM (#15487956) Journal

    Another article claiming my OS is better than yours, another article with virtually no information, and the information therein is off-the-scale incomprehensible and inconsistent.

    Here's a casual observation: the article says, "

    Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime.
    " Later in the article, this:
    "..., On average, individual enterprise Windows, Linux, and Unix servers experienced 3 to 5 failures per server per year in 2005, generating 10 to 19.5 hours of annual downtime for each server.
    " Let's just say a Linux server has 24 hours of downtime a year (higher than the "survey" says). That leaves 364 days of uptime in a year, 365 days in a leap year.

    Implied in the article then, a Windows 2003 server would have to be "up" approximately 20% more to satisfy the "claim". Now, I am not a calendar "expert", but I'm having a difficult time believing that Windows 2003 server is up an average of 364 * 1.2, or 436.8 days a year. If it is, I'm buying.

    Also from the article: "..., But standard Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation...."

    First, this is a survey; it hardly points to data that support its conclusions, in my book a no-no when trying to prove a point. Secondly, assuming there's truthiness in this, my inference from the previous paragraph is, "Red Hat would be a little easier to set up and use if it had better documentation..."

    • by DragonWriter ( 970822 ) on Wednesday June 07, 2006 @12:22PM (#15488037)
      Implied in the article then, a Windows 2003 server would have to be "up" approximately 20% more to satisfy the "claim". Now, I am not a calendar "expert", but I'm having a difficult time believing that Windows 2003 server is up an average of 364 * 1.2, or 436.8 days a year. If it is, I'm buying.
      Maybe they are measuring "subjective uptime": it only seems like 436.8 days a year when you are supporting a Windows server?
  • I'm glad they backed up their allegations with facts and figures...Oh wait.

    Every time I see an article like this, I view it as utter crap. There are no numbers, there are no sources, and it utterly contradicts my daily experience...Well, except for the "stability of regular unix" bit, which is pretty much a no-brainer.

    I run Linux in a work environment, I run Linux in my home environment. I get occasional hardware failures, but that's about it. Applications don't lock things up irretrievably. It needs less ba
  • Let's for the sake of amusement Google "Yankee Group" funded microsoft [google.com]

    Or, let's try site:slashdot.org "Yankee Group" [google.com]

    Unbiased? No freaking way.
  • No information on how these "results" were obtained (self-reported?) or anything else that would allow people to figure out if their statistics are biased or not.

    So the study wasn't funded by Microsoft? What does that tell us? If this was research done by asking Windows admins which OS they found had the greatest uptime, wouldn't you expect results along these lines? Of course, we can't know how or why these results were obtained, because the article is essentially four paragraphs saying Windows roxors, Lin
  • Downtime? (Score:3, Insightful)

    by Alioth ( 221270 ) <no@spam> on Wednesday June 07, 2006 @12:14PM (#15487969) Journal
    Three to five down events per year totaling 10 to 19 hours of downtime per year? I'm not SuperAdmin, but NONE of my servers are ever down for that long or that often. Who are they letting run these boxes? What are they doing? Taking the machine into single user mode and recompiling the kernel before rebooting them or something?

    • Re:Downtime? (Score:3, Interesting)

      by normal_guy ( 676813 )
      I don't know. My experience with MySQL, perhaps the most commonly-used application behind Apache, shows a surprising lack of robustness with regards to unexpected power outages or hardware errors. The hours can add up when you're rebuilding large indexes or fixing corrupt tables.
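
      For what it's worth, the rebuilds in question look like this on a MyISAM-era setup (database and table names are examples only):

          myisamchk --recover /var/lib/mysql/mydb/*.MYI   # offline repair, server stopped
          # or, with the server running, from the mysql client:
          #   CHECK TABLE accounts;
          #   REPAIR TABLE accounts;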
  • by nuggz ( 69912 ) on Wednesday June 07, 2006 @12:15PM (#15487973) Homepage
    How does documentation affect the uptime of a server?

    You need documentation to make changes, not to leave the server alone.

    If you're making changes you're not measuring the reliability of the OS/software, you're measuring software and admin performance.
  • BSD (Score:2, Interesting)

    So where would *BSD fall in? Along with Linux, because of the clueless people rebooting it because they don't understand /etc/init.d, or along with Unix, because (I'm a Linux user myself) BSD users actually do seem a bit on the more experienced side of the fence?
  • Uptime vs. downtime (Score:5, Informative)

    by Martin Blank ( 154261 ) on Wednesday June 07, 2006 @12:16PM (#15487985) Homepage Journal
    Is it 20% more uptime? Or is it 20% less downtime? There's a very, very big difference there -- two months of downtime is pretty severe, and if you have that, you have some serious problems. From the reverse perspective, three nines of uptime allows for nearly nine hours of downtime per year. If that downtime is reduced by 20%, that's nice, but not really noticeable for most users.
  • Total Bullshit (Score:3, Insightful)

    by mabu ( 178417 ) on Wednesday June 07, 2006 @12:19PM (#15488012)
    First and foremost, the whole nature of the design of Unix/Linux provides a means by which software systems can be updated without any service outage. You cannot do this with any version of Windows. Most Windows-based patches and upgrades require a system reboot, which is downtime. Most Unix-based upgrades merely require a quick stop/start/HUP of the services. If their main claim is that updating system components is the basis for downtime, they're smoking crack. Maybe their methodology for testing involved taking the entire system down while they upgraded? Unix doesn't require such drastic measures - Windows probably does, as you probably can't update a running service. By design, Windows is exponentially more prone to downtime in the process of patches and upgrades. It's virtually impossible for them to compare the two OSes on this issue and not be dramatically manipulating the test methods to create bogus results that are in no way reflective of how sysadmins patch and manage their server resources. I call BULLSHIT [bsalert.com].

    I have Unix servers right now with uptime measured in YEARS. There are no Windows boxes that can make that claim. Period. I've had outages on occasion due to DDoS or system probes that caused a process to terminate over the years, but I've never had any type of wholesale outage of the kind you'd typically get with most Windows installations. Does anyone have any details on the methodology of the testing? It's obviously bogus.
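
    As a minimal sketch of the stop/start/HUP pattern described above, on a Red Hat-style box (the package file name is hypothetical):

        rpm -Uvh openssh-server-3.9p1-8.i386.rpm   # drop in the updated package
        service sshd restart                       # bounce just the one service
        # or reload the config without dropping connections:
        kill -HUP $(cat /var/run/sshd.pid)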
  • WxP Pro (Score:3, Informative)

    by robpoe ( 578975 ) on Wednesday June 07, 2006 @12:20PM (#15488021)
    We have a WinXP Pro box that's been up over a year ...

    Another box that's Win2k pro that's been up almost 2...

    The one app they run is heavily used (dispatch for a 911 center).

    • Re:WxP Pro (Score:5, Insightful)

      by Anonymous Coward on Wednesday June 07, 2006 @12:42PM (#15488226)
      If your Windows boxes have been up for 1 and almost 2 years, respectively, it means that they haven't had security updates applied (which require a reboot). And if your 911 center doesn't keep its servers patched, you should all be fired.
  • Yankee (Score:5, Informative)

    by Elektroschock ( 659467 ) on Wednesday June 07, 2006 @12:21PM (#15488027)
    http://www.businessweek.com/the_thread/techbeat/archives/2005/04/the_truth_about_1.html [businessweek.com]
    http://www.computerworld.com/softwaretopics/os/linux/story/0,10801,82070,00.html [computerworld.com]
    Laura DiDio, an analyst at The Yankee Group in Boston, said she was shown two or three samples of the allegedly copied Linux code, and it appeared to her that the sections were a "copy and paste" match of the SCO Unix code that she was shown in comparison.
    DiDio and the other analysts were able to view the code only under a nondisclosure agreement, ... "The courts are going to ultimately have to prove this, but based on what I'm seeing ... I think there is a basis that SCO has a credible case," DiDio said. "This is not a nuisance case."

    Watch the "expert" Laura Didio on video from a credible source:
    http://www.microsoft.com/windowsserversystem/facts /videos/didio_video.wvx [microsoft.com]

    Enjoy her!

    *lol*
  • by Ritz_Just_Ritz ( 883997 ) on Wednesday June 07, 2006 @12:21PM (#15488030)
    How come I never get any of these "impartial surveys"? I have racks and racks of RHEL Linux servers that I only reboot when:

    a. a machine suffers a hardware failure (fairly rare) or
    b. there's a kernel update that impacts security

    In the case of (b), I apply the updated rpms and reboot which normally results in a downtime of approximately 60 seconds for that server. This might happen a few times a year (single digits).

    For our small number of Windows 2003 server boxes, it seems that each "Windows Update" cycle recommends a restart. We'll call that a once-a-month reboot when Microsoft gets around to releasing their monthly cleanup. Total server downtime is maybe 2-3 minutes (Windows takes a bit longer to reboot on the identical hardware used with our Linux machines).

    So while I *could* say that our Windows servers are down XYZ percent more than our Linux servers, in terms of actual downtime both platforms are about the same, with Linux seemingly holding a small edge in my experience.

    Cheers,
  • by ShyGuy91284 ( 701108 ) on Wednesday June 07, 2006 @12:22PM (#15488035)
    I don't know about uptime, but I used to be a Linux-only person when it came to servers. After recently falling into a job where I have had to administer Windows servers, I'll admit they are slick...... I picked up working with them a hell of a lot more easily than I would have a Linux server (if I was new to it). Good LAN support features, ISA, Exchange, license management, fairly easy remote user/computer maintenance..... I'm probably going to give it a shot for my next home server once I get the parts. Although the software is costly if you want to learn it as a hobby (I'm getting it for my home server through MSDNAA).
    • I'm glad you admitted it first. I work with about 300 Linux (Slackware, Red Hat flavor) webservers, which are great, no problems for the most part. But we also run about 45 Win2k/2k3 IIS/MSSQL/AD/Exchange servers for our intranet apps, and I must also admit that we have no problems with these guys. They are very, very easy to manage and set up, and I've yet to have a "crashing" problem that wasn't hardware-related on either OS. To each his own. Both have their strengths.
  • What? (Score:4, Interesting)

    by C_Kode ( 102755 ) on Wednesday June 07, 2006 @12:22PM (#15488039) Journal
    "Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation."

    Hmm, that's odd. Linux documentation has always been in great abundance. It's getting information about how OS internals worked that caused me the biggest OS-to-application headaches. (Both Unix and Windows.)

    On a broader note, said Yankee analyst Laura DiDio

    Ohhhhhh, I see. Laura DiDio had her nasty little Microsoft-led hand in this survey.
  • by olddoc ( 152678 ) on Wednesday June 07, 2006 @12:29PM (#15488115)
    According to Netcraft, they have a whopping 4 days since last reboot: http://uptime.netcraft.com/up/graph?site=www.yankeegroup.com/ [netcraft.com] They also go with the bulletproof reliability of MS IIS.
  • by Theovon ( 109752 ) on Wednesday June 07, 2006 @12:31PM (#15488134)
    See, I know far too little about system administration. If I were to try to run a Linux server without help, it would be down all the time. If _I_ wanted a server, I'd pay someone a service fee to maintain it for me, and it would be up all the time.

    So, it seems to me that ON AVERAGE, Linux servers would be down more than others, because so many people would be trying to admin themselves. The lack of documentation would definitely be a problem. (Actually, there's plenty of documentation. FINDING it is the problem. I don't know enough to come up with the right Google search terms! And posting to usenet is hit or miss.)

    The question is what the uptime is like for Linux distros where you're paying out the ass for support (like you would for Windows or UNIX anyway). That's got to be such a small portion of Linux servers that it's not dragging the percentages up.

    The real metric should be UPTIME / ($$ spent on support).

    Be careful about those divides by zero.
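
    In that spirit, a tongue-in-cheek version of the metric, guard included (all numbers invented):

        awk 'BEGIN {
            up_hours = 8740; support_dollars = 0
            if (support_dollars == 0)
                print "self-supported: metric undefined (or infinite, if you prefer)"
            else
                printf "%.2f uptime hours per support dollar\n", up_hours / support_dollars
        }'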
  • by bblazer ( 757395 ) * on Wednesday June 07, 2006 @12:43PM (#15488236) Homepage Journal
    I am not surprised. Documentation for many open source projects (including Linux) is often very poorly written and/or not maintained. Being a good code writer does not necessarily translate into being a good documentation writer. Major software companies hire whole teams of doc writers, and the results are (many times) much better than those that come with OSS projects. This has been one of my fundamental points in the never-ending discussion of things that are hindering widespread adoption of OSS solutions.
  • by onebuttonmouse ( 733011 ) <obm@stocksy.co.uk> on Wednesday June 07, 2006 @12:44PM (#15488243) Homepage
    Debian Sarge x86: 63 days, 19:43
    Debian Sarge PPC: 61 days, 12 min
    OS X 10.4 PPC: 51 days, 1:02
    NetBSD m68k: 107 days, 37 mins

    So, if you want the highest uptime, use NetBSD on a 25MHz 68040. Further, I contend that my study is at least as believable as the article cited in the submission.
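
    For anyone wondering, figures like those come straight from uptime(1); the exact output format varies a little between systems (this sample line is illustrative, matching the NetBSD box above):

        $ uptime
         3:16PM  up 107 days, 37 mins, 1 user, load averages: 0.08, 0.06, 0.01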
  • by spyrochaete ( 707033 ) on Wednesday June 07, 2006 @12:46PM (#15488266) Homepage Journal
    I wrote a Microsoft-funded white paper last year with the assistance of two subject matter experts - a Microsoft expert and a linux expert, both certified veterans of their fields. The goal was to compare the processes required to set up and administer various services in Windows 2003 Enterprise vs. Red Hat's and SuSE's boxed enterprise server NOSes. Because the white paper was intended for internal use only, we had 100% control over what services would be tested, how to evaluate them, and how to present our findings. We didn't evaluate uptime per se, but I feel my comments are relevant since installation and maintenance contribute to server and client downtime, ergo, uptime.

    We compared many factors including user management, authentication, "ghosting" new machines remotely, remote application installs, file sharing, delegating authority to subordinate administrators, and much much more. The Windows and Linux guys would work on a "lab" side by side, often peeking over to see how the other was doing. At the end of each lab we'd all have a discussion about the number of steps, any problems, company and community support, the ease/frustration factor, and how it went overall. We wrote about all these factors and rated them on 10-point scales per lab, and condensed those into one comprehensive graph showing overall ease-of-use of each NOS.

    Long story short, Windows came out on top by a huge margin in every field - ease, usability, intuitiveness, support, everything. In fact, the only topic where Linux came even close to Windows was in community support, and even that was only 50% of Windows' score. At the end of the project the Linux expert garnered a lot of respect for Windows and quashed most of his prejudices. Needless to say, MS soon compiled our white paper into marketing materials and stuck them on http://www.microsoft.com/getthefacts [microsoft.com] (but it's been replaced by more recent studies).

    I was a little disappointed that we couldn't expand the scope of the test to put stuff like Apache and Squid and MySQL through their paces, but the topic was enterprise administration, not publishing live services. I also would have liked to have tested custom installs of other Linux flavours like Debian or Slackware, but neither product had a specific enterprise distribution.

    So don't be too quick to label all pro-Windows studies BS or FUD or other ignorant catch-all acronyms. I personally was funded by MS to spearhead an impartial study, and MS management had a genuine interest in improving their products. I can't speak for the study in TFA, but my own was conducted with nothing but integrity and truthfulness.
    • Like every other system administrator, I have to write and read reports and run tests on hardware and software. To shortcut a lot of problems, I start by criticizing the (far too often flawed) methodology of any study I get before I base a decision upon it. This is not meant as a personal attack, but (maybe because of marketing mangling) I saw real flaws and a lot of bias in the case study that was originally used in Get The Facts. The biases I claim to have seen were subtle and very nasty, but of a complete
  • by golodh ( 893453 ) on Wednesday June 07, 2006 @01:16PM (#15488511)
    Really ... I wouldn't say a word against Communications Majors (as Ms. DiDio is, according to Wikipedia), except where you propose to rely on them to tell you anything accurate, or anything about technical matters.

    Others have already commented on the lack of clarity, the need to read between the lines, and the absence of the most elementary numbers and facts about this "study" (as in: how many respondents, how recruited, how many rejected and why, how was uptime defined and measured, what were the uptime numbers (contingency table by OS for this year, contingency table by OS for the previous year)).

    If any students read this, let me take this opportunity to warn you. Submit a "report" like this to any serious faculty and look forward to an F grade. Unless you're a "Communications Major" obviously, in which case you'll be complimented on the flow of your prose.

    I'm guessing here of course, but I think that the real study was conducted and written by someone totally different, and Ms. DiDio got to write the "teaser": i.e. the part that you can release without divulging any real information that you would otherwise be required to pay for.

  • Raise your hand... (Score:4, Interesting)

    by PhYrE2k2 ( 806396 ) on Wednesday June 07, 2006 @01:20PM (#15488545)
    Raise your hand if you have read any documentation included with any software you purchased in the past five years. Anyone? Anyone?

    Okay then: raise your hand if you know that there are 600-odd-page gorilla Linux reference books out there which may provide documentation, should you need it, that will be 100x better than anything included with the software.

    Raise your hand if you know where to seek help, such as #linuxhelp and #linux on EFNet.

    Case in point: why not put a properly run Linux server against a properly run Windows server? That is what it comes down to: a trained, professional, and experienced admin who has learnt the software they are running and knows it well, for a specific purpose. Put Linux as a fileserver against Windows as a fileserver with any optimizations possible and equivalent configurations that are agreed upon beforehand. Put Linux versus Windows as a Web server with a knowledgeable admin. This 'good at neither' system doesn't work!
    -M
  • by owlstead ( 636356 ) on Wednesday June 07, 2006 @01:36PM (#15488662)
    Yes, Windows is better documented... That is, if you are looking for really shallow documentation. For both Linux and Windows, you are way better off buying a few good books. The GUI documentation of Linux is worryingly bad, but if you go deeper, it gets better. With Windows, it's just the other way around. Even MSDN is pretty bad and (maybe more importantly) one-sided. And, if you are trying to view it on the machine you are working on, prepare for a reboot; MSDN requires the latest Internet Explorer most of the time. I do not expect .NET to improve this situation; with Java application servers you can just unzip the stuff into a folder and run it (as with the VM).
  • Wrong assumptions. (Score:3, Insightful)

    by alexfromspace ( 876144 ) on Wednesday June 07, 2006 @01:39PM (#15488681) Homepage Journal
    If you carefully read the quote:
    Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said.
    It does not sound right. Every IT professional I know (including myself) whose company runs both Windows and Linux agrees that Windows breaks all the time while Linux does not. But this statement is not based on the same assumptions as those under which most people operate. Why this is so becomes apparent when you read the next part of the statement:
    The reason: the scarcity of Linux and open source documentation.
    It is apparent that the author considers any server designated to run Linux but not yet installed (whether partially or at all) to be a downtime server. In other words, this study can easily include hundreds of unused machines that should or could have been running Linux. This is completely laughable, because only people on crack could possibly agree that a system that has not yet been set up causes downtime. This assumption does not agree with the definition of the word "downtime".

    I generally find that whenever Linux is being attacked, it is only through a model with serious logical fallacies that are carefully covered over by seemingly innocent mistakes. In reality this is carefully engineered FUD, designed to sound valid to most people but fail under any serious scrutiny.

    I can conclude from these quotes that the author may feel that Windows' point-and-click interface should somehow justify its inefficiencies compared to Linux. However, Linux's lack of point-and-click GUI tools is very old news that got washed away several years ago when tools like Mandrake's free setup tools for Red Hat and SuSE's YaST came about. And besides, it is better to have to learn to set up systems using text config files and then have them run problem-free for a year, than to point and click for a day and end up with a system that needs constant attention just to be kept running.

  • A technical note (Score:3, Informative)

    by cecom ( 698048 ) on Wednesday June 07, 2006 @02:19PM (#15488975) Journal
    On Windows it is impossible to delete or replace a file which is in use (e.g. a shared library). The same applies to directories. Thus for any meaningful upgrade you need to restart the applications, and often the OS, _before_ you can do anything with their files. There are complicated mechanisms for keeping track of files that need to be deleted/replaced after a reboot. It appears that recently they have added yet another, even more complicated feature to avoid reboots: http://www.eweek.com/article2/0,1895,1895276,00.asp [eweek.com]

    Such complicated techniques for a basic thing like an upgrade make me very nervous. What happens if something goes wrong with the extensive bookkeeping in the middle of the upgrade?

  • by UnknowingFool ( 672806 ) on Wednesday June 07, 2006 @02:31PM (#15489066)

    There is always some study that says one OS is better than another. Most often the study is funded by one of the OS vendors. That doesn't necessarily make it useless. What makes these studies useless is when their details are not released.

    These studies present themselves as scientific but they are not. In true science, the data and the methodologies are presented for scrutiny. There could be issues with either or both that would harm the results. True science involves skepticism.

    Remember a few years ago when some cult claimed that they had cloned a human baby? The first reaction was "Can we see and test the baby's DNA?" When the answer was no, the majority of scientists dismissed their claims outright. The minority reserved judgement until there was actual proof.

    Until I can look at the study, I'm not going to believe it. Since no one paid for the study, the Yankee Group does not have any restrictions unless they mean to profit by selling the study.

  • by sloanster ( 213766 ) <ringfan@@@mainphrame...com> on Wednesday June 07, 2006 @02:45PM (#15489208) Journal
    The assertions are ridiculous on the face of it, obviously prepared by someone with an agenda, and not even a bit subtle.

    As an IT professional, I can tell you that if any of our Linux servers were to go down, there would be people screaming bloody murder all over the place within a few moments. Downtime is unacceptable for infrastructure services, and Linux has performed flawlessly for the Fortune 100 company where I am employed.

    I think, as other posters have noted, the key piece of information that was unwittingly leaked was that the survey was only open to windoze shops, and most likely included some MCSEs' Linux test boxes in the downtime figures. That's really the only thing that makes sense, as downtime simply wouldn't be tolerated in a normal production environment.

    Anyone who works with Linux professionally and is aware of the fact that it's been running 24x7 for years at amazon.com and other firms such as my own employer will find it quite odd to read about all this extended downtime and the nonsensical reasons given for it.
  • In my shop..... (Score:4, Informative)

    by fatboy ( 6851 ) on Wednesday June 07, 2006 @03:03PM (#15489354)
    I have to reboot Windows 2K3 just about every time an update is available from Microsoft. I started using the system only a few months ago.
    I have not had to reboot the Linux system we use here in well over a year (448 days, to be exact), even though I have upgraded applications and applied many patches.
  • BSD Not Evaluated? (Score:3, Interesting)

    by BanjoBob ( 686644 ) on Wednesday June 07, 2006 @03:25PM (#15489504) Homepage Journal
    According to various articles scattered around the net, the Unix flavors included Solaris, HP-UX, etc. But, I have seen no references to NetBSD or FreeBSD as a Unix that was evaluated.

    While boxes are boxes and OSs are OSs, the application that the server is running needs to be factored in. There are many cases where a BSD server may be a better choice than Linux or Windows just as there are cases where Linux or Windows may be the better choice. I found it interesting that I can find no reference to a BSD Unix in any of the links to the study.

    So, since this study has so many unanswered questions relating to function, measurement criteria (what is considered downtime?), application, hardware, etc., the survey is pretty much worthless.

    Box+OS is a tool and I use the right tool for the job. One size does not fit all solutions.

  • by Oriumpor ( 446718 ) on Wednesday June 07, 2006 @03:42PM (#15489617) Homepage Journal
    As far as TFA goes, its qualifications draw my suspicion. Did they include "devices" running Linux as well, or just full-blown rigs? I can tell you *nix-based appliances (unless they're really bad) have very few problems, and don't typically require the constant reboots for system updates that drive down your 99.99...999999 uptime.

    Whatever happened to limiting exploitable processes? Windows' method of protecting the services is all based around their firewall. Ever try to configure a Windows box to run slimmed down? It's a pain in the ass. How about hardened? Good luck: apply the NIST standard lockdown SecPol to a 2k3 box and you'll see what I mean.

    Take a *BSD/Trustix(+SELinux)/Debian(+SELinux) box install with 3 services AND a firewall in a 100-meg footprint, and call it a day. Windows can't compete with the kind of uptime you get out of a stripped-down OS. Oh, they try with XP Embedded and the like, but it's certainly not within the same realm of ease to create and deploy the OS that the *nixes give you. Not to mention, how many times have you had to troubleshoot a problem in Windows that ended up being caused by some unrelated service? I can tell you from my experience, it doesn't happen very often on a machine running a single-digit number of services.

    On top of which, they nicely avoided shops smart enough not to run Windows devices in their NOCs, who probably have much better-trained staff on the Unix hardware and would throw in their numbers with nearly zero downtime figures. How many untrained people new to Unix reboot when they could have just restarted a service? Etc. This whole thing smells fishy.
  • by makemineagrande ( 977054 ) on Wednesday June 07, 2006 @04:08PM (#15489780)
    Here is the note I sent to Laura DiDio - and their PR manager:

    You probably should not read the DiDio-bashing going on over at Slashdot today, but I do see what I believe is an error in the presentation of the data in the press release http://www.yankeegroup.com/public/news_releases/news_release_detail.jsp?ID=PressReleases/news.serverreliabilitysurvey.DiDio.htm [yankeegroup.com].

    The specific statement, "with nearly 20% more annual uptime", is, I believe, factually not supported by your numbers. Do you mean that Windows has 20% LESS DOWNTIME than RHEL?

    "on average, individual corporate Linux, Windows and Unix servers experience three to five failures per server per year, resulting in 10.0 to 19.5 hours of annual downtime for each server."

    If RHEL had 19.5 hours of downtime and Windows had 15.6 hours of downtime, that would be 20% less downtime. About 4 hours less downtime per year is actual real data and would be useful in the press release.

    On the other hand, 20% more annual uptime would actually result in RHEL being down nearly 61 DAYS per year, assuming Windows is up 100.000%. Note: 60.8333 days = 365 - (365/1.2).

    ----------- The report may be correct. The press release is most certainly in error.
