Windows Servers Beat Linux Servers 709
RobbeR49 writes "Windows Server 2003 was recently compared against Linux and Unix variants in a survey by the Yankee Group, with Windows having a higher annual uptime than Linux. Unix was the big winner, however, beating both Windows and Linux in annual uptime. From the article: 'Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation.' Yankee Group is claiming no bias in the survey as they were not sponsored by any particular OS vendor."
Same as last year. (Score:4, Insightful)
The biggest criticism of the study is this:
Only people running W2k3 AND Linux were allowed to respond. Hmmmmm, so how many MS shops with an evaluation Linux server (installed by their clueless MCSE) were included in this "survey"?
Yankee Group can claim no bias all they like, but I am sick of Laura DiDio [wikipedia.org] FUD being posted here (oh, she of "SCO's claims are justified after looking at the source" fame).
Call this ad hominem if you like, but if someone pushes the same POV year in, year out, you tend to dismiss them.
Re:Same as last year. (Score:4, Insightful)
It was by Laura DiDio. They may as well have had Steve Ballmer make the judgement.
Re:Same as last year. (Score:5, Insightful)
From the editorial:
I administer both Windows and Linux servers and was interested to see this report. However, reading the article a bit more closely makes me question the validity of their assessment.
The Yankee Group states that Windows 2003 Server led Red Hat Enterprise Linux with nearly 20% more annual up time.
I had to do a double take when I saw that. 20% more!? Assume for a moment that you have two servers, one running Windows Server 2003 and one running Red Hat Enterprise Linux 4. Assume that your Windows box ran non-stop, without rebooting (which means you probably are not loading any Microsoft security updates) for 365 days. For your Linux box to have 20% more downtime it'd have to only be up for 292 days. If that is the case, your machine is no longer a server and is nothing more than a space heater.
Math Nitpick (Score:4, Informative)
But yeah, that's way too low for RedHat.
Re:Math Nitpick (Score:5, Funny)
Ah, now we get to the heart of the matter. Obviously Microsoft has managed to pull ahead by padding the output of the uptime command: 20% more characters means 20% more uptime!
Re:Math Nitpick (Score:5, Funny)
If you have a Win2K3 server and a Linux server side by side and they've been running for 120 hours as measured by an independent timepiece,
Linux uptime would report
Windows uptime would report
Re:Math Nitpick (Score:3, Informative)
PS: as someone who administers both Win and Linux servers, I gotta say the report is so full of sh!t it's scary. A 233MHz, half-dead Fedora Core 3 machine has about 99.95% uptime; a Win2K3 machine with the latest hardware, ~99.2%. Um, lemme think about this.
Re:Same as last year. (Score:5, Informative)
So if the Windows box is down for 10 hours per year, the Linux box is down for 12 according to the study.
Re:Same as last year. (Score:3, Insightful)
Saying that one has 20% more or less downtime than the other doesn't say anything about the absolute value of either one's up/down-time. Both of them could be terrible servers or both of them could be pulling four and five nines, we'd never know from that statement alone.
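A toy shell check makes that point concrete. The hour counts below are made up for illustration; both pairs show the same "20% more downtime" ratio despite wildly different absolute reliability:

```shell
# Two hypothetical server pairs: (Windows downtime, Linux downtime) in hours/year.
# Pair A is terrible on both sides; pair B is near five nines on both sides.
for pair in "100 120" "1 1.2"; do
  set -- $pair
  # Relative difference in downtime, rounded to whole percent.
  awk -v w="$1" -v l="$2" \
    'BEGIN { printf "%s h vs %s h: %.0f%% more downtime\n", w, l, 100*(l-w)/w }'
done
```

Both lines print "20% more downtime", which is exactly why the relative figure alone tells you nothing about how good either box actually is.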
Too bad TFA doesn't say 20% more downtime (Score:4, Informative)
Re:Same as last year. (Score:5, Insightful)
So if Windows servers were available 90% of the time, they have now hit 95%, but the Linux servers were already at 97-99% uptime, so they could only increase by a small margin.
Whenever DiDio writes, you have to learn to read between the sentences. She throws FUD around (finding Linux documentation online, when you could simply call Red Hat and ask? Especially for RHEL 4).
What she wrote, while technically true, was so twisted as to be a lie. Notice how she refuses to post hard numbers or other hard data so you can judge for yourself.
Re:Same as last year. (Score:3, Insightful)
This doesn't seem correct to me - if Windows "increased its uptime by 20%" from an original uptime of 90%, then it would have 90% + (.2 * .9) = 108% uptime (or, read a different way, 110% uptime). Clearly, you didn't mean either of these. But even if we were to read the statement as "decreased its downtime by 20%", we would still have 10% downtime - (20% of the original downtime, or 2%) = 8% downtime.
Re:Same as last year. (Score:4, Informative)
Let's use hours. There are 8760 hours in a typical year. (365 x 24)
Let's say your windows server is down for 30 hours in a particular year. That means it has an uptime of 8730/8760 or 99.66%. Your Linux server has 20% more downtime. That's 36 hours per year. (30 x 1.2) and therefore 99.59% uptime. Is anyone really going to notice a 6 hour per year or 0.07% difference in uptime? (remember, we're not talking specific outages here, just a mathematical statistic - not like "Yeah, if that 6 hours was during our peak time")
Maybe I got that all wrong, but that's how I read the statistic.
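The arithmetic in that reading can be sketched as a quick shell check (the 30-hour figure and the "20% more downtime" interpretation are the comment's own hypotheticals, not numbers from the survey):

```shell
# 8760 hours in a non-leap year (365 * 24).
hours=8760
win_down=30                           # hypothetical: Windows down 30 h/year
lin_down=$(( win_down * 12 / 10 ))    # Linux with 20% more downtime: 36 h
win_up=$(( hours - win_down ))
lin_up=$(( hours - lin_down ))
# Express both as uptime percentages.
awk -v u="$win_up" -v h="$hours" 'BEGIN { printf "windows %.2f%%\n", 100*u/h }'
awk -v u="$lin_up" -v h="$hours" 'BEGIN { printf "linux   %.2f%%\n", 100*u/h }'
```

This prints 99.66% vs 99.59%, the same 0.07-point gap the comment describes: a 6-hour-per-year difference that no one would notice as a bare statistic.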
Re:Same as last year. (Score:5, Insightful)
Windows doesn't have 20% more uptime; Windows has increased its uptime by 20%, while Linux increased by (insert some random number here).
Well, the article states "Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."
That certainly sounds like a claim that Windows has 20 percent more annual uptime than RHEL, especially since the article doesn't state anywhere that the 20 percent figure was an increase over last year. The only improvement statement made was that "...the major server operating systems all have a 'high degree of reliability,' and have showed marked improvement in the last 3 to 5 years."
Re:Same as last year. (Score:5, Insightful)
The article is rather contradictory because after they say Windows has 20% more uptime than Linux they then say:
On average, individual enterprise Windows, Linux, and Unix servers experienced 3 to 5 failures per server per year in 2005, generating 10 to 19.5 hours of annual downtime for each server.
So, let's assume (for the sake of argument) worst-case figures for Linux - 19.5 hours of downtime a year - and make it 20 hours for ease of calculation. And best-case figures for Windows of no downtime.
1 year = 365 days = 8760 hours
So for Linux that's 8760-20 = 8740 hours of uptime per year.
Windows is allegedly 20% better than this, so we get 8740*1.2 = 10488 hours of uptime. Which is 437 days.
So to summarise, they've said that Linux gets just over 364 days of uptime per 365 days, whilst Windows gets 437 days of uptime per 365 days. I want one of those Windows servers that can accumulate well over a year's worth of uptime in a year.
Re:Same as last year. (Score:3, Funny)
I ran this and wondered how I could have 28 days of uptime in one week... *g*
Cheers
Alienn
Re:Same as last year. (Score:3, Funny)
Re:Same as last year. (Score:3, Funny)
What if you put a Linux server on Venus?
Then Windows is from Mars and Linux is from Venus?
I smell a book franchise here.
Re:Same as last year. (Score:4, Insightful)
Couple of points: as an RHCE who does both Windows and Linux, I can say that the more I am called to fix Linux machines as an outside consultant, the more it pisses me off that each system is configured to the personality of the admin who built it and left, rather than to a proven and tested standard. That adds to the amount of time it takes to get a system fixed: there are various SMTP, POP and IMAP servers and various ways to do things that could be the issue with e-mail on a Linux machine, which means longer discovery time and in turn longer time to final resolution. Counter that with Exchange 2003, which has published best practices and in most cases one or two ways to do something. This should be common sense to most.
Personal experience has taught me two things:
1). Just because I like Linux doesn't mean it is perfect. Support issues like undocumented server settings, admins who delete or move the source configs they used in building a package, and admins who do things "just to be different" hurt Linux uptime. Also, when a company has a Linux server and Windows techs, they will let the Windows techs beat on it like monkeys before calling an outside consultant who costs money; that accounts for a large part of that 20%.
2). Your post about 437 days was retarded. No doubt retards everywhere will mod you up as insightful and informative, but that makes your comment no less annoying to people with brains.
- Eric
Re:Same as last year. (Score:4, Informative)
The same problem can occur with Windows: people could be running any one of many mail servers on it, and they won't all be centrally updated.
I have encountered the same problems you describe with multiple systems: a consultant sets up the machine and then leaves. It happens with Windows too, but less often, and it's much harder to fix when they've made all kinds of weird registry tweaks; usually the fix is to reinstall, leaving the same problems for someone else in the future.
There really is no excuse for leaving multiple copies of sendmail installed, some from source and some from rpm... But quite often it's necessary to do manual tweaks to any system to make it behave in the way you want... There's also no excuse for not installing your packages through whatever package management system exists, so you can keep track of them and update them more easily.
Re:Same as last year. (Score:3, Informative)
10 to 20 hours of downtime a year for a server? That's awful! Heck, at the last place I was at, the Linux box (Red Hat 9.0) had only 2 downtime incidents in over a year after it was hooked to a UPS - one caused by a 6-hour power outage (the power co was installing new trunk lines, transformers, etc. all along the highway as part of an upgrade to the provincial grid), and another by a lightning strike that, again, killed the power for longer than the hour of runtime the UPS provided.
Of course, AFTER I lef
Re:Same as last year. (Score:3, Interesting)
Re:Same as last year. (Score:5, Informative)
Making a blanket statement like "Windows is more/less reliable than Linux!" is flat-out wrong (or at the very least misguided) anyway. What were these machines doing? Were they sitting there just passing packets and not reconfigured once, or were they being constantly tweaked and redeployed? How many people were using them?
Uptime is also usually measured in percentages in the business world. I'm willing to bet the author of this FUD saw "99% uptime for Linux, 99.2% uptime for Windows... That's 20% more!"
Re:Same as last year. (Score:3)
Also, who was administering them? Did they have dedicated Linux admins, or were they expecting the office MCSE to handle the machines?
Re:Same as last year. (Score:3, Funny)
Re:Same as last year. (Score:3, Insightful)
They also blame lack of Linux documentation for the downtime. I'm failing to see how that argument works though:
1. Only an idiot admin would take the server down for maintenance before they have the documentation needed to do the job.
2. In my experience, documentation for Linux systems is a lot more readily accessible than the docs for Windows systems. Yeah, it may not come in a big printed book
Re:Same as last year. (Score:5, Insightful)
You shouldn't dismiss them just because they're consistent; they could in fact be consistently right (e.g. RMS).
Did you perhaps mean that if someone continues to push a POV after their reasoning has already been shown to be flawed once, you tend to dismiss them because the situation (and their flawed reasoning) is not likely to have changed?
Re:Same as last year. (Score:3, Insightful)
Shame they didn't ask me. While my Win2k3 server is up and has been for a while, that's a far cry from saying it's trouble-free. More than that, my Linux boxes have been up without complaint for far longer AND are more trouble-free AND are running apps that don't run on Windows.
So, were they to ask me, the headlines might have read something like, "Linux more versatile and trouble free than windows counterpart".
I'll grant you, the win2k3 server is
Re:Same as last year. (Score:4, Insightful)
I just switched a box from Fedora Core 4 to Core 5 and was real pleased nobody had bothered to document the changes to the default install of Apache. I also can't count the times I have looked for things on the LDP or the HOWTOs and found, yes, this is a very good HOWTO, but the distribution is entirely freaking different.
Now I'm not saying Microsoft's documentation is any better, but they make up for it with consistency in the setup. Pretty much once things are set with M$, they are there. For example, you may not like the registry, but it's pretty consistent in how it works from Win95 to Win 2003.
That said, once a server is set up and in production, why the heck would a lack of documentation bring it down? I have had Novell servers up for 4+ years at customer sites, and they don't even get the docs.
Obligatory Debian post. (Score:4, Informative)
I can easily clone a production server and walk it through the upgrade process
apt-get dist-upgrade
It is truly awesome. You can test and re-test the entire process every time they release a bug fix for any of the packages you'll be using. (Yeah, you can do it with gentoo, also.)
Re:Same as last year. (Score:5, Insightful)
Re:Same as last year. (Score:5, Insightful)
Fedora is "bleeding edge." Major changes are incorporated from one release to the next, with only six to nine months between releases.
RHEL is extremely stable and well-tested, and the time between major releases is long. Therefore, documentation for RHEL will be "true" for a long time.
Not the case with Fedora (I use Fedora, btw).
Re:Same as last year. (Score:4, Insightful)
I just switched a box from fedora core 4 to core 5 and was real pleased nobody had bothered to document the changes to the default install of Apache.
Whilst I love Fedora Core, I have many years of Linux experience under my belt. I think it is worth pointing out that Fedora Core is really intended as a testing ground for Red Hat, not as an enterprise-grade system. If you want things to Just Work and be documented, you need to switch to something like RHEL - what you're doing is equivalent to playing with a bleeding-edge beta version of Windows and complaining that Microsoft didn't bother to document some brand-new feature.
Re:Same as last year. (Score:4, Insightful)
be realistic.
Re:Same as last year = more BS (Score:5, Insightful)
"According to the Yankee Group's annual server reliability survey . . . Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."
I would think that most businesses want their servers up 24/7/365, minus a few hours of scheduled reboots and upgrades, barring anything breaking or crashing. So, assume a Windows 2003 server had a PERFECT uptime record for the year.
365/1.2 = 304.17. So, in order for Windows to beat Linux with 20% more uptime, they're trying to say that a server running RHEL is down more than SIXTY DAYS a year? My BS meter just crashed.
Re:Same as last year. (Score:3, Insightful)
Err, that works both ways, doesn't it? Think of all the Linux shops with one little Windows server they had to have because some app they needed didn't run on *nix. And IME, *nix admins will happily reboot a Windows box claiming "it's the only solution" rather than spend 30 minutes actually learning something about how Windows works.
Re:They cannot beat my uptime. (Score:3, Insightful)
There ought to be some kind of metric for "software uptime," i.e. the delta between the uptime of your actual services (HTTP, SSH, whatever) and the uptime of the building's mains power / network connectivity, etc. I'm pretty sure quite a few servers I've worked on would be at 100%, or close to it.
Otherwise, any time I start seeing surveys l
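One way to express the metric that comment proposes - service availability measured against only the hours the building itself had power and network - can be sketched in shell. The hour counts below are made up for illustration:

```shell
# "Software uptime": hours the service answered, as a share of the hours
# the facility (mains power + network) was actually available.
facility_up=8700      # hypothetical: hours the mains/network were up this year
service_up=8695       # hypothetical: hours httpd actually answered
awk -v s="$service_up" -v f="$facility_up" \
  'BEGIN { printf "software uptime: %.2f%% of available hours\n", 100*s/f }'
```

With these numbers the service scores 99.94% even though raw wall-clock uptime would look worse, since outages caused by the building rather than the OS are excluded.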
Re:They cannot beat my uptime. (Score:3, Interesting)
As far as the other stuff you mention goes - none of that requires substantial downtime. Sure, if you're making an application change you might need to work out what dependencies need to be updated, but it isn't like you're going to do that while production is down.
If you're running a server th
What you want is "deborphan" and "debfoster". (Score:4, Informative)
With Debian, grab deborphan and debfoster and you can weed out un-needed packages quickly and easily.
"deborphan" compares the dependencies of each package so you can see packages that are installed that nothing else needs. Delete the ones that you don't need.
"debfoster" shows what all the dependencies are for a particular app. For example, Apache can have all kinds of packages it is dependent upon. If you want to get rid of that app, you can also quickly purge all the packages that were installed as dependencies for that app.
Once you've got the machine stripped down to the basics, just check all the files in the non-home/non-data/non-log directories to make sure that they each belong to a package. Or that you know why you put them there.
It runs sweet.
It runs clean.
It runs exactly what you want.
Nothing more/nothing less.
Which makes patching the box soooooooooo much easier. And it means that you have fewer potential security holes because you're running fewer apps.
Re:They cannot beat my uptime. (Score:5, Informative)
root[loki:/]# w
10:57am up 1030 day(s), 21:27, 1 user, load average: 0.05, 0.02, 0.04
This happens to be a Solaris 9 system. It has never crashed. Actually, over the past 5 years we have had 1 software related bug take down one of our solaris systems (multipathing bug in the FC drivers when used with active/passive disk arrays). This is based on an environment with 40+ solaris based servers (running a wide variety of services, this is not a '40 identical servers shop')
The best our windows boxes can manage is 6 months (and that is if we skip a few of the security patches).
I can guarantee that during the past 3 years, every single one of our windows systems (60+ servers) has had an issue that is core OS software related (not counting the security related ones). Kernel memory leaks are the most popular (file server reboot every 115 days or it will freeze up). Security worms are another fun one, but kinda rare today compared to the good old days.
Re:They cannot beat my uptime. (Score:3, Informative)
In acronym land, you call BS on TFA, BTW.
Re:Same as last year. (Score:5, Informative)
I'm a Windows admin. It's what I know, and the only OS I have significant experience with. At my last job, the server with the most uptime was a RHEL3 box that only got rebooted when the ERP database performed its semi-annual crash ritual. Compare that to the four W2k3 boxes that were down about five or six days a year on average for various OS maintenance issues (in Microsoft's defense, we were *doing* a lot more with the Win servers, the Linux server only had one function)
Linux is a hard OS to administer without training. It's not something you can just dive into, and a lot of admins get it shoved on them because upper management decides on a software package that requires it. The result? Downtime because the admin is unfamiliar with Linux and doesn't know where to find the answers. So in that sense, this report is spot-on.
I do question the validity of the data, though. It seems like they picked a sample set that would yield the results they wanted. A better survey would review servers with similar functions, regardless of whether users have both installed. It's no secret that Windows admins have a harder time with Linux, and I agree something needs to be done to help them (us) take the plunge with confidence... but this study isn't going to have any impact on anything and was just a waste of someone's money. If they're looking to throw cash away, they should be throwing it at me, not at studies.
Re:Same as last year. (Score:4, Insightful)
You know what?
There is something that's been done.
You can actually download just about every Linux distro... for free
Most competent Linux admins have done so, even if it's just to set up a server on some age-old hardware they have lying around to learn how to do things. Also, doing so on older hardware usually forces them to learn how to make the installed server OS more streamlined and efficient, so that they can do more with the hardware they have on hand.
I, for one, have single-handedly set up a local library with a 500MHz Pentium/256 megs of RAM that handles all their database and file server needs. I did a testbed on a 233 I had sitting in a closet and had everything, down to the tweaked config files, ready to go over a month before the project came to fruition. The cost to the library was a 128 meg stick of RAM and a 100GB hard drive, since I donated my time and they already had the other hardware.
I'd never set up a system exactly like that before, and it's worked perfectly for them for the last 6 months, with zero downtime. Took me about 3 days to figure out packages and tweaks for their particular needs. Onsite, it took me about an hour from blank hard drive to full production. They put about $200 into it, and it replaced an aging "server" some salesman had sold the county for about $3000 that they'd never been able to keep up and running for more than a week on their own.
Can't do Linux? Download it and teach yourself. Anything less is just excuses.
Re:Same as last year. (Score:3, Insightful)
That is easy to say, but I'm not so sure it is fair. Just installing a Linux desktop or mock server is nothing compared to actually running and maintaining a production system. Sure, you can learn the basics, but you're not going to be proficient (and get good uptimes) until you've had real-world problems/issues to deal with. Especially if the Windows admin is too young to have much DOS/CLI experience.
It helps to have some minor fu
Re:Same as last year. (Score:3, Insightful)
Easing self-teaching is useful, but it's hardly an excuse for poor or sparse documentation. Self-teaching inherently takes more time (and time is money).
It's easy to learn to do simple things (as in your library example), but where complex business needs are involved, things can get more complex extremely quickly. Documentation and tools help greatly at this point.
The fact that one can teach oneself for free is a great advantage.
20% more UPTIME? (Score:3, Informative)
That means that Red Hat Linux has to have at least 1,461 hours of annual downtime, which is about 60 days. (This is so that it would have no more than 7,305 hours of annual uptime, in order to allow 20% more than that to fit into one year of 365.25 days.)
I don't think so.
I hate writers who don't understand math.
I'm just not seeing it (Score:5, Informative)
Re:I'm just not seeing it (Score:3, Insightful)
For years the Linux mantra was that Windows couldn't do enterprise, wasn't secure, and on and on... however, with a good, well-trained administrator behind the console, ANY operating system can be made secure and can do enterprise.
Here, because of the "shoot first, ask questions later" attitude of the Linux support team, the Linux environment (limited to some Web server farms, SMTP servers and a few SAMBA servers) has uptime around 99.0%. The Windows
Yup, agreed. (Score:5, Funny)
Another 'study' by the Yankee Group... (Score:4, Insightful)
Why does Slashdot continue to even acknowledge 'studies' performed by the Yankee Group? You think we would have learned [slashdot.org] our lesson [slashdot.org] by now...
Hard evidence of collusion may be lacking, but it's still patently obvious that Laura DiDio [wikipedia.org] is a Microsoft shill [groklaw.net].
Past experience should be enough to show this, but just in case it's not clear enough yet, here's a snippet of TFA:
Translation: "We don't know how to support Linux, so it's Linux's fault."
Also from TFA:
I'll bet they did...when you turn out such a ridiculously skewed 'study', you pretty much have to make certain everyone knows how 'unbiased' it is.
Re:Another 'study' by the Yankee Group... (Score:3, Funny)
What exactly does bias smell like? ;)
Bullshit.
Defensiveness (Score:2, Insightful)
Re:Defensiveness (Score:3, Insightful)
I don't have access to the full report, but I wonder how the "lack of documentation" came into play. Was a certified admin working each system? Did the admin call vendor support for help resolving any of the incidents? Was the particular problem experienced by each server the same? Hardware or software problems? Were all the servers
Re:Defensiveness (Score:3, Informative)
Because there are no documentation problems. Can you find an OS with a better-documented API than Linux? More documentation than Gentoo has? The problem is that they have not studied what I'd dare say are the serious users; they've studied those without in-house competence in Linux.
*NIX-admins are probably more expensive than
Re:Defensiveness (Score:5, Informative)
There are boatloads of documentation available. Ever hear of The Linux Documentation Project [tldp.org]? Plus, most distributions offer lots of very good documentation. Why there was a Slashdot story [slashdot.org] just two days ago about the excellent Ubuntu documentation. There are no fewer than 600 books available about Red Hat distros [amazon.com] available for sale on Amazon. Not to mention that Red Hat Enterprise Linux itself includes lots of lots of documentation and most of it is available on the Web gratis [redhat.com]. Plus the hundreds of open source apps that include very good documentation with their package. Have you actually read the documentation and free books available on the Samba website [samba.org]? It's darned good!
Any perceived documentation issue is in Laura DiDiot's head.
Linux Documentation issue? For MCSEs? (Score:3, Insightful)
It's probably more a case of MCSEs who don't grok the concepts of Linux and how it is documented. The survey was supposedly limited to shops that run both Windows and Linux. That means you are likely dealing with a bunch of MCSEs who have been working with Windows for over a decade and have only in the past couple of years been given Linux to administer as well. If such a survey were limited to shops that had been running both systems for an equal period of time and have people on staff who are speciali
Agreed (Score:3, Insightful)
Documentation for big projects (Apache, Squid, etc.) is usually easy to find. However, when you start running between versions and other issues, suddenly the waters become a bit murky. Google is often friendly, but lately I've been lucky to find docs in English, let alone for the version(s) of software I'm using.
I've also been taking my LPI (my employer's idea). It's a freaking Linux certification/exam and has no official documentation.
True, and I think eventually OSS docs will shine (Score:3, Insightful)
I think it's interesting that MS sells a lot of software (Office, for example) that has less-than-great documentation, and that this is also a complaint people have with OSS. Currently there's a market for commercial documentation for both types of software - my local Borders has lots of books on both Linux and Windows Server. But eventually I expect that OSS documentation will improve to the point where it's better than what is provided with proprietary software.
Re:Defensiveness (Score:3, Insightful)
If I want to modify the software itself, I can grab the latest version from CVS, make my changes, create a patch, and then submit that patch. Maybe it gets taken into the main tree, maybe it doesn't, but in either case there's a known workflow for contributing to the project.
With documentation, it's not so clear. Let's say I wanted to work on the docum
my Math more reliable than Yankee survey (Score:5, Informative)
Another article claiming my OS is better than yours; another article with virtually no information, and the information therein off-the-scale incomprehensible and inconsistent.
Here's a casual observation: the article says, "Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime." Later in the article, this: "On average, individual enterprise Windows, Linux, and Unix servers experienced 3 to 5 failures per server per year in 2005, generating 10 to 19.5 hours of annual downtime for each server." Let's just say a Linux server has 24 hours of downtime a year (higher than the "survey" says). That leaves 364 days of uptime in a year, 365 days in a leap year. Implied in the article, then, a Windows 2003 server would have to be "up" approximately 20% more to satisfy the "claim". Now, I am not a calendar "expert", but I'm having a difficult time believing that a Windows 2003 server is up an average of 364 * 1.2, or 436.8, days a year. If it is, I'm buying.
Also from the article: "..., But standard Red Hat Enterprise Linux, and Linux distributions from "niche" open source vendors, are offline more and longer than either Windows or Unix competitors, the survey said. The reason: the scarcity of Linux and open source documentation...."
First, this is a survey, and the article hardly points to any data that support it - in my book a no-no when trying to prove a point. Secondly, assuming there's truthiness in this, my inference from the previous paragraph is, "Red Hat would be a little easier to set up and use if it had better documentation..."
Re:my Math more reliable than Yankee survey (Score:5, Funny)
Re:my Math more reliable than Yankee survey (Score:3, Informative)
"Windows 2003 Server, in fact, led the popular Red Hat Enterprise Linux with nearly 20 percent more annual uptime."
Which part of the sentence is unclear? 20% MORE ANNUAL UPTIME.
To achieve this claim, what would your numbers be?
Note that it DOESN'T say "20% more downtime". It is very clear: "20% MORE ANNUAL UPTIME". The MINIMUM requirement to achieve this is 60 downtime days on the RHEL box.
Note that we ARE being "relative": the 60 downtime days is the MINIMUM. Assuming 100% uptime o
Pssh. (Score:2)
Every time I see an article like this, I view it as utter crap. There are no numbers, there are no sources, and it utterly contradicts my daily experience...Well, except for the "stability of regular unix" bit, which is pretty much a no-brainer.
I run Linux in a work environment, I run Linux in my home environment. I get occasional hardware failures, but that's about it. Applications don't lock things up irretrievably. It needs less ba
Let's Google this, shall we? (Score:2)
Or, let's try site:slashdot.org "Yankee Group" [google.com]
Unbiased? No freaking way.
Article empty of content (Score:2)
So the study wasn't funded by Microsoft? What does that tell us? If this was research done by asking Windows admins which OS they found had the greatest uptime, wouldn't you expect results along these lines? Of course, we can't know how or why these results were obtained, because the article is essentially four paragraphs saying Windows roxors, Lin
Downtime? (Score:3, Insightful)
Re:Downtime? (Score:3, Interesting)
Documentation for running a server? (Score:3, Interesting)
You need documentation to make changes, not to leave the server alone.
If you're making changes you're not measuring the reliability of the OS/software, you're measuring software and admin performance.
BSD (Score:2, Interesting)
Uptime vs. downtime (Score:5, Informative)
Total Bullshit (Score:3, Insightful)
I have unix servers right now with uptime measured in YEARS. There are no Windows boxes that can make that claim. Period. I've had outages on occasion due to DDOS or system probes that caused a process to terminate over the years, but I've never had any type of wholesale outage that you'd typically get with most Windows installations. Does anyone have any details on the methodology of the testing? It's obviously bogus.
Re:Total Bullshit (Score:3, Insightful)
So you're saying you haven't installed a service patch to your Windows 2003 box that required a reboot in 2.5 years? Care to post the web server address? I'm betting you won't dare.
WxP Pro (Score:3, Informative)
Another box that's Win2k pro that's been up almost 2...
The one app they run is heavily used (dispatch for a 911 center).
Re:WxP Pro (Score:5, Insightful)
Re:WxP Pro (Score:3, Insightful)
Systems like these used in call centers often:
1) Have no route to the internet.
2) Have both external storage drives and USB ports disabled.
3) Do not allow users to log in with administrative accounts.
4) Have proper group policy restrictions in place.
More often than not, even without the latest patches from Microsoft, machines in this state are perfectly secure and stable. Argue if you'd like, but there are plenty of offices I've worked in where the Windows machines aren't even up to SP2, and because
Re:WxP Pro (Score:3, Informative)
"Back in the day" (TM), 911 dispatch was an old green screen with a serial connection to Ma Bell for the ANI/ALI information. Radios were 20 year old cards in a rack of radio equipment. Stuff gets hard to find replacements for, it gets upgraded.
Enter new systems:
The phone switch? Windows controls the user accounts. The phones are Windows interfaces to hardware. Controls which line gets switched to the dispatchers headset. Completely out of my control o
Yankee (Score:5, Informative)
http://www.computerworld.com/softwaretopics/os/li
Laura DiDio, an analyst at The Yankee Group in Boston, said she was shown two or three samples of the allegedly copied Linux code, and it appeared to her that the sections were a "copy and paste" match of the SCO Unix code that she was shown in comparison.
DiDio and the other analysts were able to view the code only under a nondisclosure agreement,
Watch the "expert" Laura Didio on video from a credible source:
http://www.microsoft.com/windowsserversystem/fact
Enjoy her!
*lol*
Re:Yankee (Score:3, Funny)
Looks like she's been getting quite a few free lunches from Microsoft.
Doesn't jibe with reality (Score:5, Informative)
a. a machine suffers a hardware failure (fairly rare) or
b. there's a kernel update that impacts security
In the case of (b), I apply the updated rpms and reboot which normally results in a downtime of approximately 60 seconds for that server. This might happen a few times a year (single digits).
For our small number of Windows 2003 server boxes, it seems that each "windows update" cycle recommends a restart. We'll call that a once a month reboot when Microsoft gets around to releasing their monthly cleanup. Total server downtime is maybe 2-3 minutes (windows takes a bit longer to reboot on the identical hardware used with our Linux machines).
So while I *could* say that our windows servers are down XYZ percent more than our Linux servers, in terms of actual downtime, both platforms are about the same, with Linux seemingly holding a small edge in my experience.
Cheers,
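The comparison in the comment above is easy to put in numbers. A minimal sketch, where the reboot counts and durations are assumptions taken from the figures quoted (roughly five 60-second Linux reboots a year for kernel updates versus twelve 150-second Windows reboots for monthly updates):

```python
# Back-of-envelope annual downtime from the figures in the comment above.
# Assumed: ~5 Linux kernel-update reboots/year at ~60 s each,
#          ~12 monthly Windows Update reboots/year at ~150 s each.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

linux_downtime = 5 * 60       # seconds down per year
windows_downtime = 12 * 150   # seconds down per year

for name, down in [("Linux", linux_downtime), ("Windows", windows_downtime)]:
    uptime_pct = 100 * (1 - down / SECONDS_PER_YEAR)
    print(f"{name}: {down} s down/year, {uptime_pct:.5f}% uptime")
```

Both land around 99.99% uptime, which is the poster's point: the relative difference sounds dramatic, the absolute difference is a few minutes a year.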
Windows Server is nice..... (Score:4, Insightful)
Re:Windows Server is nice..... (Score:3, Insightful)
What? (Score:4, Interesting)
Hmm, that's odd. Linux documentation has always been in great abundance. It's getting information about how OS internals work that has caused me the biggest OS-to-application headaches. (Both Unix and Windows)
On a broader note, said Yankee analyst Laura DiDio
Ohhhhhh, I see. Laura DiDio had her nasty little Microsoft-led hand in this survey.
Yankee group website uses win 2000 (Score:5, Informative)
The thing is, it SOUNDS plausible. (Score:4, Insightful)
So, it seems to me that ON AVERAGE, Linux servers would be down more than others, because so many people would be trying to admin themselves. The lack of documentation would definitely be a problem. (Actually, there's plenty of documentation. FINDING it is the problem. I don't know enough to come up with the right Google search terms! And posting to usenet is hit or miss.)
The question is what the uptime is like for Linux distros where you're paying out the ass for support (like you would for Windows or UNIX anyway). That's got to be such a small portion of Linux servers that it's not dragging the percentages up.
The real metric should be UPTIME / ($$ spent on support).
Be careful about those divides by zero.
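The metric the poster proposes, with a guard for the divide-by-zero he warns about, can be sketched like this (`uptime_per_dollar` is a hypothetical helper for illustration, not anything from the article):

```python
def uptime_per_dollar(uptime_hours: float, support_cost: float) -> float:
    """Hypothetical 'uptime per support dollar' metric.

    Guards against the divide-by-zero the comment warns about:
    a shop running on free community support spends $0.
    """
    if support_cost == 0:
        return float("inf")  # free support wins outright
    return uptime_hours / support_cost

print(uptime_per_dollar(8750.0, 1000.0))  # 8.75 hours of uptime per dollar
print(uptime_per_dollar(8750.0, 0.0))     # inf
```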
If it is a doc issue.... (Score:3, Insightful)
My own study (Score:3, Funny)
Debian Sarge PPC: 61 days, 12 min
OS X 10.4 PPC: 51 days, 1:02
NetBSD m68k: 107 days, 37 mins
So, if you want the highest uptime, use NetBSD on a 25MHz 68040. Further, I contend that my study is at least as believable as the article cited in the submission.
MS studies are not just FUD (Score:3, Interesting)
We compared many factors including user management, authentication, "ghosting" new machines remotely, remote application installs, file sharing, delegating authority to subordinate administrators, and much much more. The Windows and Linux guys would work on a "lab" side by side, often peeking over to see how the other was doing. At the end of each lab we'd all have a discussion about the number of steps, any problems, company and community support, the ease/frustration factor, and how it went overall. We wrote about all these factors and rated them on 10-point scales per lab, and condensed those into one comprehensive graph showing overall ease-of-use of each NOS.
Long story short, Windows came out on top by a huge margin in every field - ease, usability, intuitiveness, support, everything. In fact, the only topic where Linux came even close to Windows was in community support, and even that was only 50% of Windows' score. At the end of the project the Linux expert garnered a lot of respect for Windows and quashed most of his prejudices. Needless to say, MS soon compiled our white paper into marketing materials and stuck them on http://www.microsoft.com/getthefacts [microsoft.com] (but it's been replaced by more recent studies).
I was a little disappointed that we couldn't expand the scope of the test to put stuff like Apache and Squid and mySQL through the paces, but the topic was enterprise administration, not publishing live services. I also would have liked to have tested custom installs of other linux flavours like Debian or Slackware, but neither product had a specific enterprise distribution.
So don't be too quick to label all pro-Windows studies BS or FUD or other ignorant catch-all acronyms. I personally was funded by MS to spearhead an impartial study, and MS management had a genuine interest in improving their products. I can't speak for the study in TFA, but my own was conducted with nothing but integrity and truthfulness.
Re:MS studies are not just FUD (Score:3, Informative)
Nothing against "Communication Majors" really. (Score:4, Interesting)
Others have already commented on the lack of clarity, the need to read between the lines, the absence of the most elementary numbers and facts about this "study" (as in: how many respondents, how recruited, how many rejected and why, how was uptime defined and measured, what were the uptime numbers, (contingency table by OS this year, contingency table by OS previous year)).
If any students read this, let me take this opportunity to warn you. Submit a "report" like this to any serious faculty and look forward to an F grade. Unless you're a "Communications Major", obviously, in which case you'll be complimented on the flow of your prose.
I'm guessing here of course, but I think that the real study was conducted and written by someone totally different, and Ms. DiDio got to write the "teaser": i.e. the part that you can release without divulging any real information that you would otherwise be required to pay for.
Raise your hand... (Score:4, Interesting)
Okay then: raise your hand if you know that there are 600-odd-page gorilla Linux reference books out there which, should you need documentation, will be 100x better than anything included with the software.
Raise your hand if you know where to seek help, such as #linuxhelp and #linux on EFNet.
Case in point: why not put a properly run Linux server against a properly run Windows server? That is what it comes down to. A trained, professional, and experienced admin who has learnt the software they are running and knows it well, for a specific purpose. Put Linux as a file server against Windows as a file server, with any optimizations possible and equivalent configurations agreed upon beforehand. Put Linux versus Windows as a web server with a knowledgeable admin. This "good at neither" system doesn't work!
-M
Windows documentation (Score:3, Interesting)
Wrong assumptions. (Score:3, Insightful)
I generally find that whenever Linux is being attacked, it is only through a model with serious logical fallacies that are carefully covered over by seemingly innocent mistakes. In reality these are carefully engineered FUDs designed to sound valid to most common people but failing under any serious scrutiny.
I can conclude from these quotes that the author may feel that Windows' point-and-click interface should somehow justify its inefficiencies compared to Linux. However, Linux's supposed lack of point-and-click GUI tools is very old news that got washed away several years ago when tools like Mandrake's free setup tools for Red Hat and SuSE's YaST came about. And besides, it is better to have to learn to set up systems using text config files and then have them run problem-free for a year, than to point and click for a day and end up with a system that needs constant attention just to be kept running.
A technical note (Score:3, Informative)
Such complicated techniques for a basic thing like an upgrade make me very nervous. What happens if something goes wrong with the extensive bookkeeping in the middle of the upgrade?
Comparisons and secrecy and independence (Score:3, Insightful)
There is always some study that says one OS is better than another. Most often the study is funded by one of the OS vendors. That doesn't necessarily make it useless. What makes these studies useless is when their details are not released.
These studies present themselves as scientific but they are not. In true science, the data and the methodologies are presented for scrutiny. There could be issues with either or both that would harm the results. True science involves skepticism.
Remember a few years ago, when some cult claimed that it had cloned a human baby? The first reaction was "Can we see and test the baby's DNA?" When the answer was no, the majority of scientists dismissed the claims outright. The minority reserved judgement until there was actual proof.
Until I can look at the study, I'm not going to believe it. Since no one paid for the study, the Yankee Group does not have any restrictions unless they mean to profit by selling the study.
ridiculous assertions (Score:4, Insightful)
As an IT professional, I can tell you that if any of our linux servers were to go down, there would be people screaming bloody murder all over the place within a few moments. Downtime is unacceptable for infrastructure services, and linux has performed flawlessly for the fortune 100 company where I am employed.
I think, as other posters have noted, the key piece of information that was unwittingly leaked was that the survey was only open to windoze shops, and most likely included some MCSEs' Linux test boxes in the downtime figures. That's really the only thing that makes sense, as downtime simply wouldn't be tolerated in a normal production environment.
Anyone who works with Linux professionally and is aware that it's been running 24x7 for years at amazon.com and at other firms such as my own employer will find it quite odd to read about all this extended downtime and the nonsensical reasons given for it.
In my shop..... (Score:4, Informative)
I have not had a reboot of the Linux system we use here in well over a year (448 days, to be exact), even though I have upgraded applications and applied many patches.
BSD Not Evaluated? (Score:3, Interesting)
While boxes are boxes and OSs are OSs, the application that the server is running needs to be factored in. There are many cases where a BSD server may be a better choice than Linux or Windows just as there are cases where Linux or Windows may be the better choice. I found it interesting that I can find no reference to a BSD Unix in any of the links to the study.
So, since this study has so many unanswered questions relating to function, measurement criteria (what is considered downtime?), application, hardware, etc., the survey is pretty much worthless.
Box+OS is a tool and I use the right tool for the job. One size does not fit all solutions.
Bad examples for a bad result. (Score:4, Insightful)
Whatever happened to limiting exploitable processes? Windows' method of protecting services is all based around its firewall. Ever try to configure a Windows box to run slimmed down? It's a pain in the ass. How about hardened? Good luck; apply the NIST standard lockdown SecPol to a 2k3 box and you'll see what I mean.
Take a *BSD/Trustix(+SELINUX)/Debian(+SELINUX) box install with 3 services AND a firewall in a 100meg footprint, and call it a day. Windows can't compete with the kinda uptime you get out of a stripped down OS. Oh they try with XP-Embedded and the likes but it's certainly not within the same realm of ease to create and deploy the OS that the *nixes give you. Not to mention, how many times have you had to troubleshoot a problem in Windows that ended up being caused by some unrelated service? I can tell you from my experience, it doesn't happen very often on a machine running single digit numbers of services.
On top of which, they neatly avoided the shops smart enough not to run Windows devices in their NOCs, which probably have much better trained staff on the Unix hardware and would have skewed the numbers with nearly-zero downtime figures. And how many people new to Unix reboot when they could have just restarted a service? This whole thing smells fishy.
Press Release and Interpretation (Score:4, Informative)
You probably should not read the DiDio-bashing going on over at Slashdot today, but I do see what I believe is an error in the presentation of the data in the press release http://www.yankeegroup.com/public/news_releases/news_release_detail.jsp?ID=PressReleases/news.serverreliabilitysurvey.DiDio.htm [yankeegroup.com]
The specific statement, "with nearly 20% more annual uptime", is, I believe, not factually supported by your numbers. Do you mean that Windows has 20% LESS DOWNTIME than RHEL?
"on average, individual corporate Linux, Windows and Unix servers experience three to five failures per server per year, resulting in 10.0 to 19.5 hours of annual downtime for each server."
If RHEL had 19.5 hours of downtime and Windows had roughly 15 hours of downtime, that would be about 20% less downtime. Five hours less downtime per year is actual, real data and would be useful in the press release.
On the other hand, 20% more annual uptime would actually result in RHEL being down nearly 61 DAYS per year, assuming Windows is up 100.000%. Note: 60.8333 days = 365 - (365/1.2)
The report may be correct. The press release is most certainly in error.
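The reconstruction in the comment above is easy to check numerically. A minimal sketch, assuming Windows is up all 365 days:

```python
# If "20% more uptime" for Windows were taken literally, with Windows
# up all 365 days, RHEL's uptime would have to be 365/1.2 days:
windows_up_days = 365.0
rhel_up_days = windows_up_days / 1.2
rhel_down_days = windows_up_days - rhel_up_days
print(f"RHEL downtime: {rhel_down_days:.4f} days/year")  # ~60.8333

# The press release's own figures (10.0 to 19.5 hours of downtime per
# server per year) are orders of magnitude smaller, so the claim only
# makes sense as "20% less downtime", e.g. 19.5 h vs. 19.5 * 0.8 h:
print(f"20% less than 19.5 h: {19.5 * 0.8:.1f} h")  # 15.6
```

Sixty-one days of downtime per server per year is plainly inconsistent with the survey's own 10.0-19.5 hour figure, which is the commenter's point.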
No Use (Score:3, Insightful)
There is no use. With no real information, this study [yankeegroup.com] is crap. It is just throwing more FUD on the pile. One of my favorite bits: I love that Linux is referred to as a less mature operating system. "Yankee Group determined a significant portion of this outage time is attributed to the scarcity of Linux and open source documentation compared to the more mature, established operating systems."
In some ways (support for 3D graphics hardware, sound), Linux is not as developed as Windows or Mac, mostly due to prop