Linux vs. NT Reliability

buckrogers writes "Bloor Research has finished a year-long study that shows that Linux is more reliable than Windows. In the study, Linux failed once for four hours because of disk drive issues, while Windows failed 68 times, for a total of 65 hours: "hardware problems (disk), memory (26 times), file management (8 times), and a number of odd problems (33 times)". I like the category "odd problems." "
  • by Anonymous Coward
    "Native Mode" only refers to ActiveDirectory.

    That should have nothing to do with 99% of the problems normal NT shops will encounter.

And no way will my (soon to be Win2000) shop ever run in Native Mode. Non-patched Win9x machines will break. Samba machines will break. The MS LanMan Client DOS boot disks we use for installs will break. If we had OS/2 machines, they would break. Old-school SMB domain authentication is not going away for a long time.
  • No, the story wasn't ridiculous. The original post and yours both have it wrong. Read it for yourself: it says that of the 63,000 items in the list, 20,000+ (IIRC) were what we would consider bugs, mostly long-forgotten issues, etc.; in other words, the result of sloppy programming and project management. The story stated that those list items that weren't bugs were requests for improvements and other feedback items.

    Face it, when you make big promises, can't deliver, have to dump a number of promised features because they're impossibly broken, then still deliver late, you're Microsoft.

  • Seeing as both computers had their hard drive crash, BSD can't do much better than the Linux one, unless its drive was luckier and didn't break. ;) But four hours to change it? That suggests they didn't have a backup drive ready to go with the identical OS preloaded. It wouldn't take more than about a half hour to open a case and swap them out. So it sounds like they also had to do some sort of restore from backup, tape drive perhaps...
  • This was only a summary of the report. Look at the URL; it's not bloor-research.com. It took me under two minutes to follow the link to bloor-research.com, click on "Bloor Interactive," and enter Linux in the search box. Top of the list is this study. If you want the full study, as with many papers of this sort, you'll have to pay the $81US.
  • The thrust of the article seemed to be about using the various OSes as servers... So why in the world does Linux score points for being able to scale downward to run on a Palm Pilot, whereas with Windows, you have to choose CE? That means absolutely nothing to the target market.

    The fact that Linux scales as far down as a Palm Pilot is not really relevant. The fact that it scales down very well to some other systems, like 486s, is very relevant. An old 486 with 16MB of RAM can be a very useful Linux system.

    ---

  • My own experience tells me that Linux and Solaris are comparable in reliability.

    I would have to give the edge to Solaris. It can withstand the equivalent of a nuclear bomb blast, but it's so damn slow that it'd take 15 minutes to tell whether the damn thing had crashed or was operating normally. So I use Linux and not Solaris, with the understanding that Linux may crash in some circumstances Solaris would survive. These rarely occur, however.

  • If in fact identical hardware was used, they must have been exceedingly lucky with that used on the linux system. Why? Simple - a true hardware failure will crash the system. Every. Single. Time. If it's a survivable problem (single-bit error, timing glitch, scsi error, failure of non-system disk, etc), then a crash brought on by it is a software problem and cannot be attributed to hardware.

    So I find it hard to believe that the nt system had 68 times as many hardware failures. Either, as I said, they were exceedingly lucky, or 68 more crashes should be blamed on enntee, not hardware problems. A crash caused by a nonfatal hardware problem is not a hardware problem at all. It's the responsibility of the os to handle hardware problems as gracefully as possible. This means that other than hard, undetectable memory or processor errors (a three-bit error, a cooling problem, etc), pretty much nothing should cause a system crash.

    Linux does a decent, but not great, job of recovering from hardware problems. I have no idea how well enntee recovers from them because every time I've used it it's killed itself so quickly it's impossible to tell what caused the problem. Linux will at least dump an oops or panic if at all possible, but enntee seems to just freeze up most of the time.
  • I dual booted until September on the following hardware:
    • epox kp6-bx (dual PII) board
    • two Celeron 400 CPUs
    • 128 MB RAM
    • adaptec u2w scsi card
    • voodoo III 2000 agp
    • sblive value!
    • linksys 10/100/fast nic
    • lvd scsi cd-rom
    • scsi hard disk
    • ide hard disk
    With Windows 98 I had no end of problems that I couldn't explain. If I tried changing CDs, I had to reboot because the machine would lock (despite disabling autorun). After I installed 3dfx's upgrade drivers, the system refused to wake on mouse or wake on LAN when it went to sleep. When the power went out (which, for a 4-month period, seemed to be every couple of days), Windows would be completely jacked. It did things like make the opening sound clip "stutter." I reinstalled it twice during that period because it was convinced that certain hardware/software wasn't there/didn't work (especially the nic (of which, the first really was fubar (thanks Alabama Power), and the replacement was ok)).

    Linux, by contrast, performs wonderfully. When it went down during the 4-month period, it made me sit through the standard "you didn't shut down properly--checking disks" routine, but after that, everything worked like a charm. I have no problems with the scsi cd, either. I recently managed a 24-day uptime (my longest ever ;)) and then got home from school to discover that Netscape had eaten the system resources (oh well).

    Of course, there are tradeoffs. My sound card works a lot better under Windows (I still haven't figured out how to get emu101k to compile for smp), and I've had problems with Palm and Rio utilities. On the whole, though, I'm much happier now that I'm running one OS (Linux), and not fighting Windows every time I want to accomplish something.


    Who am I?
    Why am I here?
    Where is the chocolate?
  • I submitted this as a Slashdot story about a year ago...

    That's funny - the very bottom of the article says: "Copyright © Frans Godden. This story was first published in the January 2000 edition of ID-side." Perhaps you could submit an article now about what the stock market did in 2001 - I know I'd like to know.

  • In research, carried out by the Institute of Higher Improbability, Professor Branestawm was able to show an infinite, recursive loop on the Slashdot news site.

    "This is the proof we have been looking for! Escher is alive and well, and living in a bit-bucket at MIT."

    News of the discovery quickly spread, but failed to get past the infinite waterfall or the perpetual stairs.

  • IIS runs as a service under NT4.

    On those occasions when it refuses to shut down (say, when I've got some new code and one of my ISAPI threads won't terminate), I always found that attaching the debugger and killing it there worked!
  • > it sounds like many of the problems could have been related to whatever hardware they picked.

    Damn! That sure is a tired old line.

    Windows NT works great but some people run it on crappy hardware.
    and the next one they try is
    It's not NT! It's the Admin that can't cut it! Get a qualified person to run NT.

    The newest one they use is that NT is more SECURE cause it's closed. Nice try Guys!
  • > He just assumes that since the test gave the result that he expected, then the test must have been perfectly executed.

    Where did I say that?

    I said I was tired of people blaming the hardware, then blaming the people running the hardware, because they think the test is wrong. How could the test be UNFAIR if both NT and Linux used the same hardware? Luck, maybe? I think not!
  • If you had to reboot NT more than 68 times, you're simply an incompetent moron.

    That's why I use Linux. It avoids embarrassment and allows me to be productive. No reboots. No lost data. No reinstalls. No excuses.
  • I've found linux to be more reliable than NT in my own experience, but the study is to me meaningless unless I see it for myself. I'm not surprised they found linux more reliable, but I couldn't use it as evidence for anything. That's not quite true, since they do give hard numbers on how many failures each system had. The problem is that they don't tell you what hardware they have, or what the machines were doing, or anything important like that.

    It would be cool if somebody who has a copy of the report umm, posted some of it somewhere ;)
    #define X(x,y) x##y
  • ... and that's not a joke. Each time it caused much hilarity at work, but eventually we got tired of the fun and just reformatted all the "company standard O/S's" into Linux, and we've never looked back since.

    Apart from NT, I've never known any other system to crash while in its idle loop.
  • Yeah, sure, there's no disputing that.

    Let me rephrase my point then. NT is the only O/S I know that crashes while doing bugger-all. Happier?
  • Interesting how the difference in availability between 4 hours and 65 hours of downtime is a meager .69%. 99.99% availability represents a total downtime of 52 minutes per year, or 8.6 seconds per day.

    But think about what those extra 61 hours of downtime will cost you if you're an e-business site.
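
    A quick check of that arithmetic, assuming the standard 8,760-hour year:

    $$A_{\text{Linux}} = \frac{8760 - 4}{8760} \approx 99.95\%, \qquad A_{\text{NT}} = \frac{8760 - 65}{8760} \approx 99.26\%$$

    a gap of about 0.69%. And 99.99% availability allows only $8760 \times 0.0001 \approx 0.88$ hours ($\approx 52.6$ minutes) of downtime per year, or $86400 \times 0.0001 = 8.64$ seconds per day, matching the figures above.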
  • Look, you have to change your frame of reference. I see everyone doing cost/benefit analyses, uptime/reboot ratios, etc. for NT4. Why? Windows 2000 is very loosely based on the NT4 codebase. I mean, hell, they actually introduced another 65,000 bugs, so you know that there's a lot of new stuff in there. To be honest, I've found it quite stable, and I'm using Build 2072, which is 100+ builds from the last release candidate. There's probably 130,000 bugs in that version, yet it hasn't crashed for me once. Neither has Linux. In other words, I've found them to be equally stable. Windows and Linux. How's that for strange?

    I'm not saying that Win2K is as stable as Sun, and probably not even as stable as Linux in general, but you shouldn't write it off so quickly. I mean that in two ways: first, if you're looking for a good, easy to use server, give 2000 a try, and also, for the rest of us, we need to beware of the threat that Windows 2000 poses. A lot of the shortcomings which inveterate Linux advocates use as cannon fodder aren't there anymore. I've always used Linux because I believe in open standards and Open Source, and it would be a shame to see Windows 2000 gain ground, what with Microsoft's usage of exactly the opposite.

    --
  • Right on. I noticed that immediately in the study as well - for all we know, one of those machines was kept in one of the techs' homes w/o AC down south over a hot summer while the other one was in a proper climate-controlled environment. The results would most definitely differ in such a situation.
  • Look, I've been running Solaris, NT and Linux servers in production environments for 5 years now, and I know for a fact that Linux (and Solaris) are many orders of magnitude more reliable than NT, even (indeed, especially) under heavy loads and multiple concurrent applications.

    But this "article"? This isn't a report. It's not research. It's some freelance mom-and-pop computer consultant who works out of his spare bedroom comparing two old machines he set up in a corner of the dining room. Gimme a break. This is no better than the amateur cheese Linux Journal publishes. In fact, I'll bet they'll be reprinting this in a month or two.

    Now that Oracle, Sybase, Informix, IBM, BEA, iPlanet and others have supported products for all three of these OSes, doing apples-to-apples comparisons is easy. Enough with comparing PHP-on-Linux to ASP-on-NT and benchmarking NT on Apache. Let's see what an identical mix of Domino, DB2 and Websphere with IBM's recommended settings can do on identical dual-CPU, major-vendor rackmount servers and be done with it.
  • Today's hardware is very powerful; simple servers like low-usage web servers, email servers, and file servers just don't need the latest 1GHz machine. Simple machines like the Netwinder (I have NO clue how that performs as a high-usage content server, please don't read into that) can do the job fine. However, to get a simple Windows product that is comparable, you need to scale down to CE. Even if you were to scale down to a 386 (~$15) to do some simple stuff, that is much cheaper than a "Palm Pilot". Server != big machine.
  • >But to make it a good game...we'd have to agree on something that actually beats M$.

    You mean, something that M$ beats?

    If so, how about the frustrated sysadmin, who has been trying to keep the NT box from going down for the last 6 days?

    Geoff
  • Ah yes.. well.. let's just hope this wasn't sponsored by RedHat or VA Research or something ;-)
  • I don't usually pull this "moderator baiting" crap, but this comment deserves attention. Drix is right on the money, and in fact maybe a little more than s/he realizes. We're competing with NT when the real competition for Linux is Solaris and Windows 2000. If we want to produce a better OS than Microsoft, we need to produce a better OS than Windows 2000, not a better OS than Windows NT or Windows 95.

  • Lab studies like this mean nothing. What matters is running the server in a production environment. My NT server has run for almost a year now since one of its drives failed in March last year. Before that it had put in another year. NT on my server is really quite stable. The applications on it? Well, that's another story. They crash all the time. :-) On the other hand, one of my fellow admins' NT servers has to be restarted almost daily due to an NT problem. (Same Compaq box, too.) I really do hate studies like this.

    I'm pleased with a 1 year uptime. I won't change. On the other hand if I was my buddy I would change to Linux or some other OS that met my real world needs better.
  • Actually, Marty isn't the only one to have submitted this story before; I did as well when I came across the original Bloor Research paper. Bloor are a fairly well known and respected IT market research company in the UK, specialising in comparative reports of software and systems. However, their reports are always charged for (the last one I bought cost £300, say $480), and it is only after they are about a year or so old that they release the material in a less detailed format for free (like this current article).
  • Many businesses still run "old machines" like Pentiums. They are fast enough for many tasks and they don't have the money to buy everyone new machines. Their reliability is usually very good.
  • They did use the "current" version of NT. Windows 2000 isn't officially shipping yet. I can't buy a copy off-the-shelf at my local software store.

    NT's alleged scalability advantages are irrelevant to many businesses. Almost all of the NT boxes that I have seen are standard single or dual processor systems. 4/8 processor systems are very expensive.

  • No -- Stimuli's got a point. Even though NT4 is single user only, all the Admin tools are multiuser and use RPC only.

    It is also impossible to boot an NT machine without attempting to start all the "Automatic" services. I've had situations where the machine would boot, but you couldn't log in locally because winlogon.exe had died, and there was no other NT machine on the network with which to fix this problem. One cries for a unix-style "single user mode" in these situations. (And, no, the Win2K "Safe Mode" still sucks!)

    I think regedit.exe (not regedt32.exe) runs directly against the local registry, that might be a solution. Of course, regedit.exe is not supported for editing on NT4. Catch-22.
    --

  • Reliability(Linux on x86) is greater than Reliability(Solaris x86) -- Driver support and hardware oddities make Solaris/Intel difficult to deal with.

    (And I have proven this using a Compaq machine that is right on Sun's HCL.)

    --
  • If the "old Pentium machine" was something like a Compaq Proliant 4500, it shouldn't be a problem.

    When NT 4 shipped these were the 'premiere' machines to run it on, and IIRC, Microsoft still uses quite a few of them for web serving.
    --
  • I don't have a link handy, but a month or two ago InfoWorld (print) published a pie chart showing the causes of NT failure incidents.

    The interesting thing was that "Internal NT problem" was just as likely to cause a failure as "Hardware/Drivers"

    The *more* interesting thing is that the data was from Microsoft. (I really wish I had a link handy!)
    --
  • > This is the source of most crashes.

    Not on any of the (hundreds of) NT boxes I've had the pleasure to run. 99% of the BSODs are NTFS.SYS or the SCSI or NIC driver, or a memory parity error. Of course, usually the box just goes to shit without BSODding and needs to be rebooted, but that's usually a background service problem with nothing to do with the GUI.

    I have to think that you've swallowed a line by saying this -- Unix users think that GUI-in-the-kernel is bad (OK, maybe that's true). But then you jump to the "logical" conclusion that that is why NT is not-so-stable. Sorry, the evidence doesn't back you up.

    {I've only seen NT Server crash once on the video driver -- and the box stayed up. It was NT3.51 with the user mode GUI, of course. }
    --
  • This article really doesn't say a whole lot - it doesn't even give any numbers except for downtime. Can anyone find the actual statistics of the study? They didn't provide a link to them, and I can't find it on Bloor's website. It's going to take a little more than the information in this report to convince me, and most anyone else except for current Linux users, to use Linux in a corporate setting.

    The one good point in the article, though, was that neither Linux nor NT is suitable for enterprise environments.

    -lx
  • NT 3.51 was pretty reliable. But when M$ ported the Win95 GUI over to NT, it ran so slow they had to change it so the GUI runs in kernel mode instead of user mode. This is the source of most crashes.
  • Now that Slashdot has deep pockets, the editors ought to purchase commercial reports like this and spill the beans.

    It's no copyright violation if the contents are paraphrased.

  • The thrust of the article seemed to be about using the various OSes as servers... So why in the world does Linux score points for being able to scale downward to run on a Palm Pilot, whereas with Windows, you have to choose CE? That means absolutely nothing to the target market.
  • Actually, you're right. Many of the Windows NT problems are related to application issues. (Of course, some would see this as an inherent weakness of the OS; depending on the circumstance, that may be a deciding factor.)

    I was recently at a Microsoft Partners function that was attended by three Windows 2000 developers. One of them discussed specifically the question of why Windows/IIS web servers needed to be rebooted so often. Here's (approximately) what he said: "We found that they weren't always rebooting because they needed to, but because they wanted it to happen under their control, not when the machine decided it was needed. When we examined the problems, we found most of them were within IIS itself, relating to locked files and non-terminating scripts. Under Windows 2000, IIS runs as a service that you can stop and start by itself without rebooting the OS, and you can schedule it to happen when you want."

    So, in essence, they worked around the problem by providing a more robust solution. Now you can schedule your web services to automatically shut down and restart themselves, without a time-consuming hardware reboot.

    I can say from using both W2K Professional (beta) and Server (gold) that it's far more robust than Windows NT 4.0. You could always achieve good reliability with NT by carefully limiting your choice of hardware and running only software that was needed, but that's no longer going to be as necessary. I still have my beefs with Microsoft, but reliability isn't going to be nearly as high on the list as it used to be.
    ----
  • Comparing Linux to NT isn't quite as fair as comparing NT to 98 or Linux 2.2.3 to 1.3.2. I mean they are completely different OS structures and are both programmed in different ways. They're both called to perform the same task but their differences make them hard to actually compare. With Linux you can customize the kernel to fit your hardware exactly, NT has to be run on the hardware right out of the box. If you compared NT to 98 it would kick its ass, same with comparing an older Linux kernel with the newest ones.
  • hey thanks! (pun intended) I didn't know about 'apropos', I will assuredly check it out.

    WRT #3: agreed, but first you have to *find* the configuration files, which took me a while.
    (for the onlookers: look in /etc)

    -matt

  • >at the command line, getting crap like RTFM (when, in fact, there is
    >no definitive M), and trying to configure ridiculously obfuscated
    >network settings, I'm ready to go back to windows.

    You need to get over the hump, bro. A lot of your problem is probably learning where the docs are (they're there aplenty, just not in an expensively bound, nicely printed manual). You've also got to expect to pay some dues and learn how a different system works.

    I'm in the position of the first poster, but not quite as disheartened (yet). While your response is meant to be encouraging (and it is), it would be even more cheering if it were informative. E.g., the FM is 'right here'. That you couldn't point to a location in your response is a concise illustration of the problem itself. :)

    Anyway, I'm confident that the passage of time and a few million eyeballs and a few thousand hands documenting the eyeball travails will alleviate the situation. It's just that "instant gratification" is sufficiently ingrained in me that I don't want to wait. {smiles}

    -matt
  • > Also, they said old Pentiums were used. Would comparing the two operating systems on such old machines be a fair comparison?

    Of course, as long as both OSes were tested with the same kind of hardware. Old Pentiums are as reliable as new machines. No problem there; they are slow, though. Slowness is not a reliability problem. Crashes are, and the old machines don't crash in much different ways than new ones. Not that it matters much; BOTH OSes ran the old hw, and any hw problem inherent in old machines would strike both.

    Linux failed once in this test; there's no reason for NT to fail much more unless it really is worse.

  • I'm normally on the side of Linux in most arguments, but it's possible that a Linux distro may have a similar number of flaws at any one time as Windows 2K. [Note that I'm referring to distribution rather than version of Linux].

    I'm willing to bet that by the time you add up currently open bugs in XFree, KDE, Gnome, sendmail, nntp, Linux itself, the GNU utilities, compilers etc etc etc, you end up with a number certainly in the thousands.

    It may be regarded as unfair to mention problems with these apps in the same breath as problems with the OS itself, but I'm willing to bet that some of the 63,000 bugs in W2K include problems with Solitaire, Minesweeper and all the other cruft that makes it a rounded package.

    On the good side, I am willing to believe that in Linux most of these problems will have less effect on the smooth running of the rest of the system.

  • You obviously missed yesterday's story about a Microsoft memo outlining 65,000 bugs in Windows 2000.

  • > I think this is pretty much what most people would expect from WinNT 4.

    Gee. And it only seems like yesterday that we were hearing how NT4 was the best thing since sliced bread.

    > It doesn't have a chance against Linux - any distribution.

    So. Now that W63K is (almost) out, you don't feel obligated to believe the Mindcraft benchmarks anymore?

    > Windows 2000, however, is quite a different story.

    Different bugs, but same old story.

    > I'd be very interested to see a similar test performed between Windows 2000 and maybe Debian.

    We eagerly await it as well.

    --
  • Okay. I use both NT & Linux as servers. Have for many years. And here is my offered Expert opinion.

    Regardless of whatever 'studies', or nit-picking.....

    The big problem is... NT likes to crash when you update software. When you try something new. It wants you to reboot all the time. This may seem normal to NT admins, but really... my VA server has been rebooted ONCE in the last year, and that's because we had a 24-hour power outage. There is absolutely NO reason to reboot it, unless you are doing hardware modification, or absolutely need to update a driver (which amounts to hardware...). This is the single biggest reason it makes a good server. You can run multiple different server applications on it, and work on one without risking the others. In NT, this practice is suicide.

  • My last customer was using Exchange Servers (two of them).

    I am sure that some people have success with MS Exchange.

    However my experience has been that it is CRAP.

    My employer (a large multinational with about 30,000 employees world wide) uses exchange for both email and group applications.

    The problems with this are legion. Exchange has a workable client for ONE and only ONE OS. Guess which. If you're a non-Windows user, forget it. It's proprietary out the wazoo. There is a client for the Mac, but it is missing so many features compared to the Windows client you might as well forget it.

    The servers we have go through periods of unbelievable flakiness. Sometimes they will work fine for a few months. However, if the 'troubles' start, forget it. The servers will be up and down for weeks at a time. And when they are up they will act as if they are running on a Commodore 64, not a high-end Compaq server.

    It has gotten to the point a couple of times where my company has threatened to sue Microsoft because of these reliability issues. Twice Microsoft flew engineers out from Redmond to try to get the systems working normally. Didn't make any difference.

    I cannot believe that people use MS Exchange as an enterprise mail system of this nature. I think you would be MUCH better off with Sendmail plus NNTP.

  • > I read a statement that the attackers were obviously knowledgeable about both Unix and networks.

    > That suggests to me that the attackers were able to plant their zombie programs on Unix machines but not on NT ones.


    I think that is nonsense. It's like something that Microsoft would post as a reason for buying NT in a FUD campaign.

    There are plenty of programs like L0pht and BO 2000 that will turn your NT box into a zombie. The number of people that have had their Windows 9x machines compromised after putting them on a cable modem is legendary.

  • The link you read wasn't the study - it was an article, written by a reporter, outlining the major points of the study.

    Man, calm down... If these little details bother you so much, go out and drop a few grand to buy the actual study. That _is_ how these groups operate, ya know? They don't just give away the fruit of their labours.

    What you've read is akin to that new bestseller's blurb in your local paper - you can't pick apart the plot line based on that. If you want the right to complain about details, go and buy the book!
  • Before everyone gets all pissed off at Bloor, let's all look at the URL on that page.

    Huh?... Wait a second! This is from a NEWS SITE (and not a very reputable or technically inclined one, at that). That article is not the report. The report, if it's a typical Bloor report, will be a three hundred page monster that includes serial numbers for the hardware used and core dumps of every application fault.

    This is some reporter's synopsis of the Bloor report, and as such will obviously cut down on the detail. Maybe it's not enough for you. But maybe it's enough to convince a few CIOs out there to purchase the actual report and see how the products compare.

    You can't criticize the report until you've actually seen it.
  • Notice they don't say what exactly constitutes a memory problem. I've had memory problems that cause crashes all the time (I like to call them segment violations), caused by single-bit errors in my memory.

    Linux, however, will terminate a program with such a fault. Six times out of ten (in my purely anecdotal experience), NT will require a reboot for a segv in a non-trivial app. That is an important thing to look at for anyone considering using either of these two as servers.

    If I ssh into my webserver to do some remote admin, and I segv linuxconf (as if I'd use it, but...), I can be safe in the knowledge that Apache is still running.
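
    For the onlookers, a minimal sketch of that isolation (plain C, POSIX calls only; nothing here comes from the study): the child deliberately segfaults, the kernel kills only the child, and the parent carries on.

        /* The child dereferences a NULL pointer; only the child dies
         * with SIGSEGV, and the parent keeps running -- the same reason
         * a crashed linuxconf leaves Apache untouched. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <signal.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int status;
            pid_t pid = fork();

            if (pid < 0)
                return 1;              /* fork failed */
            if (pid == 0) {            /* child: crash on purpose */
                volatile int *p = NULL;
                *p = 42;               /* kernel delivers SIGSEGV here */
                exit(0);               /* never reached */
            }
            waitpid(pid, &status, 0);  /* parent outlives the crash */
            if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV)
                printf("child %d segfaulted; parent still running\n",
                       (int)pid);
            return 0;
        }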

    Ever crashed the remote admin stuff on NT? You take down one Backoffice App, and they all go on sympathy strike.

    People running servers in the real world need to know these things.
  • >But this "article"? This isn't a report.
    >It's not research.

    Very good. This is an ARTICLE about a report. I've found the Bloor group to be pretty good (a hell of a lot better than Gartner, that's for sure), so I think the report would be worth reading.

    But you can't judge the report from what this particular reporter decided was important for a lay audience. Remember: what you read wasn't the report, it was a sound-bited summary of its conclusions by a reporter we may or may not trust.
  • That's not a study. Two machines, with no details about hardware, and they're trying to draw conclusions from that? Be real; that's less useful than the infamous NT vs. linux study everyone was up in arms about.

    Be rigorous. It's good for the soul and it prevents people from laughing at you.
  • I've been Beta testing it since November, and I have to admit I'm impressed... but it was only more stable when compared to my NT and 98 stations.

    It was much easier to set up than WinNT 4 or SuSE, and it didn't crash on install the way 98 does (it usually takes me 3+ reinstalls to complete an install of 98).
    I like the interface, I like the new Admin control layout, and I like that it appears (to my eye) to run faster than NT.

    However, I usually have at least one crash a week that requires a reboot, and three or four crashes a week (usually Netscape) that just require restarting the App.

    Incidentally (and this is in no way in disparagement of W2K), when Iomega says "We do not currently support beta OS's like W2K," what they are really saying is "Please, please, for the love of God do not install this on your W2K machine yet!"
    16 hours later I got the OS back... and now I know how to fix it, but still...

    I've had great success running games I didn't expect to run (Descent 3 & Alien Crossfire)
    Not so great success running games I really wanted to run (Mechwarrior 3 & Carmageddon)

    Applications that I expected to run (3d Studio Max 2.5) didn't... but I think that may have been due to a lack of OpenGL support.

    However Photoshop runs great...

    I just got the final release... we'll see how it does as a web server up against my SuSE machine...
  • From the article:

    "Bloor Research had both operating systems running on relatively old Pentium machines."

    Uhhh, hello? Am I the only one who laughs at a study that purports to judge the reliability of two operating systems based on how they ran on two machines that aren't even new off-the-rack? Yeah, I know that we all use old machines to run Linux for all kinds of uses, but this isn't how we're going to win over the community.

    If someone tested a cancer cure on two people, one who got a placebo and one who got the real thing, I wouldn't go by their "research." They'd be tossed out of the medical community.

    Don't get me wrong, I love Linux, but let's not go trumpeting this as a success for the Linux community. We'd look like idiots. All the Windoze people have to say is, "Great. Now let's try the same thing on six new identical machines with redundant power supplies and drive arrays, just like you would do with a critical server in the real world."
  • If you would just type "go" at the PROM, it would pick up right where it left off...
  • You can start and stop the Web Publishing Service (aka IIS) now with NT4. The problem is that this does not always fix the problem. I have seen overflow attacks that just make the service fail; it just won't serve up pages, and the service doesn't stop at all. You have to make a monitor do a GET on the server to make sure it is OK, and if not, stop the service and see if that fixes the problem.
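
    A rough sketch of such a monitor, in plain C with POSIX sockets (the address is made up, on NT itself you'd use the Winsock equivalents, and actually restarting the service is left to whatever "net stop"/"net start" wrapper you already have):

        /* Hypothetical health check: GET the front page; if nothing
         * comes back, flag the service for a restart. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        static int http_alive(const char *ip, int port)
        {
            struct sockaddr_in addr;
            char buf[64];
            const char *req = "GET / HTTP/1.0\r\n\r\n";
            int n, s = socket(AF_INET, SOCK_STREAM, 0);

            if (s < 0) return 0;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            addr.sin_addr.s_addr = inet_addr(ip);
            if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
                close(s);
                return 0;
            }
            send(s, req, strlen(req), 0);
            n = recv(s, buf, sizeof buf, 0);   /* any bytes back = alive */
            close(s);
            return n > 0;
        }

        int main(void)
        {
            if (http_alive("10.0.0.5", 80))    /* hypothetical server address */
                printf("server OK\n");
            else
                printf("no response to GET -- cycle the service\n");
            return 0;
        }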

    I have noticed other problems with NT admins installing software and NOT rebooting, forgetting about it and NOT keeping a server log book. Then when NT does crash they restart it, only to BSOD because of some fubared program they never finished installing. So a lot of NT problems come from bad administration. On NT it is also a bitch to run more than one BIG service at a time. You need to parcel out File, Print, Web, Email and SQL. Why, you say? Well, you need to reboot all the time to install software and you can't very well take down your whole company to do it; this is just one reason, there are many others.

    We have not even begun to talk about disaster recovery on NT. I would like to hear more on that topic. :)
  • NT has various memory leaks; it has to be restarted or it will eventually crash. It could be the apps that are causing it, but it is the apps that give NT all its functionality, so that's no excuse.
  • Umm, 16X is a bit more than luck. If you don't believe this test, I have a quick exercise for you to do. Install a dual-boot NT/Linux box; when one crashes, switch to the other. At the end of the year tell me which one you used more.
  • I reboot all our NT machines at least once a week (52 times/yr). If you don't they'll take care of it themselves...
  • you can "assume" if you want, but you're the only one who believes it. The ONE and ONLY failure for linux was the HD, causeing a four hour downtime. If replacing a harddrive on a linux machine is roughly akin to replacing one on an NT machine..you rcan remove one crash and four hours from both results. So now, Linux was not down at all and NT was down for 61 hours, ie. Linux is infinitely more stable than NT.

    We can't assume a damn thing, but it's nice to see a study (rather than a press release) that backs up what has been my real-life experience.
  • Maybe Windows just stresses out the hardware more and causes itself to crash because of that?

    Could this be because of the difference between programming in a theoretical environment (i.e. Redmond) and programming in a real-world environment (i.e. the real world)?

    Did ESR cover this in CaTB?
  • Almost every software package that I install on our NT boxes says "Please reboot to complete installation" at the end of it. This is standard, in my experience, and rebooting NT never hurts, while leaving it up often does. I hear that W2K doesn't need this, but until I install 50+ vendors products on it, I won't believe it.
  • Agreed. Linux has replaced Windows for me on the server and the desktop. I'm quite happy. I have found everything I need in the Linux world (except for Starcraft!...). What the hell do I care what everyone else uses?

    The only time I feel compelled to get into these arguments is with a boss that thinks Redmond is synonymous with Mt. Olympus. I don't usually work at places like that long.

  • > I was recently at a Microsoft Partners function that was attended by three Windows 2000 developers. One of them discussed specifically the question of why Windows/IIS web servers needed to be rebooted so often. Here's (approximately) what he said: "We found that they weren't always rebooting because they needed to, but because they wanted it to happen under their control, not when the machine decided it was needed. When we examined the problems, we found most of them were within IIS itself, relating to locked files and non-terminating scripts. Under Windows 2000, IIS runs as a service that you can stop and start by itself without rebooting the OS, and you can schedule it to happen when you want."

    Which is actually - believe it or not - the same solution that Apache uses. It'll reboot itself now and then to reduce memory leaks.

    Talk about your bugless OSS software... Shouldn't those leaks be fixed instead?

    Simon
  • Remember, one of the major "selling points" of SP3 (aside from the fact that it replaced *shudder* SP2) was that it rebooted "50% faster". :)

    Which is actually very useful if you're using NT as a workstation, and you do environmentally conscious things like turning the machine off at night when no-one's using it...

    Simon
  • If any piece of hardware is going down every 90 minutes then the admin ought to be taken out and shot. I can't think of a single OS that can't be made more stable than that.

    It's pretty damned funny how people always say "I made a Linux box and replaced such and such a functionality of an unstable NT box" when in almost every case a properly configured NT box could easily do the job. I'd like to see some backed up testimonials about Linux in the enterprise much like you see testimonials about NT in the enterprise. Oddly enough all I ever see on Slashdot is some AC posting "I work for the largest company in the world, we have a gross income larger than the cumulative GNP of any fourteen countries you care to name, and we run linux exclusively from our secretaries typing away in vi to our most technologically advanced toilets, running PHP3/toiletd for automated flushing. And not a single box has ever crashed, our uptimes look like this:

    9:25am up 37,244 days, 3:38

    And that's the box I had to replace the motherboard on because Linux was only able to keep it running for six weeks while it was on fire."

    Come on people, stand up, say your names, be prepared to have legit journalists approach you in the real world.

  • I've done similar apps using the technologies you mention. It should be better than 90 minutes.

    Have you been monkeying around "tuning" the ODBC network params? That one's bitten me plenty of times on NT with Sybase (and various platforms with Oracle). Being a long-time UNIX guy, I have the attitude that parameters are there to be tweaked. Sometimes, with large commercial software systems, you're best off not tweaking until everything is stable, and then very carefully.
  • > at the command line, getting crap like RTFM (when, in fact, there is no definitive M), and trying to configure ridiculously obfuscated network settings, I'm ready to go back to windows.

    You need to get over the hump, bro. A lot of your problem is probably learning where the docs are (they're there aplenty, just not in an expensively bound, nicely printed manual). You've also got to expect to pay some dues and learn how a different system works. It's like learning a new (human) language -- some of it is logical, but a lot of it is arbitrary; it's important to realize that your native language is pretty arbitrary too. It takes a while until you start thinking in the new language. Until that point, you will seem dumber than you are in your native language.

    > That's what linux is missing, and may never have. In windows 9x, a few clicks and a reboot is all it takes to get a workstation on the network. It's basically the same with NT, just a little more technical, for control purposes. In linux, you have to ensure that the damned OS works with most of the hardware in your box, then play with text files all day until you think you've got it.

    Don't mix up your learning curve with the cost of installation. When you get into more of a production mode, Linux installs are really much easier. For example, you are complaining about using text files, but what kind of configuration could be easier than copying the relevant files, which in 99% of the cases can be identical? If you are doing a lot of them, you can simply script the whole thing, which beats having to check every few seconds on whether the computer is waiting for your click. Of course, as you point out, the hardest thing is going to be supporting oddball hardware. All hardware manufacturers provide Windows drivers, but Linux drivers are provided by people who want the device to work under Linux. The upshot is that you're in trouble if you like to bottom-fish for a completely different set of the cheapest components you can find on every new box you do. So, go with quality hardware, and try to standardize your boxes; or at least check beforehand to see if the ISA-slot modem you're installing is supported.

    > but at least it has friendly support (you people could work on this one), copious amounts of software, and configures with just a few clicks of the mouse button

    Well, I for one have found the free Linux support more friendly and responsive than commercial tech support I'm paying good $$ for. But, you have to remember it's free. This means putting your sweat equity into fixing your own problem first, and then taking the effort writing a clear and concise description of what you are trying to do and where you got stuck, with the minimum amount of bitching about how stupid Linux must be because you already know how to do this on Windows. In other words, make it easy and pleasant for someone to help you.

    If you come to the table with a chip on your shoulder (as users of commercial support feel they are entitled to do), then you will get your rudeness shoved right back at you (you might not be aware that you are being rude, but ask anyone who's worked tech support).

    So brush up on the charm. It used to be that computers were a haven for people too socially maladjusted to function anywhere else. No more!

  • OK, here's where I look for documentation:

    (1) the "apropos" command, followed by the "man" command. The "apropos" command tells you the names of other commands whihc have the keyword you supply; the "man" command gives you the manual entry for a command. Sometimes you'll strike out here, for example trying to figure out what you need to configure Apache, "apropos apache" will give you lots of stuff you don't need, and none of what you do need. However, if you're wondering about how to track memory, "apropos memory" will tell you all the commands that mention memory, one of which is the very useful "vmstat" -- try "man vmstat" and you have a very detailed memory. With time, as you absorb Unix jargon, this gets better. For example, you might want to know about commands telling you what jobs to run, and strike out with "apropos jobs"; however, "apropos process" leads you to the very useful "ps" command (for "process status"). Man is where you go to find out all the bells and whistles for commands. Familiarize yourself with the command "grep", which will be useful in the next step.

    (2) the /usr/doc tree, where you'll find detailed descriptions of various software systems included in Linux. Especially, get to know /usr/doc/HOWTO/, which tells you how to do things like set up a DHCP server or how to turn your Linux box into a firewall. Think of this as Linux's answer to the "resource kit". Once you've mastered grep, and the "find" command, you'll find this a very fruitful place to look. Is working this way obscure? You betcha! But it works; almost always the answer you want is going to be found here, written in hacklish, to be sure, but it's here.

    (3) the configuration files themselves offer guidance, once you figure out which ones to tweak from man and /usr/doc. These often contain documentation on what the various config file settings do, with examples all worked out -- just delete the '#' that turns the example into a comment. This is especially true of the sample Apache config files.

    (4) the home pages of the open source component being configured (e.g. www.apache.org). Note that Apache, for one, delivers its documentation as HTML files. A little problematic if you haven't got the web server running and you don't have a browser you know how to run installed on your Linux box. However, the documentation is, in a word, superb.

    (5) Books from O'Reilly with animals on the cover.
    Years ago, I learned Unix from osmosis, and a book called "The Unix Programming Environment". Any recommendations for NT converts? In keeping with the spirit of Unix documentation, I've listed this as 5 but this will be the first place you will want to look, unless even better you have a friend who's a Linux wiz.

    (6) The source code. I'm a programmer, but I don't much refer to the source code for most Linux programs unless I need to change it. However, it's nice to know the source is there and somebody could figure it out if they were sufficiently motivated. The answer to your problem is never a trade secret.
  • Well, no. Only your domain controllers have to be Win2k to run in native mode. And native mode doesn't really buy you most of the "strong points". What you probably meant to say is that you have to have Win2k on the desktop to get a lot of the nifty stuff. Which means that there's going to be a long, long transition time for most organizations.
  • The master process of Apache is not designed to be rebooted all the time. Since Apache allows external modules, it is safe to assume that those modules can introduce memory leaks. I think it is a clever design to let child processes die when there is not much traffic.

    The other part of this is that it is possible to have a scheduling algorithm which disfavours older processes, especially those with accumulated CPU time.

  • Linux is a unix. in general most unix users inherently like other unixes. even if BSD would get lots of heat from linux (as they already do)..its still a unix. i dont really care who wins - as long as its a unix based derivative.
  • yup. actually solaris is fairly rock solid and ive never had it crash on me (same said for linux and OSF/1... unfortunately not for IRIX). solaris *does* make you reboot more often, though (things like replacing a keyboard on a sparc box cause it to go to the PROM and hang... and various other socket binding problems and other shit with javawebserver etc etc).
  • *cough* *cough* 95,98 and MacOS (until 9) all crash in their idle loop.
  • i would if it actually accepted keystrokes. unfortunately it doesn't seem to. so i can't type go :(. any other ideas?
  • Yes... but win3.1/95 would invariably go down after a while... 1.2.13 may not have had the software support at the time, but it was really stable.

    I think that comparing NT to Linux in this test should be considered fair if they were asked to perform the same tasks over the testing span. Both are touted as web/file/print servers, so the comparison would be valid...
  • > If I ssh into my webserver to do some remote admin, and I segv linuxconf (as if I'd use it, but...), I can be safe in the knowledge that Apache is still running.
    Can you imagine what happens when one of the services that get loaded automatically manifests some bizarre bug which crashes the RPC engine each time it loads?

    I've seen this, and by the way, all those fancy GUI admin tools, including the control panel's 'services' applet, use RPC in some form or another. I'd stop the guilty service from loading if only I could :)

    I wonder if the MICRO~1 engineers have ever heard of "single user mode"?

  • > Which is actually - believe it or not - the same solution that Apache uses. It'll reboot itself now and then to reduce memory leaks.
    You seem to misunderstand the Apache structure. Apache does not run as a single process, but as a collection of processes that are spawned as load demands. You can, if you wish, set an option so that child processes shut down after servicing some set number of requests. This does not affect Apache as a whole, as other processes are still running and servicing requests, and new processes will come online as needed.

    The documents justify including this option by claiming that certain platforms have memory leaks in their libraries (Sun is singled out in the latest docs I've read). I'm not aware that the memory leaks originate in Apache itself. Nor am I aware that any open source OSes have this problem.
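
    For the curious, the option being described is Apache's MaxRequestsPerChild directive; a minimal httpd.conf sketch (the number is arbitrary):

        # Each child exits after serving this many requests, so memory
        # leaked by platform libraries (or a third-party module) is handed
        # back to the OS without ever touching the master process.
        MaxRequestsPerChild 10000
        # A value of 0 means "never exit"; the docs suggest a finite value
        # on platforms with leaky libraries.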

  • "When you try something new. It wants you to reboot all the time."

    Actually, IMHE, there are only a few times when you need to reboot an NT box. Updating some required .dll's, changing hardware, etc.

    I know that in the versions of InstallShield I've used to build installs, at the very end there is a checkbox asking whether the user should reboot their machine. The default is 'yes'. 9 times out of 10, if you know what is being installed, and there's usually a list hanging around, you don't have to reboot the machine.

    I do have problems with getting hardware to work on NT boxes, as well as some software, but only rarely is it NT's fault.

    BTW, I don't work for MS, I've just used their systems a lot, and I do wish they worked better.
  • "So like if a new device were to come along, the manufacturer could write a driver without any NT source"

    This is because MS does not want to give their source to every Tom, Dick, and Harry who wants to write a driver. I really don't see it as an advantage. You can get the source for Linux and can't get it for NT; therefore they have different development models.
  • "Not, how do you explain the thousands of drivers for NT?"

    The reason there are thousands of drivers for NT is because MS has a monopoly, not because the drivers are easier to develop. Also, NT has been around for a long time now and Linux is just starting to get popular. In a couple of years both will have the same number of drivers.
  • With 28,000 known (real) bugs before release and some unknown number of undiscovered bugs lurking in the software, how many do you think there really are?

    My guess is that there are more than one undiscovered real bug for each known bug. Even that (conservative) guess would put the minimum number of real bugs right around the 64K number.
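    (Making the arithmetic explicit: $28{,}000 \times (1 + 1) = 56{,}000$, which is indeed in the neighborhood of the 64K figure.)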

    Realistically, however, there are likely to be far more than one unknown bug for each known one at this stage in W2K's life. But enough of my guesses. How would you estimate the number of real bugs likely to be lurking in W2K?
  • No, it's not time travel, although that is my hobby.

    If you care to read my comment properly you'll see that the article I was referring to was A DIFFERENT ARTICLE about the SAME BLOOR RESEARCH REPORT.

    It was from the Computer Weekly magazine website, and was published many, many months ago. The article in the main story may well be new, but it's old news.
  • I submitted this as a Slashdot story about a year ago, alongside a different computer journal article from people that actually got fresh news, rather than dead meat.

    Possibly the story was rejected then because Bloor Research could have said *ANYTHING* about Linux - you have to pay to read the report. Other institutions (VNU Labs, Ziff-Davis) have produced just as important research, made it freely available through various media formats, and are probably as reputable, if not MORE reputable, than Bloor Research.

    It seems Bloor Research (who had heard of them before today?) have got more promotion from simply doing ANY report on Linux than they have created for themselves in the past. Yup, they're just jumping on that old bandwagon.
  • A competent person or organization would have removed windows before the 68th reboot. And probably before the first, too.
  • Look, why do we always insist on arguing about silly things like this. Emacs vs. ViM, Mutt vs. Pine, Linux vs. everything. Goddamit! IMHO you should tell people your likes and dislikes about a particular package, and then let them decide. It is hardly productive to flame. That makes people resentful. Linux will never replace windows on the desktop, but that's o.k. It is meant for other things just as ViM is meant for different things than emacsen. Don't you guys ever get tired of these old arguments?
  • Wait till it's up against win2k, 99% odd problems!
  • I am troubled by the denial-of-service attacks against Yahoo and others this month.

    I read a statement that the attackers were obviously knowledgeable about both Unix and networks.

    That suggests to me that the attackers were able to plant their zombie programs on Unix machines but not on NT ones.

    I think that Linux is superior because it is GPL'd and one rarely if ever needs to buy software for it.

    But I am not sure that Linux or Unix is more secure than NT. The current denial-of-service attacks, which apparently exploit Unix security holes, suggest that Unix, and therefore Linux, may NOT be more secure than NT.

  • NT - Literally, New Technology.

    j.
  • I don't agree with that. Hardware problems are hardware problems. Chances are Linux got away with it because Linux never bothered to use any advanced features of the hardware. For example, there are tonnes of IDE controllers and chipset drivers that come from various manufacturers for NT. Now, I'm not saying this is a software/driver problem; what I'm saying is that these drivers could, for example, do some special new-fangled call to the hardware that could cause the hardware to lock up, get into an unstable state, etc. Windows tends to be more 'bleeding edge' because of the huge hardware manufacturer support, while Linux most of the time runs vanilla chipset drivers. That's just one example, but I think it's a valid one.
  • Explain what a "default kernel" in Linux is, "moron".
    The NT kernel is optimized all the time; we have ring0 drivers, VxDs, etc. We don't have a monolithic kernel, you know. That's what this thread was about... not installing the right drivers. There's this amazing thing: we don't have to recompile the kernel to remove a feature!
  • NT has a very nice driver model. So, like, if a new device were to come along, the manufacturer could write a driver without any NT source. As an example (this is 2.0.x), I had to recompile with ipfw options to get ipmasq and ipfwadm to work. With win95 and an unmodified kernel, you can write NAT. Anyway... my point was not to say that Linux "can't" do these things, but to say that just because you don't have NT source doesn't mean you can't "optimize" the "kernel/drivers". I know Linux has kernel modules.
  • Um, so you're saying that Linux doesn't need a decent driver model and abstraction layer because they can always get the source? Uh, no. Manufacturers just want a nice DDK: read the docs, get some examples, and do it.
  • No? How do you explain the thousands of drivers for NT?
    Why don't you look at the DDK for once? It's much nicer than what linux has... which is "take the code, and if you can't do it, you're a stupid idiot".
  • Often, when one product is behind in market share (micros~1 in server space), they will attempt to compare and contrast their product with the leader in that market. This is a common marketing tactic in any market, with any product.

    MS has been claiming unix is dead for more than 15 years, and they continue to try and convince people that they must make a choice between Linux/Unix and NT.

    The real issue here is connectivity and interoperability. When comparing OSes, look for ones that "play well with each other" (netBSD vs FreeBSD vs Red Hat). micros~1 has spent millions and gone *way* out of their way, time and time again, to put barriers between unix and windows, then marketed the difference in the marketplace, asking users to "choose". This fucked-up approach adds to the TCO (total cost of ownership) of window~1.

    So, the short answer to the question is both operating systems have their place.

    1)window~1: clueless newbies
    2)unix: serious internetworking

    The real question is which unix is best for me?
    _________________________

  • What kind of crap hardware were they running? Not to necessarily defend NT, but it sounds like many of the problems could have been related to whatever hardware they picked.


    --

  • I wonder if the 65 hours of downtime included the weekly reboots recommended for proper NT system maintenance?

    Adding up 52 of these would certainly add a little more downtime.

  • No, but it doesn't matter, because Linux won.

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...