NT vs. Linux: Again

Jeff Molloy writes "The results are here [link]." It's a shame Linux didn't win, but it looks like the tests show where Linux might have some deficiencies. Overall, it looks better than the original test, though.
  • Wouldn't more people benefit from MS management seeing to it that some QA/testing people get hired?

    How about fewer new "features" and better implementation of those that are already there?

  • 99.9% uptime would be an MTBF of a bit over 40 days. Not exactly space-shot quality, but acceptable for e-commerce. It will have to do better to work in a telco CO, but I would bet that NT-embedded, with the unneeded bits removed, would do pretty well. It looks as if each OS has its own deficiencies, among them: NT clustering is way behind, and Linux has some key kernel bottlenecks for high volume transaction applications. Is it that surprising to find that operating systems put their pants on one leg at a time?
  • >Currently the VC++ from Microsoft produces far superior x86 code to GCC's. I'm surprised that nobody has commented on this fact yet.

    Why should anyone? VC++ is basically a one-platform solution (x86) while GCC pretty much runs on everything under the sun. GCC runs on the Amiga. Does VC++? Nope. When you are dealing with a compiler that runs on multiple platforms/processors, the kind of optimization you are talking about can be a *REAL* headache to deal with and should be steered clear of...
  • It's a strategy for choosing which requests to serve and which requests to ignore for the time being- and if this is so hard to understand, I imagine the tested version of NT _is_ doing this because (a) Microsoft people are _not_ stupid, and (b) Microsoft people will always cheat given the opportunity
    This is a pyrrhic cheat- you can't use it on a real web server. It has nothing to do with CPU scheduling and is purely a hack to optimize benchmarks for intranet requests

    How do you know MS is doing this (cheating)? Can't you just say, oh, well, our TCP/IP stack needs to be multithreaded? It seems like you are trying to mislead, introducing little hints here and there that this was all faked,
    and that CPU scheduling is the only consideration in doing this, because the only algorithm that exists is serve-upon-request?

    In that case, it would be interesting to see Linux against NT running a different web server. We've already seen that the bottleneck exists in Linux even when using a different web server. Certainly, if we were to see that, say, Solaris kicked NT and Linux's ass, you wouldn't suggest it had something to do with Sun running round cheating.
    Since it was shown (dispute it as you wish) that Linux's bottleneck is its TCP/IP stack, I don't see that your argument about algorithms has any relevance in this thread.
  • I've heard a fair number of arguments here claiming the test to be unfair due to the fact that the "hardware was chosen for an NT advantage".

    I'll neither agree nor disagree with that as I have no knowledge in the subject. What I will do is to offer this thought:

    If this hardware really was where NT shines, what happens when linux gets tweaked to take better advantage of it? The Microsoft folks have nowhere else to go.

    So, I say to you folks, take heart. Accept this setback for it is not defeat. Remember, that which does not kill us...

  • I don't care how sunny and bright and beautiful your little M$ world is. Enjoy your job fucking people over for a living and being part of one of the most greedy fucked up organizations in the world.
    Why don't you just stay at winfiles.com and stay the fuck away from slashdot?
  • I remember a project where a person did a clean install of NT on a totally fresh hard disk, put it in a room by itself, unplugged the keyboard, mouse, network, etc. and did basically NOTHING on it. Guess what? It crashed after 53 days (or something around that). As long as you don't use an unstable kernel driver, and don't do everything as root (as some people appear to do; I must admit I do it myself...), Linux is dead stable. At least it won't crash doing nothing!

    /* Steinar */

  • Microsoft will just keep inventing benchmarks that happen to
    make NT look better. Nothing can be done about it, other
    than observing that those benchmarks will become less
    realistic every month.

    The only way around this would be for Linux (Apache and
    Samba) to copy the same "unapproved benchmark veto"
    clause which makes the publication of truly independent
    benchmarks unlikely.
  • I'd rather do it with a human being(female of course). I wouldn't trust a guy touching my internal organs
  • Very interesting comment.

    All I'm saying is that "Linux is free and NT costs $$$" is NOT a very good argument.
  • >Linux uses the forked process model to provide services to multiple users. This model achieves stability in that if one process dies, the others continue as if nothing had happened. Both Apache and SAMBA operate in this way I believe.

    >NT has chosen performance over stability.

    I wouldn't put it that way... Perhaps a better way to phrase it would be: Linux has chosen a pessimistic approach to application stability over general performance.

    Linux's use of processes vs threads only has merit if you assume that the processes you are running have bugs (and will crash). It really seems to suit the open-source model to be more optimistic concerning application code and give it the benefit of the doubt along with a hefty performance boost (in the form of threading). Does anyone doubt that Apache or Zeus could use the thread model, remain stable, and thereby match NT's performance?
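The process-vs-thread trade-off described above can be sketched in a few lines. This is an illustrative example (not from any post in the thread), using Python's standard socketserver module, where swapping one base class switches between the forked-process model credited to Apache/Samba and the threaded model credited to NT:

```python
# Sketch of the two concurrency models: the same echo handler served
# process-per-connection vs. thread-per-connection.
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # A bug that crashes this code kills only one worker process
        # under the forking model; under the threading model it shares
        # an address space with every other request being served.
        data = self.rfile.readline()
        self.wfile.write(data)

# One process per connection: isolated, but fork() costs more per request.
class ForkingEchoServer(socketserver.ForkingTCPServer):
    allow_reuse_address = True

# One thread per connection: cheaper to spawn, but a crash or memory
# stomp can take the whole server down.
class ThreadingEchoServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
```

The handler code is identical in both cases; only the failure isolation and per-request cost differ, which is exactly the trade-off the comment describes.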

  • Many of us met and listened to Redhat last night at the Miami roadshow. They accepted the benchmarks as accurate. They also reminded us to be like Linus and keep a sense of humor and perspective about the whole thing.

    To paraphrase Mr. Torvalds....

    Microsoft is just being a good Linux user and reporting bugs. The same can be said about ZDnet.

    Getting your butt kicked every once in awhile (metaphorically speaking) can be a good thing. It keeps one from becoming arrogant and complacent. It can motivate you to do better and try harder.

    Just think how bad American cars would still be if the Japanese hadn't come into the market. (Not to say they are the best, but they are a hell of a lot better than they were 10 years ago).
  • Last night at the Miami Redhat roadshow, questions were asked about the Mindcraft benchmarks.

    They said one very important thing. Mindcraft was not able to duplicate their results.

    About the ZDnet benchmarks: they happen to agree with them, but not in a negative way.

    They reminded us to be like Linus, who has kept a sense of humor and perspective about all this.

    According to the folks at Redhat, Linus said....

    "Microsoft is just being a good Linux user and reporting bugs. You can say the same about ZDNet."

    I believe that ZDnet was trying to be as fair as possible. They firmly believe in the future of Linux, and have stated that publicly several times. They are doing their part in helping it to become a better OS through constructive criticism.

    IMHO MS is going to lose out in the long run as long as the Linux community remains honest about its shortcomings. No multimillion dollar spin masters to hide the warts. No FUD. Just keep getting better and better, and Linux will win.
  • Things have changed after these Linux-NT tests ...
    I would never expect anyone on Slashdot to write "
    If raw speed is your monkey, then NT is the tool. "

    Hehehe... Well, but we can't dispute that. Right now NT is faster.
  • I run W2K 24x7 as a desktop OS for development. The only crashes I've had were due to an immature NVidia TNT driver (video drivers bypass the HAL and can therefore bring the system down). I've recently installed NVidia's updated driver and the crashes have disappeared. I think M$ may yet pull it off w/ W2K -- at least they'll put out a kernel (VMM, FS, net stack, etc.) that will be stable and high performance.

  • Great enthusiasm - you sound like Steve Ballmer :) The problem here is that you're wrong on several key points.

    LOL ;)
    1. Win2K's interface is not improved. It sucks. NT4 was good. I get paid to admin NT4. I like NT4. Win2k is a major step backwards in usability. The ungodly number of wizards in NT5 (oops, Win2k) makes it impossible to do any real work. Sure you can turn them off, but the mere sight of them drives me nuts - it's like having 4000 of those fscking dancing paperclips. This is supposed to be a server OS - wizards don't belong on a server OS.

    I have to disagree there. I think the W2K GUI is a slight improvement (GUI here, not tools). The new MMC is great. You can administer everything from one program (including adding devices, reading event logs, adding users, making shares etc). What's more, you can do it to remote machines...seamlessly.

    2. Win2k's performance. This sucks too. Win2k takes ages to boot. Once it's up, using office 2k takes far longer than NT4 + Office97 ever did. My box is a PII 450 128Mb of RAM, I know that's not enough for the 2k products, but the company won't splurge for an upgrade.

    I think it's wonderful!!! My W2K box boots in no time, and it boots in even less time when I use the cool new hibernate (memory to disk) feature. yummy. I'm running a K6-200 with 192MB ram (did have 64MB, but it was a bit sluggish with all the services installed).
    3. Stability. This is anecdotal, but I've had more lockups (5) and blue screens (1) with NT5 than I had on the same box with NT4 (3 lockups and 0 BSODs) - admittedly it's still in beta.

    Wow, I haven't had any bluescreens except one where I installed an unsigned NT4 driver I was warned not to install...after that, everything else was perfect...been running for weeks with no problems with BSODs (it's more purple now tho :P).

    4. Ease of development. There is a special place in the most fiery pit of hell for someone who names a function RegisterServiceCtrlHandlerW(). Don't tell me that Win32 makes life easier for developers. It spawns carpal tunnel is what it does.

    Again I disagree: Windows is the most developer-friendly OS...even 90% of *those* Java developers use Windows. It's got brilliant IDEs, which make up for the long API names, but remember, VC++ has IntelliSense, so you don't have to spend too much time typing, or going round documentation trying to remember what arguments you need to pass.
    I'd gladly have long function names rather than horrible IDEs without IntelliSense! Besides, RegisterServiceCtrlHandlerW makes perfect sense ;) I presume the W is for wide chars.
    As for your experience at MS...uh, ;) *side note* Microsoft is such a fascinating company to follow...it has such interesting people, like Ballmer, Gates, Allen (who has disappeared off the face of the earth recently) etc...a bunch of geeks (some more than others) becoming billionaires.
  • It's time we non-M$-haters came out of the woodwork on /. Don't get me wrong... I love the open-source movement and approach.

    Meanwhile, though, I run W2K beta 3 as a development system to be productive, and it does everything I need it to do, quickly and without bugs or crashes. Harrumph.

  • I do remember. Microsoft was right about that one...
  • Actually, Microsoft does not fear re-use of old PCs with Linux. If anything, this reduces the culture of software piracy in the developing world. This is also a tiny subculture compared with 10 million new PCs manufactured every month (more than TVs now). This is, as far as I can tell, the central story of Linux. It has been covered. And it is well known by most people. Microsoft should fear, and does fear, anything that might catch up to them in terms of the total value proposition, including performance, reliability, capabilities, available applications, etc. in markets where money can be made.
  • Nah, that's the wrong way to go... Notice that the 'low-end' system was still spankin' new hardware which probably ran well into four digits. The lovely thing about Linux is that you can run a web server (not slashdot, I'm sure, but a web server) on a sub-$1k box. Obsolete hardware is obsolete no more. _This_ is Linux's main source of strength.

    Of course, that doesn't mean it wouldn't hurt to beef up the scalability, and that's what's planned for 2.4/3.0 anyhoo...

    -grendel drago
  • I may have disliked the tone of the Mindcraft test, but they did provide the documents for the entire test configuration. For personal satisfaction, when can we see the equivalent documents from this PC Week test? I don't think PC Week is holding this information back for any other reason than that they forgot how interested most of us are in the details. PC Week... publish the details. Please. :)
  • Okay, try Interbase. It's much faster, and works on Linux/NT and others.
  • "...I think that it's safe to say that noone in the world gets more than 150 million hits per day of static content..."
    "...Oh, if you're serving up >1800 files per second of 2k files, who are you?..."

    Flying Crocodile, Inc. (www.flyingcroc.com). We have FreeBSD/Apache machines running at 27Mb/s. This is on off-the-shelf Pent-II boxes; OK, they have 1GB of RAM and UW-SCSI drives... but single-CPU boards. With the new FreeBSD 3.2-RELEASE the above mentioned box runs with a load of 3.3 and over half the CPU idle.

    The numbers that you talk about are the numbers that I deal with everyday. We would never think of running NT. With over 130 servers we couldn't afford the massive staff to sit around and reboot the boxes all day and night... that and I would hate to have to wire a monitor/keyboard/mouse to each box!

    The servers together do over 145M hits per day and thank god the Cisco GSR12008 is shipping next week, the three 7507's are hammered!

    "...Oh, one more thing. If this is all on an intranet, you'll still need Gigabit ethernet if you're serving up the 10k+ files..."

    More like 2 GE links to Frontier and Teleglobe, with various other T3s, getting the job done. BTW: before you start crunching numbers, not all of the 130 servers are cranking 27Mb/s; many are doing massive database work.

    Another interesting number, at our peak in the day we route about 79,000 packets per second, figure that about 1/4 of those are http requests. Peak total for today was 419Mb/s using mrtg.

    IMHO: I used to be a Linux nut, and still use it for desktop work, but FreeBSD kicks ass when it comes to serving. If you think my numbers are crazy, Yahoo trucks twice the bandwidth; no wonder they use FreeBSD too.

    The tests have been done by those whose business is to crank out the hits. Most use neither NT nor Linux.

    If you doubt this post, just do a looking-glass query on the connections.
  • And the emperor (*place-any-MS-competitor-who-is-whining-to-the-government-here*) was really the bad guy.
  • It did mention that they tried Zeus, the other web server suggested by Linux advocates. But the cutoff point was yet again present. The source of the performance block was the TCP/IP stack, not the web server.
  • HAHAHA!!!! yeah.

    at my UNM, just the SRC computer pod with 16 clients seems somewhat confused if you do something rash on one of them, like dare to open netscape or something. NT sucks.
  • I get 40 days (41 2/3) for MTBF only by assuming it will then be down for a whole hour: not a valid assumption, I guess. Assuming five minutes of downtime gives the 3-4 days mentioned earlier.
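The arithmetic behind those figures can be made explicit. A minimal sketch, assuming (as the comment does) that MTBF is the length of a full failure-to-failure cycle, so availability A and a fixed outage length MTTR give MTBF = MTTR / (1 - A):

```python
# Back-of-the-envelope availability arithmetic (assumptions as stated
# above, not a formal reliability model).
def mtbf_days(availability, mttr_hours):
    """Failure-to-failure interval implied by an uptime fraction
    and a fixed outage length."""
    mtbf_hours = mttr_hours / (1.0 - availability)
    return mtbf_hours / 24.0

# 99.9% uptime with one-hour outages -> ~41 2/3 days between failures.
print(round(mtbf_days(0.999, 1.0), 2))     # 41.67
# The same uptime with five-minute outages -> the 3-4 days cited.
print(round(mtbf_days(0.999, 5 / 60), 2))  # 3.47
```

The point of the exercise: "99.9% uptime" says nothing about failure frequency until you also fix how long each outage lasts.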
  • Yes yes... this test did not consider every possible variable for every conceivable hardware setup for all time for all people etc. etc...

    My point is this test only set out to show that given a certain hardware setup with excellent *theoretic* performance for an interesting task (web and file serving) the OS and application setup that gives the best *actual* performance is XYZ. In this particular case XYZ happened to be NT with IIS.

    There are a significant number of supplementary issues that any potential OS customer must consider in addition to the information derived from the results of this test. That should in no way detract from the importance of the results of this test.

    And importantly, as members of and potential contributors to the open-source movement, the results of this test give us an excellent report card on the progress of Linux development. I really don't think the results of the test should be excused away for any reason.

  • Multithreading is a good design -- sometimes. Multi-tasking is a good design too -- sometimes.

    When two or more tasks legitimately belong in the same address space (e.g. must manipulate the same objects in memory), sure, multithreading is the way to go.

    When you're just kludging a substitute for fork() because spawning processes is too expensive on your architecture, that's not so good from a stability or security standpoint, even if spawning new threads is cheaper.

    None of this has anything to do with lack of multithreading in the Linux IP stack, which undoubtedly would be a good thing.
  • Well, it was almost 3 years ago, but BSDi did this.
    Tests were done on P133s with 64M RAM, and BSDi
    walked all over NT.


    Would be nice to see a rematch.
  • I just read a Web-Benchmark in c't (http://www.heise.de).

    They used an SMP patch, and then Linux was at least as fast as NT in every test, as long as there was only one network card in the computer. With two network cards, NT was about 100% faster than Linux. NT was very, very slow when using Perl scripts.

    They also showed a test of Mac OS X. The results showed very good performance from Mac OS X, although the Mac hardware was not comparable with the hardware used for the NT and Linux systems (i.e. 128MB RAM in the Mac vs. 2GB in the NT/Linux server). But it seems that Mac OS X has a bug which causes a "system panic" when using certain CGI scripts.
  • Been working with NT (unfortunately) for five years now. SMB is integrated with the Server Service, which is in kernel-land. This explains the speed.
    They claimed that if the Linux box were tweaked, Linux
    would win... which it did not.
    Has the full configuration and "tweak list" been published at
    all? AFAIK we only know that the Linux team was specifically
    forbidden from performing certain tweaks.

    Also, were any of the tests carried out with ordinary NT,
    which is a far better match to RedHat 6 than NT Enterprise?

    Or maybe someone should have got them to build a
    "RedHat Enterprise" everything compiled for these
    high end machines. (And incapable of running on
    low end machines.)

    Another of the original issues was logging: if this is
    being done properly then every SMB or HTTP connection
    will generate a synchronous write to disk. This will
    slow things down. AFAIK NT doesn't log anything to do
    with file sharing by default. In the original tests IIS was
    placed in a mode of buffering the logging information
    and writing in chunks. (In the real world you may as well
    turn off the logging altogether as use this option.) Did
    Apache and Samba have all logging turned off (or
    directed to /dev/null)?
  • Hi folks!

    I think one has to accept that at the moment NT is slightly faster considering the maximum output of a web or file server.

    But as the German PC weekly "c't" found in their own benchmarks (issue 13/99, pp. 186), Linux is still a very good choice under real world conditions. They tested SuSE Linux 6.1 and NT 4.0 SP4 on a 4-XEON-450 Siemens machine. The main difference between their configuration and the Mindcraft one was that they just had one Ethernet card (instead of FOUR!) in the system.

    They said it was not realistic (except for a few intranets maybe) that a web server has to serve more than 100 MBit/s or even more than 10 MBit/s. Under these circumstances Linux was slightly faster with static web pages and much faster with serving CGI. However, c't didn't test MS IIS with ASP (hard to find a fair benchmark between Perl/CGI and ASP anyway).

    Only when they tested the system with a second Ethernet card, simulating similar loads to the ones in the Mindcraft tests, was NT significantly better (and it scaled across the CPUs much better than Linux).

    What they also found out is that NT was much worse at serving from the HD instead of from memory (maybe because they also used one big partition instead of smaller ones, which seems to slow down NTFS). The bottom line: Linux with Apache is a very suitable and fast system for real-world (mid-size) web serving needs, mainly if you have to deal with a lot of dynamic pages (like on Slashdot ...), but the main findings of the Mindcraft study are true under the given test circumstances.
  • Posted by FascDot Killed My Previous Use:

    Why aren't they back-porting that multi-threaded IP bug thing to 2.2?
    Put Hemos through English 101!
  • OK, kernel guys (and people who have setups that can
    test this sort of thing): we now know where the problems are (OK, we already did). Let's fix them and then challenge for a rematch.
  • The obvious conclusion to all of this is that MS is scared. Very scared. They want to get every ounce of performance out of NT and did so to do better in the SAMBA tests.

    I think Linux will come out ahead in the long run. It's only a matter of time.

  • Microsoft has claimed this round, but I believe that in the coming years Linux will begin to outshine NT/2000 in many respects. Remember that NT has been running an SMP kernel for several years and has gone through several service packs to get to where it is today. Thus it is no great surprise that Linux loses on a beefy SMP box.

    It does disturb me somewhat to see that Linux loses on the single proc box, but this seems to come down to the tuning. Out of the box, Linux is the faster (as other benchmarks have illustrated), but when tuned, NT is better.

    I think they ought to make this an annual competition and see how they match up every year. I bet next year the results won't be so slanted in MS's favor.


  • by Elmo ( 7644 ) on Friday June 25, 1999 @12:45PM (#1832113) Homepage
    Before everyone starts flaming ZD and yelling foul play...again, why don't we actually do something about it?

    If you can't help program, then go out and test all this new stuff and send in bug reports. Let's have Linux set the standard again. It seems like, according to the article, it was this way once and we lost it because Microsoft has pushed the bar a little higher and we lagged behind.

  • IIRC Linus once said:
    "Linux has a micro kernel. There are only these things in it, which are needed".
  • your argument would carry a lot more weight if you didn't hide behind the "AC" posting. Register @ Slashdot and people might then read what you say.

  • Unfortunately, I wasn't referring to anything so fancy. More just being sarcastic, because the throughput difference between Apache and IIS is hardly ever going to be the deciding factor.

    (In the largest NT/IIS setup I've seen, there were three actual web servers. They were 'clustered' only on the switch level. The assumption was that one of servers would be down at any given point in time. A desktop box was running software which checked if IIS was running, and if it had died, attempted to restart the service. If that failed, it rebooted the box.)
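The watchdog box described in that parenthetical is simple to reconstruct. A minimal sketch, assuming a hypothetical health-check URL and restart/reboot commands (none of these names come from the actual setup):

```python
# Hypothetical watchdog loop: poll the web server; restart the service
# if it stops answering; reboot the box if the restart fails too.
import subprocess
import time
import urllib.request

def is_healthy(url, timeout=2.0):
    """True if the server answers the request with a success status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # connection refused, timeout, HTTP error status, ...
        return False

def watch(url, restart_cmd, reboot_cmd, interval=30.0):
    """Poll forever; escalate from a service restart to a reboot."""
    while True:
        if not is_healthy(url):
            # First try restarting just the service...
            if subprocess.call(restart_cmd) != 0:
                # ...then fall back to rebooting the whole machine.
                subprocess.call(reboot_cmd)
        time.sleep(interval)
```

The design point the anecdote makes stands out here: the "cluster" has no shared state at all; availability comes entirely from this external restart loop plus the switch spreading requests across the surviving boxes.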
  • PR can eat itself. Linux is uniquely able to ignore the PR wars and win in spite of them.

    At the end of the day, the smart companies have only two questions about IS technology:

    1: Can I do more with this?

    2: Can I do the same job cheaper with this?

    All the other numbers are indirect data, trash talk. Management--especially smart management, doesn't directly care about MIPS, MTBF, or benchmark numbers. They care about the two questions above, and care about the other numbers indirectly because those numbers tend to be good predictors of the answers to the real questions. In this business, when almost everything is potential, these early indicators are very important, because you can't get good answers to the top two questions.

    You have the same thing in sports. You can measure free-throw percentage, height, weight, slugging average, save percentage, and a host of other details. But at the end of the day, only one question matters: How often do you win? All the rest are trash-talk numbers--good predictors, but not the bottom line.

    In sports and business, you have to have those trash-talk numbers for people to give you a chance. If you weigh a trim 175lb, nobody in their right mind is going to make a nose tackle out of you--you won't get the chance to show the coach that you can topple the 325lb center. If a product has enough benchmarks damning it, the vendor will pull support and recoup its losses.

    This is why Linux can ignore the trash-talk and go straight to increasing capabilities and lowering costs. Linux isn't a business; vendors cannot cut all support. Nobody has the power to tell Linux that it cannot enter the IS world. It can't get cut, and can only get discontinued if every Linux geek in Creation decides to spontaneously drop it. Red Hat and Caldera can go belly-up, Torvalds and Cox could be swallowed up in earthquakes, and Linux will keep on existing.

    So long as Linux exists, it can win. With the development advantages it has, it can win well. It needs a foothold in some IS shops; it's getting that, or has already gotten that.

    If Linux wins, it is going to start by revolutionizing an IS department. Some big gun like AOL will see the potential and let it start taking over the infrastructure. It will work. Forget the runtime, forget the performance, it will do the job for cheaper. In the business world, such success gets copied. People look at the company that pulls this off, ask how they do it, and see a room full of Linux boxen.

    The IT budget will convince more smart managers than any amount of benchmarking will.

    PR is still relevant, but only in the short term. Good or bad PR can accelerate or slow the rate of Linux installation. In the long term, however, the success of Linux will have nothing to do with the benchmark numbers and have everything to do with the budget numbers. If Linux can do the job cheaper, it will win. If it can't, it will remain a hobby OS.

    But the good news is that, unlike a corporate product, short term effects cannot destroy the long term picture. Linux will have all the time it needs to fit into the corporate structure to its best abilities.

  • Wow... It's taking Microsoft 4 CPUs, 4GB of RAM, high performance RAID controllers, quad NICs and a team of MS's top software engineers dedicated to tuning the system to beat Linux.

    Who in real life will have this kind of hardware and get that kind of support from Microsoft?

    Boy are Microsoft in trouble! :)
  • Whilst the tests have highlighted some 'limitations' of Linux in a lab, and whilst this looks bad for Linux and is something that we the Open Source Community should address seriously if we wish to gain widespread acceptance in the business world, it really fails to represent reality. My company is using a dual P90 as its main webserver, not a quad P500. If we got 1000 hits a second on our webpages we would probably die of heart failure. I saw a report that suggested that even the Charles Schwab e-trade site only gets approx. 642 hits a second. Looking at it one way, the test showed that Linux for $0 is perfectly capable of performing the same job as NT for $1500 (once you add in mail etc.). Whilst NT might be faster in _HUGE_ volume tests, in the real world there is probably little or no difference except in the area of stability.

    However... for very small organisations: I run an ftp server, web server, internal DNS, NIS, SMB (there is one Win95 machine) etc. for a small network comprised almost entirely of old 486s w/16MB mem and 400MB HDs, and Linux is the _only_ choice. NT wouldn't run on these machines and 95 isn't pretty! My home LAN cost me less than $300 for 6 machines, hub, cables and all, plus $1600 for my main system (which I bought new and is now a rather outdated P166 w/8.4G HD + 3.5G HD, 64MB RAM) (7 machines in total). It performs great for my needs. If I were a small business, I think that I would have to think twice or three times before outlaying large sums of money to M$ for a system that was so over my needs, instead of using a system that would cost me so little.

    Linux offers computing 'solutions' where NT offers computing 'problems'.

    Bang/$, Linux will always win. Cost of upgrade since Linux 1.x software and all... $0

    Cost of upgrade since DOS for M$, software and all?...
    anyone care to speculate?

    Do we need to improve Linux's high-end performance just for the sake of benchmarks? Possibly not, but it wouldn't hurt.

  • I don't know if this helps, but I've been able to trace a couple "solid lock" NT problems to SCSI cabling problems. One of these was on a new Dell server that shipped with a loose cable. NT doesn't seem to handle SCSI issues very well.
  • It's so nice of Microsoft to pay for this Apache advertising. Just as a point of reference, 1800 hits/sec is the same as 155,520,000 hits/day. I think that it's safe to say that no one in the world gets more than 150 million hits per day of static content. Wait, there's a better way:

    1800 hits/sec * average 2 kbytes/hit * 8192 bits/kbyte = 29,491,200 bits/sec, or 29.5 Mbits/sec. What's that now, a T3 line? I know that a T1 line is 1.5 Mbits/sec. OK, so Apache on one of these boxes can fill the equivalent of 19.6 T1 lines by itself. If (a bit more realistically -- how many 2k files get those types of hits?) those are 10k files (let's not get into pictures), that's 147.5 Mbits/sec, more than filling a T3 line, IIRC, and definitely filling up approx. 98.3 T1 lines.

    What's the problem with Linux/Apache, now?

    May I suggest, if you can afford this sort of bandwidth, that you buy one of those 32CPU sun E10000 servers and call it a day? (or a server farm of linux boxes, since you're serving up static files.)

    Oh, if you're serving up >1800 files per second of 2k files, who are you?

    Oh, one more thing. If this is all on an intranet, you'll still need Gigabit ethernet if you're serving up the 10k+ files, so the sun box still applies to you.
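The arithmetic in that post is easy to redo. A quick sketch using the same rough link rate the post does (T1 taken as 1.5 Mbit/s; the real rate is 1.544 Mbit/s):

```python
# Recomputing the post's throughput figures for 2k and 10k static files
# at 1800 hits/sec.
def mbits_per_sec(hits_per_sec, file_bytes):
    """Aggregate bandwidth in (decimal) megabits per second."""
    return hits_per_sec * file_bytes * 8 / 1e6

T1 = 1.5  # Mbit/s, the approximation used in the post

small = mbits_per_sec(1800, 2 * 1024)   # 2k files
large = mbits_per_sec(1800, 10 * 1024)  # 10k files

print(round(small, 1), "Mbit/s =", round(small / T1, 1), "T1 lines")
print(round(large, 1), "Mbit/s =", round(large / T1, 1), "T1 lines")
```

Run as written, this reproduces the ~29.5 Mbit/s and ~147.5 Mbit/s figures (about 20 and 98 T1 lines respectively), i.e. the benchmark's request rate is far beyond any 1999-era Internet link.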
  • by Hrunting ( 2191 ) on Friday June 25, 1999 @02:24PM (#1832297) Homepage
    All of your stuff is completely relative.

    It's all related to how much you pay your admins and how well they administer your system. This isn't a function of the OS. Yes, Linux costs less out of the box, but an admin is going to have a harder time (and thus charge more) setting up a Linux system than an NT system. If a business currently has functioning NT systems and competent NT sysadmins, why should they switch to Linux?

    How many small businesses who are choosing between Linux and NT need to, want to, or care about the ability to cluster? People who care about this benchmark are not the same people who need to run clusters.

    Other Hardware Configurations
    How much would it cost for a company to build a Linux-happy system? Most systems built today (and the systems that we want Linux to run on) are built for Microsoft. You'd need a custom-built, custom-designed solution to truly grab all of Linux's power, and that costs money, either in man-hours or purchasing power. The results of this test would've been far more atypical if they had built both machines finely tuned for Linux. At least this time around, they weren't blatantly geared towards Microsoft.

    Security, I'd say, is 75% system administration and 25% OS. Linux has its security problems as well, most of which can be plugged up with effective network management. Many of NT's can, too. MS may be a lot more apathetic to security concerns, but they don't run the systems, they sell them. I don't consider Linux or NT any more secure than the other.

    Stability can be completely a function of management. I've heard stories of Linux systems staying up for months or years. Guess what, I've heard the same stories about NT as well. I've also heard stories about unstable Linux systems. I've seen no long-term studies done on system stability, so everything I hear about stability I file away under anecdotal evidence, not hard verifiable data.

    Change real world needs
    What's good for the goose is good for the gander. I don't see how this benefits Linux. Change the system and, whoa, Linux might perform worse under that setup. It happens to both types of OSes, and before you say, "It happens to Linux less!" find some hard data, not stories.

    The Future
    Past trends do not determine future performance. I doubt Linux will keep up its 212%/year growth and Linus has already said that upgrades aren't going to be as drastic as 2.0 to 2.2. Don't assume that Linux will advance in the next three years as it has in the past three years.

  • Perhaps, while you are creating your new law, you would take the time to spell "Beowulf" correctly? I usually avoid spelling corrections (although lord that's difficult some days) but if you're going to call someone a "dumbass" and a "moron" then perhaps you should be concerned with how you appear as well.

    This spelling flame contains no tyops :)

  • It is no surprise to see an AC posting such an article. It was just a couple of days ago that I saw some of the stuff posted on Mindcraft's site (linked to by /.) where people were bad-mouthing them because of the tests. It is posts like this that give the Linux community a bad name. It is one thing to call a foul, but it is another to act like an immature jerk and start bad-mouthing a test that was conducted in a much more controlled environment. I think that ZD learned from Mindcraft's mistakes and corrected (most of) them. They are people too, and they DO make mistakes.

    Linux is not perfect. Yes, it is better than anything that has come out of Redmond, but it still has some maturing to do. Not unlike some of the people who post here. A true sign of maturity within the Linux community would be to accept the loss _GRACEFULLY_ and go out and build a better OS. Then the next time there is a test we can go out and show everyone who is best, instead of sitting on the sidelines pointing our fingers at the referees. Take the loss, no matter how hard it is, and make what you feel so strongly about BETTER. Bitching about it ain't gonna build a better OS.
  • Although Linux doesn't have a PR agency, there's still the same kind of spin happening in this thread. Which (among the 2+ responses) is "MS has sacrificed stability/quality for performance". Lots of discussion about bypassing the HAL, super-secret internal MS interfaces, etc...

    But we need to be careful. If you'll note, the article notes that Samba won in the earlier SMB tests because there was a performance hit in NT due to the transaction log. That is a stability/robustness feature that Linux simply lacks, and would be better off having if availability and fault-tolerance are the primary design goals.

    We're treading on dangerous ground... PR is like a game of chess, and the community needs to be careful about spouting this kind of spin, which can quickly become a rallying point and then be proven foolish if it isn't well thought through.

  • Your economic argument doesn't "scale" beyond small business, however.

    Here's why -- Any company with more than a few hundred seats has a site license contract with Microsoft. The cost is much more dependent on client seats than on the number of servers. This is to cover the client OSes and MS Office.

    The cost of extending the contract to add a few additional NT servers to the mix is miniscule. Compare this to the cost of hiring capable Unix admins, and for any medium sized business, you're not saving any money with Linux.
  • > What is this, stream of consciousness writing?

    It's entirely possible some people speak English as a second language.
  • Maybe you didn't read the graph right. That's a 1-processor NT box that tied the 4-processor linux box.

  • Yes, Kurt, you hit the nail on the head.

    There are probably a hundred (a thousand?) /.'ers who could take Linux and Apache and merge Apache into kernel space, compromise everything else, and personally release an OS/Web Server combination that could easily beat NT in Webbench. The same goes for Linux/Samba. What they would have created is an O/S with dedicated application functionality.

    If Micros~1 really wants to beat Linux in general purpose operating system performance, they need to take this approach with *all* other applications. Start by integrating BackOffice, the rest of IIS, IE, Office (why restrict this brilliant strategy to server-only apps? MS should surely strive for the fastest desktop also) and their other in-house applications into the kernel. Then they will FLY!

    Of course, *some* of this is actually good from an engineering perspective. Common functions that are essential to the performance of standard and widely used services -- and can be significantly improved by moving them into kernel space -- may justify this approach. Large chunks of application-specific functionality, however, will weigh down non-users of those apps and compromise stability for those who do use it.

    Realistically, what I think MS has done here is create a "benchmark special". They have picked two high-profile applications and integrated them into the kernel a little too intimately so they can claim that NT in general is faster than Linux. The actual usefulness, of the web server speed-up anyway, is questionable. Do *any* sites actually serve that many static pages? And how many of those sites can afford the instability that such approaches bring?

    Sorry, Microsoft. What you have created is an NT/Web server/file server combination that is faster than Linux in those same areas. That does not make NT the faster operating system -- and it most certainly doesn't make it the better operating system. Meanwhile, you have pointed out what are now high-profile areas of minor weakness in Linux performance. Those will be fixed -- and fixed correctly. Thanks.
  • by esacevets ( 26712 ) on Friday June 25, 1999 @04:14PM (#1832385) Homepage
    From CMP's "Information Week" June 21, 1999:

    "When every minute of downtime can mean millions of dollars in lost revenue, companies generally rely on applications that run on OS/390, Tandem NonStop Kernel, Digital OpenVMS, or Unix operating systems. But Windows NT is increasingly being deployed... so IT managers must find ways to increase the availability of their NT environments. To do it, they're adopting products and services that promise to provide extra protection..."

    " 'Any system with lag time is unacceptable for running the application' says William Harris, NT Administrator for the Ohio Utilities. 'Money wasn't even a big deal. I's rather get quality and reliability and availability'. The organization...paid $75,000 to implement the (third party protection) system.

    Translation (for those who need it): Management is telling IT they have to transition to NT. IT says, in order to be stable, we have to add third party help. Management says: "Here's a blank check."

    It goes on to say that Unix, w/o third party software or service achieves "availability in the 99.9% range, as opposed to 97% for NT."

    Now, what's the difference to a business between 97% and 99.9%?
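
    As a rough illustration (assuming a 365-day year and the availability figures quoted above):

```python
# Rough downtime arithmetic for the availability numbers quoted above.
HOURS_PER_YEAR = 24 * 365

for availability in (0.97, 0.999):
    downtime = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.1%} availability -> ~{downtime:.0f} hours down/year")
# 97.0% availability -> ~263 hours down/year
# 99.9% availability -> ~9 hours down/year
```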

    IBM's NetFinity Availability Program guarantees 99.9 w/ NT. Cost: $220,000.

    HP Mission Critical guarantees 99.9 with NT for a mere $300,000.

    Imagine going to your boss and saying "Hey, how'd you like to save $300,000?"

    JL Culp
    Business Technology Consultant
    Chair, LPSC
  • They should be afraid. As a rock climber I once knew said (and probably quoted from someone else) "If you're not afraid, you're not alive."

    Linux should be afraid. Fear is the perfect motivator. Does it suck being afraid all the time? Perhaps. But it keeps the mountain climber on the mountain, and keeps your users from being afraid of falling behind (for whatever reason they need to move ahead).
  • > Wonder what would happen to the NT's stats if you opened up a copy of IE or Netscape and started browsing the net with it.

    Where I work, it's a firing offense to use the servers for your personal use, like web browsing. Most other companies take a similarly dim view of such activities.

  • Where I work, they spend more money buying sandwiches for meetings than you would pay for an NT server license.

    Not everyone works out of their bedroom.
  • > Give me a break. If you need them, today, then put them in. Otherwise use something else. The world is full of armchair quarterbacks.

    "Linux: Do it your damn self, and stop bothering us."

  • Why do you assume that Linux doesn't have hundreds of really good coders working (at least effectively) for pay? Strictly in the kernel (and off the top o'my head): Alan Cox, Steven Tweedie, Dave Miller, the guys VAR pays to port to Merced, (I think) Ingo (God of the P3), and I know a few others. Going farther out, Apache is paid for by IBM now, Jeremy Allison is on SGI's payroll to do JUST Samba (and they've admitted to paying other people to work on Linux, both the kernel and apps).

    Please to be pointing out the PhD theses written by any of them?
  • ok, here's my $0.02 on Windows 2k (and, fyi, I run Linux most of the time).

    (First and foremost, these are just my impressions of Win2k...not cut in stone by any means)

    First, my computer is a P200 MMX, 64 megs, ~1 gig NTFS, ~2 gigs ext2. W2K found all my devices and configured them almost perfectly. The only thing it didn't get was my Voodoo 2, but I can run GL Quake in Linux :-)

    The system runs faster than NT 4 ever did. Some of you may scoff at NT 4's performance, but let me say this: I started using Linux because NT 4 was too slow. W2K (approximately) matches the speed of Linux in performing tasks (starting WP vs. starting Word97). There's only one other nice change: it hasn't BSoD'd yet. It's stable and quick.

    Now, for all you Linux zealots: problems w/ win2k.

    It's a beta. I understand that. But it really shouldn't stop being able to look up things via DNS. It's an infrequent problem, but it's annoying.
    Next, it does kinda take over for you too much. I was surprised, after a while of using W2k, that my application icons in the Start menu had disappeared...Windows had a cheerful message telling me that it had optimized my Start menu. I really would have preferred it if Windows had asked me before doing that, but ah well. Next, I used to run NT in 1600x1200 perfectly. W2k seems to have trouble drawing at that resolution...I had to revert to 1280x1024 (fyi, it's a Matrox G200 SD, 8 meg - drivers come w/ Win2k).

    Conclusion: if MS can clean up the problems, Win2k will be *very* nice. Although it can't run servers up the wazoo like Linux can (then again, NT Workstation was never designed to run servers, and therefore shouldn't be tested, IMNSHO), it runs well, far better than any previous MS OS.

    Note to MS: Open up the source to NT/W2K. Open Source development of NT would speed up removal of bugs, and I would think that NT would probably speed up as a result. Plus, if the good of Linux and the good of NT could be mixed together into a GPL uber-OS, I would be happy...hell, I would even pay for it...

  • Maybe when you were working there, you thumbed through a ZD publication or two.

    What do you know? They're full of ads for Intel and NT-based products. Big Super Surprise!
  • Currently the VC++ from Microsoft produces far superior x86 code to GCC. I'm surprised that nobody has commented on this fact yet.

    Although GCC is one of the most portable compilers, its RTL generation routines aren't well suited to the register-poor x86 architecture. The main difference, however, is the code scheduler. GCC doesn't do much P6-style optimization, whereas VC++ in conjunction with VTune from Intel is quite an effective optimization tool for the x86...

    It would be interesting (but unfortunately an impossible task) to find out how much of the difference is due simply to the difference in compilers...

    Just my 2 cents' worth...

  • Isn't that kind of shortsighted?

    Linux has many advantages over NT, including increased configurability, open source, GNU utilities, etc.

    A few skewed benchmarks don't mean anything. There are benchmarks out there that show Linux beating Windows NT. Benchmarks don't mean that much when there are a lot of other benefits to a platform. It's kind of bad to base your decisions on one benchmark when you have to consider everything else that you get with the system.
  • Many posters have indicated that this was a benchmark designed by Microsoft to make sure that it only measured the performance of the things that they were expecting to do well on.

    Fine. Why don't you design a benchmark that you think the free Unixes will do better at? I mean computer performance, not price performance. :-) One obvious thing that comes to mind is this risible Samba thing being replaced by NFS. Another is generating dynamic pages instead of static ones. But that's just the start. What else would prove interesting?

    And what about running a variety of operating systems on the same hardware? What about BSD? What about Solaris for an x86?

  • by Anonymous Coward
    NT's kernel was designed for symmetric multiprocessing from day 1. Every part of the kernel was written with thought given to whether or not it needed to be locked against another processor.

    Linux, on the other hand, was originally written for a single processor. As I understand it, it only recently started supporting SMP -- and then by having large-granularity locks that keep multiple processors out of huge sections of the code at a time. (The article talked about Linux having a single lock around the whole TCP/IP stack!) To fix this, you basically have to go over every line of the code and lock only the things that need to be locked.
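
    A toy sketch of the granularity trade-off described above, with Python threads standing in for CPUs (the lock names are illustrative, not real kernel symbols):

```python
import threading

# Coarse-grained: one big lock that all subsystems contend for.
big_lock = threading.Lock()
# Fine-grained: independent locks, so unrelated work doesn't serialize.
tcp_lock = threading.Lock()
disk_lock = threading.Lock()

stats = {"tcp": 0, "disk": 0}

def tcp_work():
    with tcp_lock:          # under the coarse scheme this would be big_lock,
        stats["tcp"] += 1   # blocking disk_work() even though it's unrelated

def disk_work():
    with disk_lock:         # can proceed while tcp_lock is held
        stats["disk"] += 1

threads = [threading.Thread(target=f) for f in (tcp_work, disk_work)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stats)   # {'tcp': 1, 'disk': 1}
```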

    The interesting thing to me is whether the Linux development model will support this well. Writing SMP code is much harder than single-processor code. All those race conditions, deadlocks, and missed data contentions to worry about. People really have to understand what they're doing to get it right. Already there are complaints about the 2.2 kernels not being as stable as the earlier single-big-lock kernels.

    Of course lock granularity doesn't explain the whole picture. NT still trounced Linux pretty badly in even the single proc case. There, I suspect it's just a matter of Microsoft having a greater number of highly qualified people working on the system than Linux does. Not that Linux doesn't have any highly qualified people, but rather that MS can get more of them. Paying people for their labor actually seems to work sometimes.

  • Posted by Jeremy Allison - Samba Team:

    Good question, but as I wasn't there (I was invited, but declined as I was giving a talk at the Paris Linux Expo) I can only speculate.

    I doubt it was the context switches, as in all the tests I've done on NetBench these are down in the noise. In this case it may have been the filesystem, as it takes some tricks to ensure you are running with an optimal ext2 setup (and remember there were NT kernel people there tuning the NTFS setup). But Ingo at Red Hat has done quite a bit of work on this, so I'm still hopeful for a 2.4 re-test.


    Jeremy Allison,
    Samba Team.

  • How many different servers can you put onto one machine with NT? With Linux? What kind of performance do you get when you have a mail server, DNS server, Web server, etc. all on one machine on NT versus Linux?

    These are all things to consider before dismissing Linux because of one benchmark.
  • Remember that the test being conducted (although not mentioned in the article) was with tuned boxes. The NT guys tweaked their system, and linux was tweaked by some linux uber-geeks. Out of the box, Linux still beats NT.


  • Benchmarks are good for one thing: Benchmarks.

    Benchmarks are fine and great and all, but in all my personal experience, changing servers from NT to Linux gave everyone a performance increase... I know this is merely anecdotal evidence at best, but that's what has worked for me.

    [Silly Analogy]
    As for the samba tests.. it's something like this: Microsoft makes up a game. Microsoft doesn't tell you how to play the game. You try to learn the game... Microsoft beats you by a little.
    [/Silly Analogy]

    Of course, this test doesn't show reliability though.. how long could they each handle those loads? Just the (what, an hour?) time it took to run the test, or 24x7 for 6 months....

    Anyway, to sum up every other post we'll see: Well, we'll get better.
  • for just a second...

    "What fails to kill me makes me only stronger."
    -F. Nietzsche

    Thank you. Now quit crying and start coding.
    That is all.
  • by David Price ( 1200 ) on Friday June 25, 1999 @12:52PM (#1832501)
    In the Mindcraft tests, both the NT and Linux boxes were run quadruple-barreled with four ethernet cards. NT has a way to bind a CPU to a particular NIC, so on a 4-CPU machine, one CPU can be tasked exclusively to each NIC. I believe the ZD tests repeated this configuration.

    I think this feature explains, at least in part, NT's superiority in multiple-CPU raw service.

    A side note to flamers: please, PLEASE don't treat these results as suspect or corrupt. I don't think they are. Don't think of them as a defeat, think of them, like ZD said, as a roadmap to show where Linux needs improvement.

  • by Knight ( 10458 ) on Friday June 25, 1999 @12:52PM (#1832506)
    I used to run these benchmarks, and worked with the people who wrote them. They were designed to work best on Intel hardware with NT and IIS. It was intentional, and to use a ZD benchmark in this type of comparison is laughable.
  • by Codifex Maximus ( 639 ) on Friday June 25, 1999 @12:52PM (#1832511) Homepage
    NT has put many services in kernelspace and has largely bypassed their HAL in favor of multimedia performance - especially video.

    NT uses a multithreaded process model for IIS and SMB file-services that results in higher throughput but less stability. A single thread of the main process may die without completely destabilizing the server but if the main process dies then all child threads die.

    Linux divorces the graphical user interface from the kernel thus ensuring stability (framebuffers are available for video enhancement though) and implements most services as userspace daemons.

    Linux uses the forked process model to provide services to multiple users. This model achieves stability in that if one process dies, the others continue as if nothing had happened. Both Apache and Samba operate in this way, I believe.

    NT has chosen performance over stability.

    I believe that with kernel enhancements and profiling, any bottlenecks in the networking system can be eradicated causing Linux to perform much faster and possibly even beat NT in tests such as these.
  • I don't know that Samba and Apache selected the forking model by choice. Not too long ago, threads really didn't work under Linux, so there was no choice. I had a similar experience while working on a video game called Golgotha. We *really* had to have threads, but they didn't exist or didn't work properly.

    The kernel has had fundamental support for them for a longer period of time, but a reliable thread safe libc (and libX11) did not surface until recently (the last year or so). Even now the distributions are struggling to get everything converted over to the new glibc.

    Even though Linux now has good thread support, gdb has trouble with threads (last time I checked, which was a while back). Also, Apache and Samba are not Linux-only products. The safest and most portable approach to take back then was to use a forking model.

    Other than taking up too much memory and thus causing swapping, I don't see why a threaded model would be any faster than a forking model if pooling is used. Pooling keeps a fixed number of processes running all the time and dispatches requests to them. When they finish, they sleep and wait for the next request. This way you have the safety of a separate process space and you avoid the forking (as in "forking piece of sheet") overhead. You let each pooled process have a fixed lifetime in order to clean up any leaked memory. Some systems have libc leaks that can't be avoided, so this is important to long-term stability.

    You might argue that a thread context switch is less expensive than a process context switch. Under NT (and 95), all threads are scheduled without regard to which process they belong to, so at most you could skip some page-table changes when going from one thread to the next. I doubt that is the case, because threads jump from ring 3 to ring 0 and back while executing system functions (ensuring they have different page tables). Also, threads have a separate segment mapping for fs. A 3->0->3 context switch is accomplished by an interrupt in NT. One way to speed things up in Linux might be to have a way to pool system requests up in ring 3 and then dispatch them all at once in ring 0. NT did this with their GDI code. Doing this for all system calls wouldn't require a huge change in the kernel, but it would make user code harder to write, as system calls would have to be parallelized. GDI doesn't require any changes to user-level code, because there are no return codes to worry about for graphic drawing operations. Xlib has the ability to queue up commands as well. But that is not going to speed up Apache. :)

    Another possible advantage to using a thread model is faster IPC. With something like Apache, there shouldn't be very much IPC going on except to dispatch a new request and possibly lock protect common files.

    I have not looked at a single line of apache code, so I can't say for sure, but there seem to be a number of httpds running all the time, which would signify that it is using pooling. So why is it slower than the NT counterpart?

  • The only thing your post proves is that you don't have any operational experience with NT Server. The 16 color VGA or S3 driver running is not exactly "causing slowdowns" on your server. And if something is going to crash on that server, it's certainly not going to be the video driver (unless you have a hardware problem).

    You're starting with the conclusion (NT has kernel graphics, Linux doesn't) and working backwards.

  • Ok, since many people still thought that NT would come out on top in the new tests, the results aren't much of a surprise...
    But here is our chance to show the world [and MS] why Linux and other OSS projects are such a good idea. By quickly implementing fixes to the problems brought to light by these tests, we can prove how much better OSS is.

    Proposal: Annual or semi-annual benchmarking of NT [or the current MS server platform] and Linux [and any other OSes that want to compete, I suppose]. By doing similar tests regularly, we can show how efficient OSS can be at fixing current shortcomings [as if 24hr bugfixes aren't enough].

    Just a thought.
    BTW: Sorry for the overuse of the "OSS" buzzword ;)

    I'd help with implementing fixes myself, but I'm not exactly an expert coder [I don't think "Hello World" will help Linux beat NT]


    If at first you DO succeed, try not to look astonished!

  • I think that this should make us zealots think twice about where Linux stands. I would very much like for Linux to be the fastest, but it isn't. I know many will advocate Linux's other strengths, like reliability, but we really don't know for sure, because there aren't any real tests done on this. And besides, I am getting the impression that Windows 2000 may in fact be very reliable (can anyone with a beta confirm this?). That leaves one of the last advantages: that Linux is open. But being open source is one of the most fundamental advantages of all. Even now, there are many people improving the kernel as a direct result of these tests. Linux 2.4/3.0 should be a much faster web server.

    But I don't think that this means that speed for web serving should be any more important. Getting back at Microsoft is not a reason to improve Linux in my book. There are many other fronts that Linux is heading toward, like the desktop, embedded devices, and handhelds. I can imagine that if Linux is tweaked for web serving more than normal, some test will find Linux useless for embedded devices or something else that is important.

    Microsoft right now sees Linux as direct competition as a server. It will be nice to see Linux compete back but don't expect NT to stand still. There are other servers also. How does Linux compare to Mac OS X?

    And no more excuses. Linux is not the fastest. Deal with it.

    For now.


  • > but if the main process dies then all child threads die.

    AHEM, HELLO, BULLSHIT. This exactly DOES NOT happen under the Windows process/threading model, and IS what happens under most other models. I personally consider it a FEATURE when the main thread cleans up the subordinate threads when it dies. If the main thread dies, YOUR APPLICATION HAS QUIT. There is no reason to keep the rest of the threads around.

    I defy you to show how EITHER model adversely affects system stability.
  • I agree completely.

    NT is a 9-second Mustang that has something major break every couple of runs.

    Linux is a Toyota Supra Turbo (my example) that can make it down the track in 11s, but also corner, stop, and go hundreds of runs without problems.

    Part of NT's speed is from specialized hooks into the kernel for IIS, and SMB. They traded stability for performance.

    Linux' design concentrates on stability, rather than speed. No specialized proprietary hooks into the kernel that add complexity. Not quite as fast on the track, but you don't have it blow up every couple of passes.

    For the price difference between NT and Linux, you can always spread the load over an additional machine to get the performance and keep the stability.

    There is no question for good administrators what is more important. I choose stability and well-roundedness over the 9-sec. mustang any day...
  • by schala ( 63505 ) on Friday June 25, 1999 @12:56PM (#1832560)
    "In this corner, a AMD K6-300 with 256mb RAM, 10 gigs of disk space, running (insert your favorite distribution)..."

    "In the other corner, two cardboard boxes; one labeled 'Windows NT Server,' the other 'Microsoft IIS'..."

    This all inspired by:
    This amounted to a 41 percent performance difference but showed that, even on cheaper systems, NT came out ahead.

    (Yeah yeah, apples to apples...)


  • > Pretty much everybody has eschewed the microkernel model at this point

    Is there ANYBODY left on slashdot who knows what the hell they're talking about?

    NT is a microkernel. They're embedding it now.

    BeOS is a microkernel.

    MacOS X is a microkernel.

    HURD is a microkernel. (okay, that doesn't count)

    1800 hits/sec * average 2k/hit * 8192 bits/kbyte = 29,491,200 bits/sec, or 29.5 MBits/sec.

    In other words, more than enough to saturate your 100Mbit ethernet line. (I think they used 4 NICs in the original test.)

    I think you're making a much better pro-Linux argument than all of the folks here jabbering about the $1000 Linux webserver beating the $1000 NT server.

    Essentially the only thing the benchmark shows is that almost no one has the sort of bandwidth that either IIS or Apache can put out. Perhaps for some internal solutions, but if you want blindingly fast, you're probably not doing your transactions over HTTP. Just bothering to measure this stuff is completely ridiculous.

    I'm sure many of you write "system administrator" on your tax forms rather than "Linux advocate", so keep it in mind that if you're ever faced with a problem that requires sort of throughput, you can solve it with a cluster of NT/IIS boxes. Until you run into that problem, keep doing your job by using Linux/Apache without worry.

  • I work at a major computer manufacturer and my job is testing the stability of our enterprise storage solutions under different configurations. RH 6.0 Linux, NT 4 & 2000, NW 4.2 & 5, Solaris, etc.

    I have done testing which shows that NT is less stable than Linux under heavy load, even when using some 'beta' Linux drivers for our controllers. (Under heavy I/O for extended periods, Linux has seen load averages well above 100 without problems, while NT quite often BSODs when tests at these levels last for extended periods.)

    This is not FUD, it is the truth.

    It is ignorance, like what you are spreading, that is keeping Microsoft's pockets lined.
  • On the main page of their test, ZDNet states: 'despite significant tuning improvements made on the Linux side, Windows NT 4.0 still beat Linux'..

    They didn't, however, mention on the front page the fact that they formatted the file-serving space into 4 separate partitions to improve WinNT's performance, did they?

    Although I can accept that Windows NT might possibly be able to beat Linux, the wording of that review doesn't make me particularly confident that it was 100% unbiased.

    On a completely off-topic note: while I was editing my preferences, the number of comments on this story more than doubled, in about 5 minutes. Wow.
  • This is a weird, amped-up benchmark most closely approximating a really small but insanely trafficky intranet.
    I certainly do not routinely see NT boxes performing in such a manner in the real world- and I think it's a very fair question whether even these crazy 4-way 4-ethernet-card monsters would stand up to real world conditions acceptably.
    I understand one issue is latency- in other words, if it is faster for NT to serve 200 pages to one place and have another request sitting there for 20 seconds, it does it unhesitatingly to get the numbers measuring higher. Apache apparently is much more willing to pay attention to that one request sitting around getting old, and to balance out the load so that nobody gets too lagged. Of course, this is not being tested for.
    This has nothing to do with MS having better people: it is almost entirely due to tradeoffs being made entirely in favor of benchmarks, just to get to a place where they can produce numbers like this and have people saying, "I suspect it's just a matter of Microsoft having a greater number of highly qualified people working on the system". Never forget that benchmarks are by their very nature an exceedingly narrow view of what the job really is. As such, the numbers become meaningless- not only meaningless in the sense of 'I don't care, I'm sick of rebooting the thing', but meaningless in the sense of producing real-world results that measure up to what the benches suggest. It strongly appears that NT servers are capable of flurries of extreme activity, but also of lag pockets and serious unreliability issues- in other words, even if the machine has not crashed, your chances of getting guaranteed good response are not that great- the NT server is busy running around serving something it has cached to the people in line after you, because doing that increases its benchmarks drastically. This consoles you not ;)
  • > That is, Linux is LESS STABLE than microkernel architectures, where a bad driver or module can't bring down the system. Linux performs better than microkernel systems.

    Weeeelllllllll.... depends on the microkernel.
    NT claims to be a microkernel, and from what I've read of the design docs, it kinda sorta is. But it loses on the device-driver front, because a bad driver will bring down NT every time.
  • ... and next time the advantage will be mine.

    You think that the Linux-kernel coders roll over and play dead? I don't think so...
  • > Last message I read about it on klm says that performance has recently matched apache.

    Any Apache developer will tell you that's nothing to brag about.
  • by Frater 219 ( 1455 ) on Friday June 25, 1999 @01:12PM (#1832698) Journal
    I find these studies inadequate as data to inform a purchasing decision. While MS will claim that they have proven NT to be better than Linux for Web and file serving in the general case, I disagree. Here's why:

    These studies do not address price/performance. P/P is one of the most important metrics in making a purchase decision; these studies measured only peak performance. That the prices of the Linux-based and NT configurations tested are not given indicates to me that Microsoft wishes price to be disregarded as a factor in purchasing decisions. To do so would be an irresponsible act for any purchaser. Consider that NT license fees increase dramatically with number of clients, while Linux's price is constant and lower than any NT option.

    These studies do not address options such as clustering. Clustering is a common solution to the problem of constant high client load. It may well be a better solution (in P/P and in peak performance terms) than simply boosting processing power with multiple processors. It also has reliability advantages.

    These studies are not generalizable to other hardware configurations. While MS will claim that they prove that "NT is faster than Linux" inherently, they do not. The HW configuration was selected for the first Mindcraft study, which has been proven to have been engineered to favor Microsoft. Hence the hardware configuration itself is suspect. An across-the-board comparison on various configurations, with P/P as well as peak performance measured, would be a more reasonable comparison of the virtues of the OSes themselves, and would also highlight particular combinations of HW and SW that are worthy of consideration for purchase.

    These studies do not address security. The release version of MS IIS has outstanding security holes, including the recent one disclosed by eEye [eeye.com]. This was a root compromise which took eight days for Microsoft to admit, and two more to fix. Microsoft classically avoids the subject of real-world security, preferring the proven-worthless tactic of security by obscurity. Security, of course, is a major consideration to be made in purchasing.

    These studies do not address stability. Stability, like P/P, is an important metric for purchase decisions. It helps one determine how expensive a system will be to maintain -- one that requires regular resetting or reconfiguration in order to keep operating will cost in manpower; one which crashes a lot will cost in downtime. Downtime costs money in an enterprise situation, and hence should inform purchase decisions strongly.

    These studies do not address changing real-world needs. A real server system is rarely left serving static Web pages forever. When needs change, performance will likely change as well. Building a system to meet a single, narrow-minded need is likely to lead to a dead end in terms of scalability.

    These studies demonstrate nothing about the future. Based on past trends, one can expect the situation for Linux-based OSes to get better and better. The next version of Windows NT will likely offer decreased performance on the same hardware (due to increased resource consumption by the OS itself) whereas future versions of Linux will likely improve performance. Buying heavily into Windows NT leads one to platform lock-in which may damage one's ability to escape the expensive effects of bloat.

    In short, I do not believe that MS has demonstrated that there are advantages to purchasing an NT system over a Linux-based system for real-world file and Web service. Wise system administrators, IS/IT managers, and CIOs should stick with the proven security responsiveness, stability, price/performance, and scalability of Unix-based systems, possibly including Linux-based systems, rather than betting the farm on the Johnny-come-lately Windows NT.
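The price/performance point above can be made concrete with a toy calculation. Every number below is an invented placeholder (hardware cost, per-client licence fee, hit rates), not a benchmark result; the only point is the shape of the curve as client counts grow:

```python
# Hits/sec per dollar, comparing an NT-style per-client licence with a flat
# free licence on identical (invented) hardware. All figures are made up.
def price_performance(hits_per_sec, hw_cost, licence_per_client, clients):
    """Peak hits/sec bought per dollar of hardware plus licences."""
    return hits_per_sec / (hw_cost + licence_per_client * clients)

for clients in (10, 100, 1000):
    nt = price_performance(4000, 20_000, 40, clients)    # faster, pays per client
    linux = price_performance(1800, 20_000, 0, clients)  # slower, flat cost
    print(clients, round(nt, 4), round(linux, 4))
```

With these placeholders the faster box wins on price/performance at small client counts, but the per-client licence term eventually dominates and the cheaper box overtakes it, which is exactly the effect a study that omits prices can never show.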
  • > Talk about troll.

    Ok. If you insist.

    > Replace Microsoft with Linus Torvalds.
    > Hello? Anyone home inside there pal? He doesn't make the GNU tools, nor Apache, or any of that. He makes the kernel. Go read before you open your mouth and say something stupid.
    > Linux doesn't run SMPs well does it?
    > NT isn't designed to be a super computer OS. It's a PC operating system, a general purpose OS. DUH.

    It's a PC OS huh? CP/M was too, correct? General purpose? Yeah right. I could list things that will not run on NT (win progs) but I won't make you look any dumber. You do good enough on your own. DUH.

    > slashdot.org doesn't run IIS? uh..it's got nothing to do with microsoft..so what?

    If you don't see the implication there you need help...

    > EBAY, Microsoft, Dell run IIS, and they have much bigger websites than slashdot.

    Yeah, I never looked at eBay, and I hate Dell. As for the M$ site, *every* HTTP request I send their servers returns 'Remote connection reset by peer' in Netscape. Nice server.

    > Microsoft don't grow engineers on trees, their engineers come from various backgrounds (including unix). They have enough money to hire the best in the world, and they do.

    Yeah, you'd think at a certain point there's such a thing as ENOUGH money. And when they hire a programmer, he may be creative, smart, innovative, all that crap. But he is no longer 'pure'. He prolly expects to get paid when he goes out with his wife for his 'service' (dinner, not sex).

    > You probably have your face stuck up somewhere dark not to realise you can't compare vi or emacs to Office 2000 and complain how large Office is etc. Office does MUCH more, and Microsoft's products simplify working, which is more than I can say for Linux/Unix.

    All M$ products are overly bloated for one thing, and Office is no exception. Sure it does lots of kewl little things, but hell, I can make a picture that does lots of kewl things with two pencils and some resin (from a tree). They simplify working by making everyone work the way THEY want them to. Nice company.

    > Sure, there are you guys out there who don't want things to be simple, you'd rather exercise your brains doing "hard" things like mounting NFS/SMB drives by typing rather than doing it in a few clicks.

    You go ahead and play with your mouse. We know you depend on that little thing. We, however, know how to do things without a mouse, and will continue to. Guess who's gonna be using whose programs here?

    > I prefer to have the OS do as much as it can, while I get on with the real work. If by any chance I need to do things manually, I go and do it.

    Oh, Win does as much as it can. Mostly collecting files it doesn't need, eating your prefs/settings, and if you are really lucky it might eat a partition or two. Nice OS.

    > And what's your problem? Are you on medication?

    Why, you got something good?

    > MS Write, MS Bob? So what? How about MS Windows, MS Office (Word, Excel, Access, Powerpoint etc), MS Visual Studio, MS J++ (the best selling Java product), MS Exchange, MS SQL Server, MS Internet Explorer, MS IIS, MS COM (the most successful component model in the entire world), MS MTS, MS DTC..all pretty much de facto standards now...and that's only to mention a few.

    The ones in that list that actually are standards got there only through brute force and M$'s anticompetitive nature. How can you compete with 100 bucks in your pocket when they've got a billion they'd just as soon stick up your ass as anything?

    > Unlike Linux users MS doesn't claim not to make mistakes, in fact Gates even showed the video of Win98's BSOD last year, and again this year at COMDEX.

    So you compare Linux *USERS* to Microsoft's *PROGRAMMERS* eh? You think every Joe who uses Linux is a programmer? I pity you and your world. That video is something I'd like to see again tho.. always good for a laugh. Although what's the point of reshowing a BSOD, truthfully? Who hasn't already seen more of them than they can count? Kinda redundant if you ask me.

    Happy clicking. I'll be off to play around with my Linux box, to change its basic settings. Like to see NT (95/98/2K) let you do that. *chuckle*
  • that doesn't explain the HTTP perf diff though... Is IIS also in the kernel on NT?

    Consider who you're talking about; I'd guess that while IIS itself probably isn't in the kernel, it accesses top-secret M$ stuff which is.

    also, why can't we do a similar multi-threaded implementation on Linux?

    I don't know. It probably is possible; it just hasn't been done yet. I'd consider these tests to be a sign that it needs doing.
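    A minimal sketch of the thread-pool request model in question, using only the Python standard library. This is purely illustrative: `handle` and the request paths are invented stand-ins, and the sketch says nothing about how IIS or any Linux httpd is actually implemented.

```python
# A fixed pool of worker threads pulls requests off a shared queue, so one
# slow client never monopolises the whole server process.
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    """Pretend to serve one request (parse, look up, respond)."""
    return f"HTTP/1.0 200 OK -- served {request}"

requests = [f"/page{i}.html" for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    responses = list(pool.map(handle, requests))

print(len(responses))  # 8
```

    The same shape works with pthreads in C; the question is whether the kernel and the server cooperate well enough under load, which is what these benchmarks stress.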
  • Hm, if you look, there are two posts by this AC which are EXACTLY the same. There is an interesting thread on the first one. This one is um... redundant.
  • Right on. I tried saying this after the Mindcraft tests, and half the Linux community sent me obscene e-mails. Hopefully they've all grown up since then.

    As I've mentioned before, I run Windows 2000 Server beta 3 over at WonkoSlice [wonko.com], and it's really, really nice. Granted, as far as stability goes, it is less stable than Linux, although I haven't had a Windows-related crash on my Win2000 box ever since I first booted it up about 4 months ago. But as far as performance, ease-of-use, and speedy setup go, it leaves Linux in the dust. When I first installed Win2000, I did so with zero prior knowledge of how to run a web server or how to configure Win2000. I had my server up and running flawlessly within two hours. When I installed RedHat Linux with no prior Linux experience and only minimal web server experience, it took me days just to get the stupid system running correctly and get all my hardware installed, and by the time I started trying to set the web server up, I had totally screwed the system up and had to fdisk the partition and restart from scratch. Windows was much easier.

    Wonko the Sane

  • by Eros ( 6631 ) on Friday June 25, 1999 @01:19PM (#1832782)
    First thing.... These tests were much better, but they still manage to miss the mark.

    Ok, just in case anyone still thinks these tests are worth a shit: I'd like to clarify that this is pure and unadulterated shit. There, now that the childish remarks are through, I'll do some intelligent speaking.

    First off, I don't doubt this is shit from the get-go. I'm an MCSE (my work paid for it) and I know the insane amount of system resources it takes to run an NT Server alone. Yes, I know how to properly configure an NT Server, right down to streamlining the registry. Plus, we have all been through the multiple restarts and the memory that applications won't let go of after use. Not to mention all the swapping and processing overhead. Don't get me started on IIS 4.0.

    There is a new bug found almost daily that spells doom for these servers. Plus, IIS 4.0 doesn't have anywhere near the features and configuration possibilities that Apache does. That said, Apache needs someone who knows it inside and out to configure it, due to Apache's extreme flexibility.

    Say that average Joe Smith sets up his Apache server and uses .htaccess files on commonly accessed files nested five directories deep. Not uncommon on big sites where management is broken up. Well, for every request on the document, Apache will check each .htaccess file per directory: it checks from the root to the next directory to the next, merging the config files it finds along the way, making Apache check 5 times per document requested. So if this file is accessed 100 times, Apache will check 500 times for the rights to that file. On the up side, if you need infinitely specific rights to files, this is a godsend, and the cost can be reduced by placing commonly requested documents near the root of the server (don't fork the directories too much) and using as few .htaccess files as possible. This is why you should try to place as much configuration as possible in the global configuration files, and preferably in the server configuration file. I'll explain the last part of that sentence next.

    When Apache is looking up the rights for a requested file, it checks certain files in a certain order, and within those files it checks the directives in the order they appear in the config file. Meaning: if that same .htaccess file that is already slowing things down also has the directive for the directory's most-requested file near the bottom, requests will take longer. Maybe not whole seconds longer, but enough, on heavy sites, to make an impact.

    These are just two of the many configuration tips for Apache a person can pick up when they RTFM (Read The Fucking Manual) or even read the source.
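The per-directory .htaccess walk described above can be sketched in a few lines. This is a toy model only: real Apache behaviour depends on the AllowOverride setting and the docroot, and the paths here are invented for illustration.

```python
# For each request, list the directories whose .htaccess would be consulted,
# from the top of the tree down to the requested file's own directory.
from pathlib import PurePosixPath

def htaccess_checks(url_path):
    """Return the .htaccess paths checked (and merged, top-down) for one hit."""
    d = PurePosixPath(url_path).parent
    dirs = [p for p in [d, *d.parents] if str(p) != "/"]
    return [str(p / ".htaccess") for p in reversed(dirs)]

# A file nested five directories deep, hit 100 times:
deep = htaccess_checks("/a/b/c/d/e/index.html")
print(len(deep))         # 5 checks per request
print(len(deep) * 100)   # 500 checks for 100 requests

# The tip above -- keep hot files near the root -- shrinks the walk:
print(len(htaccess_checks("/a/index.html")))  # 1
```

The walk is why moving configuration into the server config file (checked once at startup) beats per-directory .htaccess files (checked on every request).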

    And on top of all that, IIS doesn't have as flexible a rights system, nor does it handle dynamic pages as well as Apache. In fact, IIS 4.0 will work fine if the site isn't that complicated, the pages are static, and the machine is so big it won't ever see a processor load near 100%.

    Apache has that complete-control rights system, it handles dynamic pages beautifully, and it doesn't freak when heavy loads hit. It just keeps chugging away.

    As for file serving? I can't say; I'm nowhere near an expert at Samba. But I do know that my Linux box boots faster, handles heavier loads better, and its memory management is beautiful. And to make another remark:

    RedHat should not be the version of Linux they pit against NT. Sorry, this isn't a direct "RedHat sucks" type deal. It's a "use Slackware or something" deal, so you can minimize the system to do only what it is supposed to do, and recompile everything to be optimized for the system's hardware. Maybe not even Slackware, just something streamlined. RedHat is actually a great system for the home user; that's the way they seem to be heading nowadays, and I applaud them for it. My, it's now easy enough for my mother to use. :)

    Personally, once again, you can look at the source of the tests and wonder why the outcome is the same. These companies are heavily dependent on Microsoft products, and some have been funded by Microsoft. Mindcraft even did their tests in Microsoft's labs. Of course they aren't going to say anything bad about Microsoft.

    The real test should be: here is X amount of dollars, put together the best system you can. Linux would kick the fucking shit out of MS. For the cost of the software alone you could put together a Beowulf cluster that would crush any NT Enterprise 4-way SMP box. I know; I tried this before, when installing many NT systems to upgrade a hospital. Personally, I won't go there even if I'm shot. But that Linux cluster is up to this very day without a reboot, performing critical storage and access control for CAT scan images. Meanwhile, the NT clusters (if you can call it true clustering) are constantly having parts of them rebooted.

    Whatever, don't believe this stuff. It's just FUD and the media looking for conflict.

    Eros -- I know what every file on my box is there for..... Do you?
  • NT has preemptive multithreading, and IIS uses NT's threading to do its work (including getting requests and assigning them to a thread). You won't see the problem you're describing: that's the whole point of threading, and why NT is faster than Linux. It doesn't matter how big a pipe a certain user has; NT will assign a timeslot to that thread, then move on to the next, and all threads get equal priority.

    It's more likely that Linux is guilty of what you describe (certainly, we know the IP stack of Linux is). And the performance of Linux on SMPs shows how guilty it is of making other tasks wait, because of its lack of threading (or use thereof).
  • > NT is a microkernel.

    As far as I know, NT does use a limited microkernel. But unfortunately this microkernel is not the only thing running in supervisor mode: the device drivers, the GDI, and Win32 are also located within the Windows NT Executive as of NT4. I have no real idea how this looks in W2k, but my guess is that it is basically the same.

    The interesting part is the excuses Microsoft presented to their users when they moved Win32 into the Executive. I've seen 2 different excuses for this:

    1. from: Moving Window Manager and GDI into the Windows NT 4.0 Executive by Dave Leinweber and Mark Ryland
      One of the side effects of this change is that now the Window Manager, GDI and graphics device drivers have the potential to write directly to other spaces within the Executive, thereby possibly disrupting the stability of the whole system.

      However, from the user's point of view, that potential to disrupt the system has always existed. If the GDI process in Windows NT 3.51 should fail for any reason, the user would be presented with a system that appears to have crashed. The fact that the kernel is still operating is invisible to the user, because it simply appears that the system is not responding. Such is the critical nature of the Window Manager and GDI.
    2. from: Inside Windows NT, 2nd edition (p. 51), by David A. Solomon
      Some developers wondered whether moving this much code into kernel mode would substantially affect system stability. The answer is that it hasn't. The reason is that prior to Windows NT 4.0, a bug in the user-mode Win32 subsystem process resulted in a system crash. (...) even a Windows NT system operating as a server, with no interactive processes, couldn't run without this process, since server processes might be making use of window messaging to drive the internal state of the application. With Windows NT 4.0, an access violation in the same code now running in kernel mode simply crashes the system more quickly, since exceptions in kernel mode result in a system crash.
    On the first issue, I must say that I have several times seen my Linux machine apparently go dead, and fixed the problem by telnet/ssh-ing in from another machine and killing netscape. If I were just running a server on the computer, I probably wouldn't have any GUI up at all.

    The second issue is worse. IMNSHO, what Solomon describes here is a straightforward design flaw in NT. That an error in the GUI can take services down is not acceptable in a system your company depends on. On top of that: when the user-level GUI crashed, the machine would die, but most likely without further damage. When the kernel-mode edition crashes, it might do so by writing outside of its own memory pool, and therefore might destroy data in the filesystem, etc.

    The above paper was found on Microsoft's webpage sometime this spring (it is from April 1996), but I was unable to find it quickly today, since Microsoft had already invalidated the old URL.

  • Without knowing what configuration those machines were tested in (PC Labs didn't disclose that information in their article), that's going to be a little difficult.

    Microsoft is trying very hard to say "NT is better! See! See!". But I can take that claim apart by simply asking any NT administrator how many times they had to reboot their "4,196 hits/minute" NT box, compared with the measly 1,800 Linux put out...

    Linux is more reliable, and has greater flexibility (courtesy of the Unix philosophy of piping and making everything modular). No benchmark can, or will, ever convince me that NT is more stable than Linux, or more flexible. Maybe NT is faster at some things; whatever parameters were used for the benchmark obviously bear that out.

    But I'll ask you all one question: where do you think Linux will be one year from now? Think it would beat W2K?

    That's the ultimate question.. Microsoft may have a performance advantage (gasp!) right now.. but we all know how quickly open source moves forward, and how quickly bugs are fixed. Even Microsoft can't beat the distributed efforts of tens of thousands of developers working in concert. No corporation on the planet can.

  • I'm an administrator for a large (1000+ node) NT 4.0 SP3 network. I have direct responsibility for about 120 workstations and 2 servers. I can say from personal experience that NT is very unstable, and it doesn't even have anything to do with the GUI-kernel connection. Our main server, a BDC, file, and print server crashed about every two weeks for a year before we finally gave up and set the damn thing to reboot every night. The thing would be sitting there with its monitor off and users would start complaining that the file server was unavailable. We'd turn on the monitor and sure enough, the machine was locked solid - only a power down would reboot it. And when the machine comes back up all you can do is look at the log files to see about when it crashed. Other than that you get no debugging info at all.

    By contrast, we have a Linux box running our very active intranet web site. We've had it up for 6 months and it has run flawlessly. Interestingly, I set up the Linux web server in the first place because I was tired of IIS failing for no apparent reason (the site had been hosted by IIS).

    Oh, and the Linux box is an old P166 with 16MB RAM, the NT server is a brand new Dell Poweredge 2300 dual PII 350 with 128MB RAM, hardware RAID 5, 3 hot-swappable 9 GB cheetahs. All that reliability hardware wasted on an OS that can't stay up for two weeks!

    It's certainly not for technical reasons that people choose NT over *nix.


    (The links to simpkins.org don't work - I'm moving to a new server.)
