NT vs. Linux - Mindcraft Vindicates Itself 468

Posted by Roblimo
from the can't-win-'em-all dept.
MauricioAP writes "The new benchmarks from Mindcraft on NT vs. Linux aren't good for Linux, especially for the RH guys. Check this out." Reliability and "bang per $$" aren't addressed by this test, or the results might have been quite different. But within the limited parameters, which may or may not accurately reflect real-world conditions, it looks like Mindcraft has been quite fair. (Please read carefully before judging.)
  • They weren't that outrageous the first time either...

    Let's face it, Linux isn't perfect... it's just a lot better on average than the alternatives...

    Denny


  • what a crying shame...

oh well... maybe next year, boys..

    yeehaww...

    i'd like to see a freebsd comparison actually..

    chixdiggit.
    -r.
  • Ok, at the risk of opening myself to flames, I would say that the tests (for once) look valid. Doesn't look good for Linux, and we all know Microsoft is gonna have a blast in the PR dept.

I think it is important to note, though, that with MS you have NT 4 to pay for, then IIS, and all the rest. So Linux certainly is cheaper, and its uptime is better. And Linux has better SMP support. Also add in tech support (assuming you outsource for both NT and Linux) and Linux still kinda-sorta comes out on top.

    IMHO (strapping on the asbestos) they both have their uses. Like it or not, there are some things that NT is simply better for. And ditto for Linux too. I personally believe Linux is a better server platform despite the Mindcraft tests because of uptime and efficiency.

    Just my random unorganized thoughts.
  • These benchmarks were released on the day of Gates' Comdex Keynote? Coincidence?

    Bah, who am I trying to fool?
Let's see: there is a newer version of Red Hat, complete with a new glibc that contains a lot of speedups. There is also a new kernel with a big crapload of SMP improvements. This isn't news. Mindcraft will never come out with benchmarks showing Linux as the victor. These benchmarks don't even say where improvements need to be made, since they don't use the newest software. I'm sure the NT server had the newest service pack.
I couldn't care less how Linux performs compared to NT, or any other OS. The simple fact of the matter is, I get far more work done in Linux than in Windows. I haven't tried any BSDs yet, purely through lack of time, but I would guess that the same rules would apply. For a programmer/sysadmin such as myself, the benefits of a Unix-like environment just beat the pants off NT. There's no way I'd use NT to do what I have to do. I'd go insane within two weeks...
So Windows NT is a faster file server and a marginally faster web server on single-processor machines. I don't think anyone expected the results to be reversed for the second test. But look at it this way: NT's strength is (currently) in raw performance, and that'll take the Free Software community a while to change. But what'll never change about NT is the price, while Linux servers continue to improve their performance. Linux is currently able to take a substantial slice out of NT's customer base, and it's a slice that's getting bigger as Linux-based software develops. What are MS going to do to win back Linux converts, then?

I'm curious whether anyone reading this (okay, biased readership, but still...) has actually decided Linux is not the solution for their business, and that, all in all, paying for NT is a more cost-effective solution, rather than going from NT to Linux.
  • They wrote:
    "Why didn't Red Hat use a Linux 2.3 kernel in Phase 3?

    They told us that it was too unstable for them to be sure of getting it working in the short time we had to run the Open Benchmark
    "

Yeah, right :(, like they don't know 2.3 is a devel kernel....

The guy who wrote the FAQ really could go into politics someday... he's good at bending the truth.
  • by dox (34097)
I would really like to see Mindcraft publish some benchmarks on the *BSD operating systems, more specifically FreeBSD. After all, it's used on such networks as hotmail, yahoo, and cdrom.com

    dox
  • If they had suddenly told the world that Linux was faster, they would have called themselves liars. They had to come to this result (which is the direct result of non-realistic test results, and probably giving Microsoft enough time for foul tricks (knowing about the test, they probably created a special service pack optimizing for this situation)).
    I'm quite sure a Red Hat 6.1 box with an updated kernel (2.3.28 is MUCH better with SMP and also has some very nice TCP/IP improvements over 2.2.x) would do quite a lot better. (Anyone still using 5.2 in real life, by the way?)

Do you think that the choice of client might make any difference?
BTW, it seems that one-processor performance is about the same for both Linux and NT, right?
  • Where I work we have 4 kinds of computers
    hpux
    nt
    solaris
    linux

    hpux sucks. Those computers sit there and suck and crash all day. People are begging others to take them

    nt. these make good workstations and ok file servers. they don't suck as bad as the hpux's and can actually do some work

    solaris. these are the big babies. they sit there and work all day. these guys do the big shit. webservers firewalls and other bigshit

linux. these sit around similar to the solaris and do everything, whether they are workstations or dhcp or file servers. when you don't need anything too high-end.

    bottom line is that you don't put nt on a 4way smp box. you put solaris on that.

  • by BlakeCoverett (102826) on Monday November 15, 1999 @12:51AM (#1532813)

    Note the date at the top of the referenced page - June 30, 99. (Which explains why they are using old builds of Linux and old NT service packs.)

-Blake (who didn't realize the Linux crowd hadn't already looked at this updated benchmark)

  • by Anonymous Coward
The document is from June 30th and has been discussed on Slashdot before. I really don't understand how this could pass as a Slashdot story, or why people suddenly, 4 months later, feel a need to discuss this as if it were new... Let's kill this thread now, shall we :-)
  • i seem to recall quite a few disparaging remarks directed against mindcraft and others.


perhaps a formal apology is in order. oh, wait -- never mind; this is /.
for 5.2? Yes, we still do. the boxes are dialup servers. they never die; the update is not needed, so why change something that works so well?
  • We're still using 4.2 on a box here...the philosophy of the others is "If it ain't broke...." you get the idea...besides it's our only linux box, everything else is solaris.
  • The one thing I couldn't determine was whether they were striping the network traffic across the cards on the NT based systems.

    Since these machines look network bound this just might make a difference :-)
  • Conclusions

    Mindcraft's credibility and reputation have been vindicated.


While I don't disagree with the results, I think the above conclusion might have been the one they were aiming for.
  • by Anonymous Coward
There is nothing new about these tests. Yes, it shows once again that NT beats Linux at serving static pages over a 100 Mbit connection. Again, dynamic web pages are not included in the test. Do I have to say how unrealistic this is? Most major websites serve pages with active content these days, and very few of them have a 100 Mbit connection.
C't magazine has posted the most realistic comparison so far. I agree that work has to be done on the TCP/IP stack and kernel locks in the Linux kernel, but I am convinced that these issues will be resolved soon. Then Microsoft will go and find another situation in which Linux performs worse than NT and will focus their PR machine on that.
  • by kdart (574) <<moc.liamg> <ta> <trad.htiek>> on Monday November 15, 1999 @01:00AM (#1532825) Homepage
I concede that NT is faster than Linux 2.2.6 at ultra-high server loads. I believe the reason most people don't see much difference between NT and Linux in everyday life (in terms of raw performance) is that the vast majority of system loads that most people see are within the linear region of the graphs, where performance shows as being about equal (left side of the graphs).


    Note, however, that the tested kernel (2.2.6) is one prior to the single-threaded-TCP fix. I would like to see these tests done with a more recent kernel.


Again, we must concede that on unrealistically high loads, in an unrealistic test scenario, a professionally tuned very-high-end PC with 4 CPUs will outperform an older Linux kernel.


    However (sorry Microsoft), that doesn't matter to me. What is also important is reliability, maintainability, cost, support, standards-compliance, and a host of other things. For me, Linux still beats NT when all these factors are considered. Also, if I wanted a very high-end SMP box for web serving, I'd probably choose Solaris anyway. Microsoft, you're barking up the wrong tree. Let's see this test repeated, but compare NT with a Sun UE450 next time.

    --

  • I think you're mistaken.
First, this report is quite good IMO; there are many signs that this test is fairer than the former one.
OTOH, the apologies would have been in order just for the aggressive tone of some people, independent of this test.
All in all, I would say perhaps some people in the "linux community" (I hate this wording) have learned something, but Mindcraft learned as much or more. This fairer and more exact test shows it; in the FAQ they even admit they made configuration mistakes with Linux in the first test.
  • I can't imagine anyone, even a lunatic, would ever think that MS's products are the best. What will 2000 do? Crash. The code base for windows is an ugly mess full of kludges. That is why they continuously push the release dates back. Windows 98 was originally windows 97. Linux on the other hand has a much more easily manageable code base. Any advantage NT has will be short lived at best. I'll bet you own stock in Microsoft, right?
  • by GC (19160) <giles@coochey.net> on Monday November 15, 1999 @01:06AM (#1532830)
    Mindcraft:

    The major performance problems are with the TCP stack, which is single threaded in the 2.2.x Linux kernels, and with large-grained kernel locks that degrade multiprocessor performance. The Linux community is addressing these performance problems and others in their 2.3.x kernel series.


    Well, I'm glad that they recognise that work is being done on this. It is very much the case that Linux does have SMP scalability problems, and I think we all knew that prior to this report.

Regardless, I still stand by the old motto: there are lies, damned lies and statistics. Run Linux, run BSD, run NT, do what you will, but be sure to be happy with what you run. I would like to see how NT fares against Linux & BSD in the real world. How about this test:

    The test will last for one year.
    The machines will be under constant varying Web & File serving Load.
    The NT box will also run a 16-bit application.

    I think we all know what's going to happen over time here...

You can't test NT performance over 15 minutes of file/web-serving; NT may have only leaked 15MB of the available 1GB in that time.

OK, I don't know whether they tested for 15 minutes or not, but I did look and cannot find anything regarding the duration of the tests. Can someone please comment on this?
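To make the leak arithmetic concrete, here is a back-of-the-envelope sketch. The figures are the comment's own hypotheticals (15MB leaked per 15 minutes, 1GB of RAM), not measured values:

```python
# Hypothetical numbers from the comment above: a slow leak that a
# 15-minute benchmark run would never surface.
leak_mb_per_min = 15 / 15        # 15 MB per 15 minutes = 1 MB/min (assumed)
ram_mb = 1024                    # 1 GB of RAM (assumed)
hours_to_exhaust = ram_mb / leak_mb_per_min / 60
print(f"~{hours_to_exhaust:.0f} hours to exhaust RAM")  # ~17 hours
```

A leak that takes most of a day to bring the box down is invisible in a quarter-hour run, which is the point about test duration.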
  • Ok, so we get another look at Linux (which has been developed primarily on single processor machines and started as a test system) compared to NT (developed by a company with effectively infinite hardware resources for its developers) and really expect the SMP scaling to compare?

    Let's remember that the 2.2 kernels were just the start of SMP for Linux where NT was written for SMP from the beginning. Ok, so Linux lost this round? So what?

These tests are good for the industry (both Linux and NT) - they show each where they need to improve. Linux needs improvement. It will improve. Let's run the benchmarks every 6 months and watch the way things develop.

    John Wiltshire
  • Actually, NT's strengths include ease of administration, volume and breadth of applications, and ease of finding developers & admins, in addition to speed.

40% faster is a bit more than marginal, and in fact, if I could save 40% in horsepower on the $10k servers I buy, I can more than pay for that NT license.

Linux will always be great for cheap, departmental, non-mission-critical servers where cost is more important than performance anyway. But if you're already paying $10-$20k for a server, another thousand for the NT license is NOT A BIG DEAL (especially if you get better performance). Price matters, but only in relation to total machine cost.
  • You're right, anybody can make an NT boot, but only a few understand the security model thoroughly. Even most MCSE's don't, so your choice here is:
1. Buy NT, get somebody who understands it and can see through its obscure "happy, happy" colors GUI; pay a truckload for this.
    2. Download a linux dist, get a geek who lives for it, and understands it.

    Fact is: NT's GUI only serves as a way of making you believe you can administer it properly, while you can't.
It wouldn't have been fair to use a 2.3.x kernel - it is accepted, even by the Linux community, that:

    a) 2.3.x kernels are development kernels and may be unstable

    b) 2.3.x kernels are essentially beta software releases and not mainstream releases - the equivalent competitor to 2.3.x is whatever alpha/beta software Microsoft happen to have got for NT5 whenever [if] it surfaces

    As a Linux user I have to concede that it looks as though Mindcraft have made every effort to be fair in this test.

    So the questions are,
    * What can or is being done to [safely] jack up the performance of Linux ?
    * Did the test identify any specific bottlenecks ?
  • Everything looks fine to me too. All I can say is that I am disappointed at Linux's performance. Still won't stop me using it though.

However, I do have a question that maybe someone here could answer: if in testing NT proves to be faster than Linux, why then in the real world does Linux always feel faster? Web sites that run on Linux/Apache always seem more efficient and seem to load faster than ones on NT/IIS, but the tests here show otherwise!

Don't get me wrong - I'm not saying the tests are fixed. If there were Red Hat engineers there doing the tuning (and I'm sure if they weren't really there we would have heard about it), I certainly can't say that the tests were fixed. But the tests certainly don't seem to reflect what I witness in the real world. Maybe, just coincidentally, the NT/IIS servers that I connect to happen to have lower bandwidth than those with Linux...!

    Anyway, here is an example:
    NT/IIS: http://www.dvdexpress.com
    - this is one of the site where they have multiple servers to handle the load. I think they go from www1 to www9, maybe higher. And it always seems slow...

    Linux/Apache: http://slashdot.org
- this, AFAIK, runs on 1 web server (I think the config is 1 web server and 1 oracle server). Correct me if I'm wrong. However, it definitely isn't 9 or 10 webservers. And response time is always good.

    Granted, the back end on both is completely different - DVDExpress runs on SQLServer, and slashdot on Oracle. But there is still a noticeable difference.

    Anyone care to comment on this? Why does the real world never reflect 'scientific' testing?



    T.
Benchmarks were made on a low-latency network and with four network interfaces. In reality, an HTTP server can get load this high only with very high network latency -- clients simply can't be that close to the server by network topology -- backbones cause huge delays. I doubt that with a high-latency network (which can be simulated in a laboratory) and a single gigabit interface instead of four 100Mbit/s ones, the results would be the same.
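One way to quantify that worry is Little's law: the number of requests that must be in flight to sustain a given request rate grows linearly with round-trip time, so a real WAN client population stresses a server very differently from a benchmark LAN. All numbers below are assumptions for illustration (10KB static pages, the benchmark's 4 x 100Mbit/s of aggregate bandwidth):

```python
# Little's law: concurrent_requests = request_rate * round_trip_time.
page_kb     = 10                  # assumed average static page size
target_mbit = 400                 # 4 x 100 Mbit/s, as in the benchmark setup
req_per_sec = target_mbit * 1000 / 8 / page_kb   # rate needed to saturate

for rtt_ms in (0.2, 100):         # benchmark LAN vs. real-world WAN latency
    in_flight = req_per_sec * rtt_ms / 1000
    print(f"RTT {rtt_ms} ms -> ~{in_flight:.0f} concurrent requests")
```

Under these assumptions, saturating the links takes only a handful of in-flight requests on a LAN, but hundreds of simultaneously open connections over a WAN, which exercises quite different server code paths.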

Mindcraft are our friends. They go out and they tell us what is wrong with our product. They tell us how much worse than a rival product it is. Stop whingeing. Make it better....
  • Reminds me of an old story where they set up a contest between a hand shearer (of sheep) against the new-fangled electric shears. The manual champion won and everyone thought the electric shears was going to be tossed .... then the electric shearer got another few pounds of wool off the hand-shorn sheep, ie less wastage == more profits.

While people might not think a few percentage points make a lot of difference, it should be pointed out that in high-volume businesses, companies like Wal-Mart sustain a long-term competitive advantage over their peers by adopting a pervasive mindset of controlling their costs. While Linux may not be a gas-guzzling speed champion on pre-selected race-grounds, the lack of restrictive licenses (operational cost less dependent on #connections) and the ability to control your own environment (ie upgrade at your own pace) offer value in other ways. These savings would add up when hosting very large web farms.

    Different horses for different courses.

    LL
Tests for 1-processor systems with NT clients are conspicuously missing. This, coincidentally, is where Linux should beat NT.

    Mindcraft hasn't made up their results, but there's no need to when you select your benchmarks carefully.
  • In light of the Mindcraft results maybe this story should have had a different logo

    A skinned Tux being eaten by Bill Gates perhaps ?
    :-)

    P.S. Do not regard this as Flamebait - I use Linux ! Honest !

Since Microsoft has been trying to discourage companies from switching over to Unix, maybe someone should run a benchmark on NT against Solaris, or any other commercial, tested-and-trusted Unix.

    Results might be interesting, if it doesn't get skewed/screwed again.
We've got a K6-233 set up with NT Server and a 386-40 set up with Linux as an internet gateway. I chose NT for the server mainly because Linux doesn't have any support for NetBEUI. With NT I can turn off NetBIOS over TCP/IP and use NetBEUI for the local file sharing. This way, if someone were ever to actually break into our gateway from the outside, they wouldn't be able to get to any of our business information. Also, one of the programs we use is DOS-based and has a server component. I don't have the time to play around with trying to get dosemu to run stably with networking support just for the thrill of running Linux on the server. NT works and does what we need it to without any real drawbacks.

    Unlike many of my friends, I'm not a Linux or open source zealot. I prefer Linux over any other operating system, but I'm not foolish enough to think it's the best in every way and in every situation. If it were, NT wouldn't even be on the map. I can't comment on whether NT is faster on our fileserver than Linux would be, but I can say with certainty that Linux is faster on the 386-40 than NT would be, especially since it's running off 8 megs of RAM. Not exactly a supercomputer, but it does its job of IP masquerading fast enough to deal with our 56k connection in real time, which is all you can ask of any computer doing that job.

    I like NT to tell the truth. As a simple fileserver it does a good job in my experience. But it has flaws in its stability and security. I wouldn't use it someplace where you had to really rely on it. I'd use something else, maybe linux, maybe not. I would of course depend on the task and which tool was the best solution.
  • So you're saying that RH6.1 is a factor of two faster than RH6.0?

    Come on.

    Sure there is currently a better version of about everything. But, given the time scale upon which linux evolves, there will ALWAYS be new versions of components by the time a place like Mindcraft finishes writing up a detailed white paper about what they did.

    If those updates rectify the factor of two the benchmarks see, then bitch for a rematch. Until then, accept reality, and work on making the next version better.
  • Has anyone else considered setting up their own machine to try out some different benchmarks? Admittedly, fileserving and webserving are the two main apps you want a server to perform, but what's the reason we all use Linux at home and NT at work?

    Surely there are some more real world comparisons we can make to push the point that Linux is a more usable, sturdy and fun platform to work with.

    The web is a great leveller, so why don't we start putting up our own "official" (!) pages detailing where linux beats NT hands down. Then we can really put the willies up MS.
I was always under the assumption that /. used MySQL, not Oracle... but I could be wrong... Also, slashdot (after the Andover.net buy) runs with 4 servers now, I think: one SQL server, with the traffic load-balanced between three linux/apache boxes. Now, I am not a network engineer by any means, but I am pretty sure that I have read this..
  • by arivanov (12034) on Monday November 15, 1999 @01:26AM (#1532850) Homepage
    Why the hell are we discussing a benchmark ran on a hardware config designed especially for NT:

1. MindCraft once again used a quad ether (but skipped announcing it) and the infamous "EtherStripping" break-your-switch stuff.
    2. Mindcraft once again used the Dell machine, which has a RAID that runs better under NT than under Linux

    The benchmark is faulty by design:

    1. If you want these speeds you use a Gig Ether on the server in full duplex mode not a questionable technique that actually breaks lots of real networks.
2. If you want real OS benchmarking, you use an architecture that is equivalently supported by both OSes.

    Overall:

I have tested Linux with GigE (it can almost pull physical speed on machines much cheaper than the Mindcraft Dell monster) and NT has been officially tested by most GigE manufacturers. The results used to be available at the Packet Engines site, but it looks like they were dropped when the site moved to Alcatel. Anybody have a link, please? I would not quote them, so nobody blames me for flamebaiting...

    It will be rather interesting if someone finally does this benchmark on a sanely designed network (no etherstripping BS) and with proper hardware.

    To conclude I expected better from RH than accepting a doomed bench (on hardware and in a network setup where they cannot win).
I find this tit-for-tat issue funny. DOES IT REALLY MATTER? For example, while they mention that the Zeus server has the same problem, at least there is a choice on Linux.

    NT is a good OS. And yes IIS is a good Web Server. But there is NO CHOICE!!! And that to me is a bigger problem.

    You see problems can be fixed in both Linux and in Apache. But what happens if there is a problem in NT and IIS? Can I switch Web Servers? Not easily. Can I fix IIS? Not at all.
NT performing better than Linux - I guess I can accept the results. But in what sense are the benchmarks a Solaris-level comparison? Doesn't Microsoft try to compare Windows NT and Linux on points where usually Solaris is the only winner? OK, NT performs better than Linux, but does everybody need that performance? Just some little doubts I have.
  • "And Linux has better SMP support. Also add in tech support (assuming you outsource for both NT and Linux) and Linux still kinda-sorta comes out on top.
    "

Um. What makes you say that Linux has better SMP support? Most tests seem to indicate that NT makes more efficient use of multiple CPUs. And what makes you think there is better tech support from 3rd parties for Linux? I'm not saying there isn't, but I've certainly found the tech support from large resellers for NT to be good.

    I'm not saying that you are wrong, but these are both pretty contentious claims without any evidence. Linux's superior stability is probably widely enough accepted.
  • The speed of Linux development renders most of these sort of contests invalid before they hit the street in any case. Anyone with a modicum of coding knowledge could tune the Linux TCP stack and SMP threading to smoke NT in those trials (and much has been done since 5.2 toward that end). You just cannot say that about NT (and not be a flaming liar, anyway).
Obviously Microsoft is the one who is behind all this. They have their programmers and developers sit down and come up with a benchmark spec that they believe they can tweak NT into performing well on. At the same time they try to find areas where Linux is not as strong. After a few months of coding we get service packs for NT, and Mindcraft conducts its "independent" study. Of course NT comes out ahead - big surprise.

    It's the same kind of thing that Apple does when it compares the toys they sell with PC's. What I really like are the photoshop benchmarks where the Mac is so far ahead. What they don't ever bother to tell you is that apple long ago made changes to their OS and put in system calls specifically for Adobe Photoshop. Then there are the straight CPU benchmarks where they take the few instructions from the powerpc that are significantly faster than an equivalent on x86 and say that the processor is faster overall by this factor. Some powerpc instructions are slower, but then they never tell you that. It's the same thing here. Rather than get bent out of shape we should spend our time doing an honest analysis of both platforms and outcoding the sons of bitches.
  • According to Mindcraft's tests, the bottleneck was the Kernel. In particular, the TCP/IP stack is single threaded.

  • Umm, way before I ever used IIS on NT, I had the pleasure of running Netscape Enterprise Server on NT.

    There are like 20 web servers for NT.
  • ". Download a linux dist, get a geek who lives for it, and understands it. "

Sadly, this is _not_ a recipe for success. A geek who knows all the command flags for ls by heart and prides himself on being up to date with _all_ the latest bind vulnerabilities is not your ideal sysadmin. You need someone who sees the wood as well as the trees, an administrator who can think strategically as well as perform competent operational tasks.

    In this light, you realise that a good sysadmin is not someone who understands an OS thoroughly. It is someone who understands the aims of your IT systems thoroughly, and knows how to implement those aims properly. There's a world of difference.

    That said, yes, I think the TCO of *nix is generally lower once you are talking about large installations and Enterprises. For smaller organisations I'm not at all sure that is true. A 20 person company with a need for a file and print server is perfectly suited to an NT box.
  • This sounds a lot like all the things people said the last time: "Wait until feature X is ready in Linux".

    Face it: The current version of Linux is tested against the current version of NT. Reading the article, it seems that enough people were there to tune everything on the Linux side, so just believe it: NT is better in some things than Linux. And surely, we can think of other circumstances where Linux or another OS is better than NT.

    Linux is not the answer to every question.
Linux is updated on a daily basis in many ways, so using a newer distribution with a new glibc and a newer Apache, Samba, kernel, etc. could be a little more realistic. Granted, NT might still be faster, but who's making more improvements faster?

    Show how fast linux is catching up by using updated software and do another test today!
  • by Weerdo (24976) on Monday November 15, 1999 @02:13AM (#1532883)
    Reading is difficult:
    Note, however, that the tested kernel (2.2.6) is one prior to the single-threaded-TCP fix. I would like to see these tests done with a more recent kernel.

    Again, we must concede that on unrealistically high loads, in an unrealistic test scenario, a professionally tuned very-high-end PC with 4 CPU will outperform an older Linux kernel.

    (taken from the mindcraft report:)
    Phase 3
    Phase 3 used both a one- and four-processor configuration of the same Dell server. Mindcraft used the same version of Windows NT Server 4.0 as in Phases 1 and 2. Red Hat chose to use Red Hat Linux 6.0 upgraded to the 2.2.10 kernel ("Linux" in Phase 3). See Phase 3 of this white paper for the other software and hardware changes that Red Hat made.

    File-Server Tests
    Figure 3 shows the file-server performance we measured and the scaling between one- and four-processor configurations. Linux file-server performance on a four-processor system increases by 43% over a one-processor system. Windows NT Server, on the other hand, improves performance on a four-processor system by 105% over a one-processor system.

    . Also, if I wanted a very high-end SMP box for web serving, I'd probably choose Solaris anyway. Microsoft, you're barking up the wrong tree. Let's see this test repeated, but compare NT with a Sun UE450 next time.
When in danger, a cat makes strange moves..
    The test was about Linux and NT, NOT Sun (a totally different cup of tea) and NT. Microsoft isn't aiming at that market (yet), thus testing Sun (Solaris) vs. NT is way out of touch.
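A rough way to read the scaling figures quoted above (Linux +43%, NT +105% going from one to four processors) is to invert Amdahl's law and ask what effectively-serial fraction of the workload would produce them. This is a sketch, not a measurement: real scaling losses also come from bus contention, cache effects, and so on.

```python
# Invert Amdahl's law to estimate what fraction of the workload would
# have to be effectively serial to produce the measured 1 -> 4 CPU scaling.
# speedup = 1 / (s + (1 - s) / n)  =>  s = (n / speedup - 1) / (n - 1)

def serial_fraction(speedup: float, n: int) -> float:
    return (n / speedup - 1) / (n - 1)

# Scaling figures quoted from the Mindcraft report above:
linux_s = serial_fraction(1.43, 4)   # Linux: +43%  => speedup 1.43
nt_s    = serial_fraction(2.05, 4)   # NT:    +105% => speedup 2.05

print(f"implied serial fraction: Linux ~{linux_s:.0%}, NT ~{nt_s:.0%}")
# -> implied serial fraction: Linux ~60%, NT ~32%
```

By this crude estimate, roughly 60% of the Linux file-serving run behaves as if serialized, versus roughly 32% for NT, which is at least consistent with the single-threaded TCP stack and coarse-grained kernel locks discussed elsewhere in the thread.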
  • by moonboy (2512) on Monday November 15, 1999 @02:16AM (#1532885) Homepage
    Some people may flame me saying, "You don't care anymore because Linux is losing." Wrong answer.
    Here's why: I LIKE LINUX
I genuinely like it. Yeah, so these benchmarks say it is not as fast or as "good." What is good anyway? Good to me is: reliability, configurability, usability, extensibility, scalability, inexpensiveness, fun, etc. Linux shines in all of these areas and more. Yes, that's right, I said Linux is FUN. Linux is plainly more fun to use. I don't care what some benchmarks say. We all know that benchmarks are unrealistic. They don't test "real world" conditions and situations. I think we should use their criticisms (only if valid, of course) to help Linux be a better operating system, not to beat some other OS.

    ----------------

    "Great spirits have always encountered violent opposition from mediocre minds." - Albert Einstein
  • by Dacta (24628) on Monday November 15, 1999 @02:18AM (#1532887)

I'm sorry, but that post is just so wrong it is laughable. If you had found some site that ran NT and was faster than Slashdot (not hard to do) you would be flamed out of existence.

    Where to start?

Slashdot does use multiple webservers - it caches static pages.

    Slashdot does not use Oracle; it uses MySQL. Big difference in websites.

    "The response time is always good" ????? Not from where I am (Australia) it isn't. Subjectivly, dvdexpress seemed faster to me. Anyway, what does that prove? You are closer to Slashdot than dvdexpress? Slashdot has more bandwidth?

    Dvd is graphic intensive, and takes longer to render in Netscape, too.

    You can't compare two totally dissimilar sites, on totally different hardware.

    I bet I can find apache sites that seem slower than NT/IIS sites. EG: www.Apache.org always seems very slow to me. What does that prove? NOTHING!!!!

    Look, I want Linux to be faster than NT as much as anyone, but we can't even be seen trying to spread FUD like MS does. Imagine if MS stuck that up at Comdex as by "a Linux Hacker, posting on the Linux nerd site slashdot.org".

    People, please think for a moment before you post, and before you moderate comments like that up. Ask yourself this:

    If this was posted on www.microsoft.com, and it was an argument for NT rather than Linux, would we have trouble disputing it?

    Readers of Slashdot don't need to see arguments for Linux like this; we need to see the opposing view, so we can learn what we need to improve.

    Damn.. I just know this will kill my karma, but that is crazy!

    --Donate food by clicking: www.thehungersite.com [thehungersite.com]

  • by bueller (100729) on Monday November 15, 1999 @02:26AM (#1532889) Homepage
    Run Chicken Little the sky IS falling!

    I am surprised so many people haven't realised there is no such thing as a non-biased benchmark, and that, shock, horror, Linux isn't perfect (yet).

    Benchmarks must reduce the scope of tests and make assumptions, which are not always true, so as to be possible. They also need to be done at a point in time, and not wait 'for the next version, which is so much better'. Doug Ledford [redhat.com] of RedHat was there for the tests and has his spin [redhat.com] on them, where he talks about the difficulty of getting a meaningful benchmark. The Transaction Processing Council [tpc.org] is continually revising its benchmarks to remain meaningful. The big guns - IBM, Sun, HP, Oracle, Sybase, Compaq and Microsoft - all use different TPC benchmarks to try to gain ammunition for sales staff. At some point Linux people will need to do the same.

    The Mindcraft benchmarks look to be as fair as any I've seen. The reaction to the benchmarks is far more informative than the results themselves.

    Linux can still be improved; it isn't as strong as other operating systems in some areas. The fact that development is occurring proves this point.

    If you don't like the results, find a benchmark and configuration that gets the results you do like! Where there is a real deficiency lend a hand and be part of the solution.

  • by jhei (104839) on Monday November 15, 1999 @02:29AM (#1532892)
    It seems that those benchmarks have been done properly and that in those two benchmarks Windows NT performs significantly better than Linux. It would be useful to think about reasons for this.

    First of all, as far as I know almost every major company has a habit of cheating in benchmark tests. For example, video card drivers detect that a test is being run and enable code that skips most of the drawing primitives. This is easy to do in code that is not open source, since it would take a major effort to reverse engineer the device drivers. It might be possible that NT has a feature that detects different kinds of tests and optimizes its performance accordingly (if you are, for example, testing throughput, you would trade latency for throughput). While this is not cheating in the usual sense, I think that this would be quite useless in normal mixed-load situations.

    The second thing is that Microsoft is quite a large company. If it wants to outperform Linux then all it needs to do is install Linux, tune it to its limits and then analyze its performance and find out the weak points. Then it does the same thing with NT. After that it just puts a hundred well-paid workers on making NT faster than Linux. This is made easier by the fact that if Linux works faster than NT they can just look at the sources and figure out what Linux is doing better than NT. Also, it is possible that Microsoft would look at the weak points in Linux and would publish only those benchmarks where Linux performs significantly worse than NT. Anybody who runs those same benchmarks would get similar results and the original benchmarks would be considered objective.

    Third thing is that those benchmarks might only test peak performance - performance under high load. It is also possible that the structure of the load is untypical. This is true with most benchmarks; they rarely test systems under realistic conditions. Since I have not looked at those benchmark programs I do not know if this is the case. Anyway, peak performance is important if you want to identify bottlenecks and see what are the limits of programs. Peak performance does not tell how programs work under normal every day use.

    The last thing is that I think those benchmarks are already outdated. What I would be more interested in is the performance of a cutting-edge Linux system against a similar NT system.

    As a conclusion I again state that I think those benchmarks look valid. It seems that the Linux kernel (and possibly also Apache) still has performance bottlenecks. I'm not sure if those have been fixed since this benchmark. However, I think that this benchmark should be thought of as a challenge to improve the performance of Linux. I actually think that Linux did quite well; the performance differences are not THAT large when you take into account my comments above.

  • by Paul Crowley (837) on Monday November 15, 1999 @02:31AM (#1532895) Homepage Journal
    Though the PCWeek tests favour NT (for reasons well covered elsewhere), they do not do so by the ludicrous margin the original tests gave, and the Linux community's cries of "foul" were entirely just and accurate: these tests show that Mindcraft did indeed load the die.

    Furthermore, Weiner has never managed to justify the claim that he had asked for help in "several Linux discussion groups" when setting up the first test: searches show that he only posted *one* article, and that was met with requests for clarification that were never forthcoming. So as it stands we're quite justified in believing that Weiner is a flat-out liar on top of his other sins. That's not vindication.
    --
  • by pb (1020) on Monday November 15, 1999 @02:32AM (#1532896)
    I saw at least one person mention this, but I'll say it again:

    The real problem with the Mindcraft benchmark has nothing to do with most of what they cited: the graphs are painfully clear that the limited resource is network bandwidth. That's why it's so funny when they say "We'd never test a server that's resource-limited. What's the point?" That's what I'd ask them now.

    Note that they test with one and with four processors, but do not test with one or two ethernet cards. In fact, they never mention the complete hardware configuration of the machine, so we just have to assume they used the same f*cked-up four ethernet card configuration.

    There were actually benchmarks put out by c't explaining this [heise.de], with graphs, and real tasks. Linux performance generally did much better until that second ethernet card was added. I'll believe them, that it's a software limitation in the TCP stack, but I'll also believe that they were exploiting a known problem in the Linux kernel--that only happens under these strange conditions--to their ends. Until they show some benchmarks with the ethernet cards mentioned as a factor.

    NT vs. Linux Server Benchmarks [kegel.com]: informative and interesting, but most of all truthful, with a link to the c't article I mentioned, and many other more realistic benchmarks.
    ---
    pb Reply rather than vaguely moderate me.

  • They looked valid to me before phase three. I love Linux, but it has its limitations. These need to be worked on. It's funny, because when users are actually put under the gun, this is how they react:

    Linux User - Linux is MUCH better than NT in terms of performance, uptime, and general reliability.
    NT User - Have you ever tried a test of the two?
    Linux - "Sure, I do it all the time."
    NT - "Ok, let's try, here we go *beatbeatbeat*. Oh look, your conclusions are wrong."
    Linux - "The tests were rigged! They cheated!! MOMMY!!"
    NT - Ok, let's try it again. You come watch, and tune the Linux box to your heart's content.
    Linux - Ok, we'll whip your butt in a fair fight, no problem.
    NT - Ok, here we go. *BEATBEATBEAT*. Oh look, the numbers look exactly the same.
    Linux - Well, we've fixed many of the problems since the tests. We'd whip your butt in a rematch sometime.
    NT - Ok, how about right now? Let's go.
    Linux - Err, no thanks. But next time we run the tests on 386/25s!!
    NT - That's stupid.
  • Actually, they do mention the full hardware configuration at the bottom of the phase 1-2 [mindcraft.com] part, and yes, it's 4 NICs again...
  • by evilpenguin (18720) on Monday November 15, 1999 @02:56AM (#1532908)
    a good sysadmin is not someone who understands an OS thoroughly.

    I can't believe anyone would make this assertion. While I agree completely with the other half of your statement, that a good sysadmin must understand the aims of your IT systems and know how to implement them properly, I would say that thoroughly understanding an OS is a vital prerequisite for that second clause.

    Anyone who has worked with an operating system, a programming language, heck, a make and model of car, knows that there are essentially four levels of competence. First, complete incompetence. You have no knowledge, you try things and you screw things up. Second, basic competence. You have some knowledge. You successfully carry out basic tasks. You use the system without damage, but there are vast areas about which you know nothing. Third, competence. At this level you know your way around. You know how things work. You know what all the parts do. Fourth, high competence (guru). You not only know how things work, you know why. You develop a holistic sense of the system/language/automobile. You can imagine how things work. You can be presented with an unfamiliar situation and you can figure out what to do about it.

    Most people with whom I have worked in IT (and I've been working professionally as either a system admin or a programmer/analyst for over 12 years now) are at what I would call level 3, and a fair number are at level 4. Thorough knowledge of a system is required to be at level 4.

    The notion that one does not require deep knowledge of systems to be a systems administrator is tenable only in a system where nothing ever happens that is outside the training materials. No such system exists.

    If you are arguing that deep knowledge of a system is not required to be a sysadmin, then I sure don't want to work at your company. If, OTOH, you are arguing that deep knowledge of a system is not in itself sufficient to be a good sysadmin, well, then I've been wasting your time and I apologize, because I agree with that...
  • by kuro5hin (8501)
    Microsoft isn't aiming at that market (yet) thus testing Sun (solaris) vs. NT is way out of touch.

    The point the first poster was making, I think, is that by using a 4-way box, and crowing about their advantage on ludicrously high server loads, MS is aiming at that market. That is, once you step into the realm of 4 processor machines, testing NT vs. Linux is just silly, because who in their right mind would use either one for such hardware? It's like saying that my Cessna is a better stealth fighter than your Piper Cub, and ignoring the F-111 because "we're not competing in that market."

    The fact remains that this test only proves that for applications where you should have been using a high-end OS (and, apparently, where stability doesn't matter), NT can pump more bits down the pipe for as long as it manages to remain up. As for "real world scenario," this test sure ain't. I'm a little disappointed that RedHat actually sent people to compete in this test, since we all knew what it would show anyway. They should've just pointed out that the test was silly and they have more important things to do. Corporate pride and all that, I guess...

    ----
    Morning gray ignites a twisted mass of colors shapes and sounds

  • What really matters is what an OS can do for the user -- specifically in this instance, where the OS is a server. IS people need to be comfortable with their environment and if it works, use it. Especially when you have to have compatibility.

    I've heard a lot of you say "both OSs have their strengths and weaknesses. Both can do a job and do it well." This is the paramount truth. BOTH do have their strengths and weaknesses.

    The most interesting thing I've seen so far from Mindcraft's latest tests (and I may be reading the phase 1/2 configuration wrong) is that all the client machines were Windows machines.

    Duh Huh. Do you suppose they might work better together? I certainly do. Let's try this on a homogeneous system (though that's impractical unless they have ported NetBench and WebBench to Linux/Unix).

    The truth is: Yes, Linux needs some work. Yes, Mindcraft used a 2.2.6 kernel. Yes, you would expect that MS would work better with MS. Yes, 95% of the personal computer market is MS. Yes, NT and Linux have their strengths and weaknesses.

    But the bottom line is what do you as an IS person (and I say this because a majority of us Linux users are IS people) want out of a server/OS?

    Bang for the buck? Good argument. Holds some water. Reliability? Better argument. Holds more water. Potential? Best argument of all. A wellspring.

    If you must use NT for a purpose, use NT. It's not all that bad. It works well in its environment and by nature it's going to work best with other MS products.

    Otherwise, get Linux. Use Linux. Make Linux better. Forget benchmarks. Forget this silly NT vs. Linux crap. It's all a PR ploy that MS has cooked up to hit us Linux users (AND ALL OTHER OPEN SOURCE OSs) below the belt. They can't beat us in the marketplace, so they have to get us from somewhere. And that somewhere is our confidence.
  • by Doctor Bob (83606) on Monday November 15, 1999 @04:07AM (#1532954)
    Actually, I've been waiting for the argument that seems to apply most directly: that benchmarks don't really represent the Real World(TM). For example, given the oft-cited media demographics of the "poor little pre-IPOs who just don't have enough money to buy M$'s latest work-in-progress," you'd think there would be some tracking of multi-function boxes.

    For example, if you're using Linux as a major player at work, do you have a _pure_ web server box and a _pure_ file server box? Or do you have one or more boxes that really do lots of different stuff all the time?

    This is a totally different measurement and gets directly to the heart of the "Bang for the Buck" argument: if I have $K dollars to spend on _one_ box to do everything, would I rather spend it all on the best hardware I can get (with a cheap, powerful, easy-to-use OS)? Or am I going to have to spread it around on hardware plus an expensive OS?

    At this point, you're talking real "business case" numbers that you can talk to your boss / capital approver and say "See, the path I recommend makes more sense in terms of total functionality, operator training, total cost of ownership, blah blah blah". (Anybody who's ever had to go through that drill knows what to put there - I just justified a $60K IRIX box for my desk... 8-).

    So, the benchmarks that I'd really like to see are things like the following:

    1. Given $5K, $10K or $20K to spend on a _single_ general purpose server machine, inclusive of OS and any serving technology, what's the best of class? Linux should win here by definition: cheap OS = big box, expensive OS = littler box.

    2. Given the test platforms from #1 above, run through the Mindcrafty benchmarks again. At this point you're comparing dollars to dollars, so the results might be interesting this time. Not necessarily useful, though (steady-state measurements don't indicate real life - does the Slashdot effect slowly ramp up or hit all at once... ;-).

    3. Now show me something interesting. E.g., X web clients connected, file server traffic jumps up, then web clients drop off. You know, scientific method and all that?

    At this point, I'd say the Mindcraft data is still far from being useful. Simply put, there's not enough of it.
  • by Greyfox (87712)
    This is rather old news. Blah blah yes NT's faster at the moment when using multiple network cards in a single system. The kernel developers know that NT's threaded TCP/IP stack is making the difference here and they're implementing that in the next kernel. Any speed advantages NT has at the moment will be gone well before Windows 2000 comes out.
  • As many have pointed out, a large part of why Mindcraft's results contradict the anecdotal evidence is that their benchmarks are based on static pages. However, I think that there is a more important factor contributing to the anecdotal evidence that is often overlooked: the total operating environment.

    What I mean by this is really pretty simple. In a Windows Operating Environment, people are encouraged to use very different toolsets than they are in a Linux operating environment. Instead of perl, apache, and MySQL, they tend to use ASP, IIS, and Access/Jet through ODBC.

    ODBC alone is a performance and stability nightmare, especially if it is not set up perfectly. ASP is a piece of junk. IIS is (I guess) okay. In Linux, perl is (arguably) pretty good, so long as you use mod_perl; MySQL is the fastest thing under the sun for those tasks it can handle; and apache is (like IIS) pretty good, if not designed for speed.

    We aren't comparing operating systems, we are comparing operating environments (or at least we should be). And testing only static pages totally discounts the effects of the operating environment.

    Another thing to look at is "culture". Linux users tend to like carefully crafted point solutions -- that's why we're Linux users. NT users just want to get it done, stability be damned -- that's why they're NT users. I think that this difference has a lot to do with Linux's reputation for speed and stability. Even a novice sysadmin, exposed to the Linux community, starts to soak up the ethos of the community. Even more important, the support (including full source code) is in place to allow him to do as well as he would like. An NT user soaks up the "get it done, screw stability, hardware is cheap" ethos of the NT community. And the resources to do it right are often /not/ easily available.

    Anyway, the point is that we are /not/ just comparing kernels. If we were, then we'd all probably be running some custom TCP stack on embedded hardware.

  • by Hard_Code (49548)
    I am just wondering, with all the great things I've heard (and know) about Linux... why would this happen? Is it simply a matter of SMP support being immature? Is asynchronous IO not yet fully implemented? I'd have to say, when MS ignorantly claims that Linux is based on "decades old technology", these two weaknesses must be a thorn in our side. Sure, most linuxvolk probably never encounter this ceiling... but is it being worked on by somebody somewhere? We can't just say every time somebody does a benchmark, "Oh, well, you forgot to use the latest kernel that just came out an hour ago, along with X number of requisite patches".

    (I'm not trying to be a troll...just wondering)
  • Firstly, PerlScript-ASP will flunk any performance test you give it right now because it's practically at the fork() level of functionality (new perl interpreter per request). Trust me on this one.

    For some more interesting breakdowns of different scripting model overheads, see Hello World Benchmarks [chamas.com]. The aim of those tests is not real world applications, but the overhead of starting the interpreter. Basically, mod_perl, PHP4 and ASP/VBScript all come out at around the same level performance-wise. However, it's worth bearing in mind that mod_perl is a _lot_ more than just a CGI scripting API - it's access to the entire Apache server architecture - something the other engines just don't give you.
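
    The interpreter-startup overhead those benchmarks measure is easy to see for yourself. Here's a minimal, hypothetical sketch in Python (standing in for any scripting engine - the names and counts are illustrative, not from any benchmark suite): one loop spawns a fresh interpreter process per "request", CGI-style, while the other calls a handler in an already-resident process, mod_perl-style.

```python
import subprocess
import sys
import time

REQUESTS = 5

# CGI-style: launch a brand-new interpreter for every "request".
start = time.perf_counter()
for _ in range(REQUESTS):
    subprocess.run([sys.executable, "-c", "print('hello world')"],
                   capture_output=True, check=True)
cgi_style = time.perf_counter() - start

# mod_perl-style: the interpreter is already resident; just call the handler.
def handler():
    return "hello world"

start = time.perf_counter()
for _ in range(REQUESTS):
    handler()
resident = time.perf_counter() - start

print(f"fresh interpreter per request: {cgi_style:.3f}s")
print(f"resident interpreter:          {resident:.6f}s")
```

    The per-request interpreter launch dominates by orders of magnitude, which is exactly the cost mod_perl (and PHP4, and in-process ASP) avoids.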
  • So what if NT can throw out static pages faster? I'm sticking with UNIXish systems because when I use them, I know I've got more power at my fingertips than NT's pop-up happy dungeon could ever give me. I can change the code on most systems... I can craft my own tools by combining existing ones... and I can find the answers to my questions on the Web rather than having to pay "per-incident" fees.

    On top of it all, if anything does go wrong, 99.44% of the time I can fix it without standing in front of the server. NT doesn't have that capability. If a service needs to be upgraded I can do it without affecting anything else on the box.

    UNIX is power because you can insert yourself at virtually any step along the way of any process. You want to do anything in NT, you must do it with whatever APIs MS thinks you need, or pay them to make more.

    I'll stick with the power, thank you.

  • Or else you'd never get more than a 50% improvement on the same hardware. I've never heard of an NT box that served pages fast in the real world.
  • by TummyX (84871) on Monday November 15, 1999 @05:19AM (#1533003)
    NT was designed to do this sort of thing - keyword designed.
    Linus didn't design Linux for the kind of work which NT is excelling at in these benchmarks.
    When Dave Cutler sat down and designed NT, these were the kinds of things they were trying to do: fine-grained kernel locks, high performance and scalability. The marketplace has unfortunately seen many of the good things about NT get forgotten (portability for example), but NT still stands there with the ability to scale MUCH MUCH better than Linux can at present.
    Yes, you may feel like going out and burning a few MS CDs or whatever, but at the end of the day it's true. Improvements of course are being made to Linux, and Linux may catch up.
    However, I'm actually a bit worried about the fundamental design of Linux itself - I'm not saying it's totally 30 year old technology - far from it - but having used Linux and NT for quite a number of years now, to me, NT seems like it was better designed and had good goals.
    I won't bother to argue about whether they were met or not here tho :P.

    Some fiddly things about Linux/Unix I don't like are:
    -Threading. According to IBM, Linux native threads are mapped to processes!?!? which makes their JDK rather slow compared to NT's.
    -Mutexes/Semaphores/CriticalSections etc - why doesn't Linux use them? I mean, for god's sake, what the hell are Linux applications writing *.pid files around for? And what about /tmp/X11-unix lock files? ERK.
    -Componentisation - it's happening slowly but only in the past few months (maybe a year). I'm still waiting to see the Unix APIs wrapped up.
    -Registry. I've said it before and I'll say it again :P, the registry is a good thing. Yes when win95 came out there were registry problems but I haven't had any problems since 1996. It's a great idea, it's like having a database to store all your settings.
    Now I don't really care whether the registry is one huge file or several files (user and system) like in NT, but I just want some STANDARD APIs for reading and writing settings - fast APIs.
    Of course the registry has other uses too, like storing COM/CORBA UUIDs etc etc etc.
    Being a database it'll definitely be faster than parsing text files, and even better it's much easier to programmatically add/remove/change settings (trying to parse text files to do that sort of thing sucks).

    Anyway, it seems every time something about Linux comes up, the response is "someone is working on it". When it comes up again the answer is the same, and then everyone ignores the strengths that NT does have, because Linux will have them too - "someone is working on it".
    Just give NT, MS and Dave credit, and move on.
    Linux is not the solution to everything. It's a great free small-to-medium server & emerging desktop OS. Let's leave it at that for the next year or so.
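
    The "database of settings" point above is easy to sketch with nothing exotic. This is a hypothetical illustration in Python using the stdlib dbm module - not any real registry API, and the key names are made up - showing keyed, programmatic get/set of settings with no text-file parsing or rewriting:

```python
import dbm
import os
import tempfile

# Hypothetical per-user settings store: a keyed database instead of
# ad-hoc text files scattered around the filesystem.
settings_path = os.path.join(tempfile.mkdtemp(), "user-settings.db")

# Writing a setting is a single keyed put -- nothing to re-parse and rewrite.
with dbm.open(settings_path, "c") as db:
    db["desktop/background"] = "blue"
    db["net/hostname"] = "example"

# Reading it back is a single keyed get (values come back as bytes).
with dbm.open(settings_path, "r") as db:
    background = db["desktop/background"].decode()

print(background)
```

    Updating one key in place like this is exactly the operation that's painful when every program stores its settings in its own free-form text format.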
  • I have no doubt that the numbers given are correct. The question is are the benchmark's constraints realistic.

    For example, I can benchmark my speed on foot vs. a car. We can do that on the drag strip for 1/4 mile, in New York City at rush hour, or on a test track in a 10 foot race. I will lose badly in the first test, the second is a coin toss, but I will certainly win the third, no matter which car you choose!

    I am also reminded of another 'benchmark', a Chevy Nova ([poorly] modified street racer) vs. a beat up Honda. The Honda won because the Chevy boiled over.

    So, yes, I believe that benchmark, but I don't think it means what MS would like to think it means.

  • The numbers were better when the test was fairer. A still fairer (i.e. more realistic) test would be even further in our favour. That they untied the weight around *one* of our ankles does not make it a fair race.

    Benchmarks run by those without an axe to grind (e.g. c't) consistently come out in Linux's favour. A lot of design work went into finding ones that would point the other way: for example, using four 100Mbit cards rather than one gigabit card. That the actual anti-tweaks for Linux were taken out doesn't mean the anti-Linux design wasn't still there.

    That's why everyone remembers these benchmarks over all the other Linux vs. NT benchmarks. It wasn't because they were particularly well done: they are famous and remarkable because they're the only ones where NT doesn't lose like a dog.
    --
  • There may be people out there who see Linux as nothing but anti-Windows. They want to see Linux beat Windows into the ground. These people really fail to see the strengths of Linux itself.

    It is fun. Yup, that's right. I enjoy running it.

    I don't preach Linux to people because it is the fastest horse in town, but because I enjoy it, and I think like-minded people will too. And almost without fail, they agree. Out of all of the people I've helped transition, I've only lost one of them back to Windows land. You do have to hold their hand in the beginning, especially if they aren't good with *nix in general, but after a short amount of time, they become comfortable and enjoy running it.

    Because it is just fun.
  • by remande (31154)
    That is, once you step into the realm of 4 processor machines, testing NT vs. Linux is just silly, because who in their right mind would use either one for such hardware? It's like saying that my Cessna is a better stealth fighter than your Piper Cub, and ignoring the F-111 because "we're not competing in that market."

    Exqueeze me?

    What would your preferred OS for quad-Intel hardware be? Or, more to the point, if you need quad-Intel performance, what would your preferred OS/hardware platform be (other options include Alpha, SPARC, etc.).

    IMHO, a quad-processor Intel box running Linux is a serious alternative to SPARC for a lot of purposes. My company just bought some dual-CPU Intel/Linux machines (expandable to 4 CPU) as Oracle servers, where we would normally buy a Sun server for the job.

    Go to VA Linux [valinux.com], where we bought these machines. They are turning a profit (what a concept!) selling multiprocessor Linux/Intel boxes, going head to head with Solaris/SPARC.

    Whether Linux qualifies as a "heavy duty OS" depends on what your definition of a "heavy duty OS" is. Remember, to some people, all Unix is light duty, and mainframes are heavy duty. If, however, Unix is considered heavy duty, Linux competes well in that workstation/server range. There are some places where a commercial Unix does better (posts better numbers, has better features, etc.), and other places where Linux can beat a commercial Unix on the same grounds.

    If you are looking at quad Intel machines, you are talking about the five-digit price range--somewhere from $10,000 to $100,000. There are a lot of serious contenders in this space--and most of them are running Unix. There may be some heavier duty contenders, but I don't see anything that is far and away better than Intel/Linux for general purpose computing. Neither is Intel/Linux far and away better than everybody else: Linux is running in that pack with the big Unix dogs.

    Linux is not a Piper Cub. If you want to use the warplane analogy, I would think of it more as an F-18. It isn't a heavily specialized craft (like the Stealth Fighter), and it isn't very heavy duty (like those IBM B-52s out there); it is a small, tight unit that flies with the best of them, and has advantages and disadvantages compared to its class (top-caliber fighter craft).

    And NT? The F-4 Phantom. The gun used to ship separately, and it is living proof that, with a big enough engine, even a rock can fly.

  • It's simply that Linux is good enough if you're not feeding several T1 circuits to a single server. Particularly if you know what "BSOD [blockstackers.com] " means.
  • 1. Given $5K, $10K or $20K to spend on a _single_ general purpose server machine, inclusive of OS and any serving technology, what's the best of class? Linux should win here by definition: cheap OS = big box, expensive OS = littler box.

    Heck, for the sake of fairness, spot them the OS, so that both teams have the same hardware budget.

    One of the problems with the original Mindcraft test was that the hardware was specifically NT-friendly and Linux-hostile. IIRC, they found a RAID controller that had a sucky Linux driver on it. Face it, a good NT box is not always a good Linux box, and vice-versa.

    BTW, the above would still not put Linux on top. If this test is like the original, here are some reasons that NT beat Linux:

    1: Linux hit the disk slower because it had a lame driver.

    2: Linux has some SMP issues.

    3: Linux has a singlethreaded TCP stack, while NT seems to have a multithreaded TCP stack. This gives NT a natural advantage, as it could split its writes across all four ethernet cards.

    4: Besides pitting NT against Linux, the test pits IIS against Apache. IIS simply outperforms Apache on serving static web pages: I bet you could test both servers on equivalent NT boxen and find that out.

    Microsoft found some places where NT outperforms Linux, and exploited them. No, not Mindcraft; Mindcraft was specifically hired by Microsoft to do this test.

    Do not fool yourself into thinking that Linux developers will make Linux so good that Microsoft will not be able to do this again. You can always find some way, some pathological case, that one machine outperforms another, and can thus make a benchmark that shows that platform A outperforms platform B. A properly rigged Commodore 64 can outperform Solaris, NT, and/or Linux on a benchmark, assuming that the C-64 advocate can choose the benchmark. I know that, were I in that position, the benchmark would start from the power off condition (Commodore wipes the floor with all of the above when it comes to boot time).

    In this case, the pathological test case was a set of systems serving static pages very quickly. Let me define "very quickly"--on their slowest case, Linux on one CPU--the machine was pumping enough bytes to clog multiple T-1 lines. Who serves that much static traffic over a LAN? Who serves that much static traffic over a T-3?

    Microsoft and Mindcraft pointed out some technical deficiencies with Linux and Apache, and I thank them for it. They chose to do so with a pathological test case, and thus put the truth to "lies, damned lies, and statistics".

  • In _The Psychology of Computer Programming_(*), Gerald Weinberg wrote a story about a programmer who was flown to Detroit to help debug a program that was in trouble. The programmer worked with the team of programmers who had developed the program, and after several days he concluded that the situation was hopeless.

    On the flight home, he mulled over the last few days and realized the true problem. By the end of the flight, he had outlined the necessary code. He tested it for several days and was about to return to Detroit when he got a telegram saying the project had been cancelled because the program was impossible to write. He headed back to Detroit anyway and convinced the executives that the project could be completed.

    He then had to convince the project's original programmers. They listened to his presentation, and when he'd finished, the creator of the old system asked,

    "And how long does your program take?"

    "That varies, but about ten seconds per input."

    "Aha! But my program takes only one second per input." The veteran leaned back, satisfied that he'd stumped the upstart programmer. The other programmers seemed to agree, but the new programmer wasn't intimidated.

    "Yes, but your program doesn't work. If mine doesn't have to work, I can make it run instantly and take up no memory."

    Moral of the story: correctness first, then speed.
    How fast would NT be if they fixed it? ;)
  • Saying that the TCP/IP stack is "single-threaded" is completely misleading. The TCP/IP stack in Linux isn't threaded at all. It is not a process. It is just a bunch of code operating on data that is shared among processes/threads and interrupt service routines. Any process in the system may enter the TCP/IP code, as may any interrupt on any processor. Mindcraft's cluelessness about how the kernel works takes even more away from what little credibility they have left.

    To say that the Linux stack is single-threaded implies that it has an internal thread which does all the work. Obviously, this is far from the case.
  • *note* Slashdot is obviously overworked; this test was posted 4 months ago, the only difference is that it was reformatted. *sigh* This is OLD news. These benchmarks exploited a flaw in the 2.2 kernel's IP stack. Every time you add another network card you effectively cut the performance in half. This was caused by the fact that 2.2.x locked the "whole" IP stack every time one of the "other" network cards was in use. *duh* This is why Mindcraft used 4 network cards instead of 1 100Mbit network card. If they needed more bandwidth they should have used a Gigabit network adaptor.
  • by SurfsUp (11523) on Monday November 15, 1999 @06:53AM (#1533068)
    These benchmarks were released on the day of Gates' Comdex Keynote? Coincidence?

    Everything we learned from the antitrust findings of fact would suggest that it's not coincidence. I therefore have to hand it to Microsoft, not for winning yet another questionable benchmark contest, but for maximizing the spin benefits thus obtained. That is true art.

    This is an example of a Microsoft "spin" attack. There will be more to come - the battle has just begun. Let's admit it, Microsoft won this round in the spin battle - mainly because we weren't fighting, and took a punch below the belt. Referee - what referee? OK, so we learned something about the rules of the game.

    Let's do two things:

    1) Make Linux better so we win these high-end SMP contests as well. I don't know about you, but I'm treating myself to a 2-processor machine this Christmas, and I want it to kick ass running Linux. With hundreds of thousands of geeks likely doing the same thing, we can expect 2000 to be the breakthrough year for geek-SMP. We also need better file I/O. Not that the existing I/O isn't damn good, but it has to be the best, right? (Personally, I'm putting my money where my mouth is - as a developer, I can make a difference, and thanks, MS, for getting me steamed enough to jump in.)

    2) Master the PR game. Microsoft can, and will hurt us with PR. Some may say "so what, who cares what Microsoft says, it's how good Linux is that matters" and there's a lot of truth to that. Nonetheless, why don't we cover all the bases? We need an open-source think tank cum swat team whose only purpose is to anticipate, forestall and counter the PR moves that Microsoft makes. We have the collective intelligence to play that game well, and hey, it's a fun game when you win.
  • P.S. Run the exact same tests with 1 NIC card, and/or wait for 2.4 to be released, and you will see how f*cked these tests were. (Same point as above: these benchmarks exploited the fact that 2.2.x locked the *whole* IP stack every time one of the *other* network cards was in use.)
  • Again, we must concede that on unrealistically high loads, in an unrealistic test scenario, a professionally tuned very-high-end PC with 4 CPUs will outperform an older Linux kernel.

    The first problem with your statement is the assumption that no one has servers running loads like this on a regular basis. I'm sure that Amazon.com and Yahoo would argue that for their purposes, these loads are unrealistically low.

    The second problem with this statement: since when is a 4-processor server very-high-end? For many things, this type of box would be plenty, but for other tasks it would hardly even be considered as a viable option. Please see last week's article on supercomputers for data on very-high-end computers.

    And finally, the NT box wasn't even tuned well. My God, they put the swap file on the same physical drive and the same partition as the OS. When using NT, this is a sure-fire way to make sure that your system is NOT performing optimally. I'm impressed with how much leeway they allowed on this one, and you still won't pay attention to the numbers. I'm not bashing Linux or advocating Microsoft, I'm just looking at the data they've presented. BTW, Sun's SPARC chips aren't really any faster than any other processor on the market. People buy Sun stuff because they want Solaris and they want the applications that people write for Solaris and because (like Apple) the person writing the OS is likely to have a more well-oiled machine if he is also married to the hardware. But trust me, Sun gear is not any more the shit than gear from any other mainstream hardware manufacturer.
  • You know... NT isn't my favorite OS by any means, but I've got two NT servers in my comms room that have NOT been rebooted in one year for any reason other than to update the Service Pack, which I think only happened twice and was completely voluntary. Your server must be rebooted every day? Hmmmm... maybe you're just a terrible sysadmin, or... whatever you do.

    There seems to be a lot more NT bashing than would be expected from people who claim to use Linux exclusively. I don't use Linux that much, so I won't make any comments about it, but I do know that NT does what I need it to do. It may not be very elegant, but only BeOS has any bragging rights in that department anyway.

    I think that most Linux enthusiasts really take pride in knowing as much as there is to know about making sure Linux is stable. Well, NT seems to me to be just as stable, there just seems to be fewer people who actually know how to run it properly. I'm not pushing for Microsoft, I just think we're all getting a little hotter under the collar than is necessary. Personally, I prefer Apple products, and consider myself part of the Mac faithful. And I know and accept all of the shortcomings of the OS, just as Mindcraft has demonstrated some areas that can be improved with Linux. But, just because I know that there are problems with the MacOS doesn't mean I'm going to give it up.
  • It's called WOW (Windows On Windows). All Win16 apps run in the same address space (WOWEXEC) because many Win16 apps were designed to share data between processes. IPC was super-easy on Win16 because there is no boundary between processes. A Win16 app can simply send a pointer address (maybe via a Windows message) to another app. The other Win16 app can simply dereference the pointer. Cheap IPC.
  • To a certain extent, you are correct, however:

    In the case of the NT tests, they used a patch to assign each card to a processor (affinity).

    This option, to my knowledge, was not available to the Linux boxes. I believe that the TCP/IP stack in the Linux kernel is bound to a single processor. I still see this as an SMP problem, not a consequence of their using 4 NICs. It is still correct to say that Linux has serious SMP scalability problems; even Mindcraft admits, however, that these issues are being worked on in the 2.3 development kernel and will hopefully become a standard performance improvement in 2.4.

    We can't blame NT for supporting a function that Linux doesn't, though it would be interesting to have a benchmark based on the following:

    Using $x, construct a system using NT and a system using Linux, and tune the systems to their full ability.

    With this type of benchmark, Linux can better compete, because of the following:

    1. For low values of x, NT will simply not run or cannot be purchased, and hence is disqualified from the test.

    2. The system can utilise a cluster topology which, although slightly more expensive for n systems, would at least provide a linear improvement per CPU. For n=4 and mid-range x, the savings over the cost of Enterprise NT Server still make it viable for Linux to compete with NT.

    It is also worth noting that many Internet websites have an aggregate bandwidth of 512kbps or less and seldom actually reach the loads that were reached in the benchmark tests. It's a case of the limiting factor again.

    Let's get back to the real world: there are many webmasters and sysadmins who are perfectly happy running Linux on their web servers. They know the systems work, and they know that they don't have to monitor them as closely as one does with some other systems on the market.
  • by remande (31154) <remande@bigfoot . c om> on Monday November 15, 1999 @08:03AM (#1533101) Homepage
    The second thing is that Microsoft is quite a large company. If it wants to outperform Linux, then all it needs to do is install Linux, tune it to its limits, and then analyze its performance and find the weak points. Then it does the same thing with NT. After that it just puts a hundred well-paid workers on making NT faster than Linux. This is made easier by the fact that if Linux works faster than NT, they can just look at the sources and figure out what Linux is doing better than NT. Also, it is possible that Microsoft would look at the weak points in Linux and would publish only those benchmarks where Linux performs significantly worse than NT. Anybody who ran those same benchmarks would get similar results, and the original benchmarks would be considered objective.

    And thus, Linux uses Microsoft's own strengths against it. Re-read the above: Microsoft is the largest, most useful QA department that Linux has.

    What is the purpose of a QA department if it isn't to shake your system until it fractures and tell you where the fault lines lie? Microsoft will likely do this better to Linux than they will to NT itself. And we need pay them nothing but attention.

    Sure, they will publish these results in the worst possible light. But for Linux, competence trumps hype. Linux cannot be FUDded out of existence unless each and every Linux developer can be FUDded into dropping the platform--no ivory-tower business types can decide that Linux is a money loser and kill it.

    Every time Microsoft finds a test scenario that Linux is poor at, or breaks Linux, we see another fault line. If we decide that Linux should be fixed (the answer will often be 'no'--see below), we know exactly what part of the OS needs to be riveted together stronger.

    Now, why would we not want to fix something? There are a pair of traps that Linux could fall into. The first is letting Microsoft dictate our development. If we react to Microsoft every time on useless side issues, we keep developers away from what is most useful. If we consistently fix Microsoft's latest find as a top priority, MS can run us ragged--bad idea.

    The second trap is to worry about exclusive flaws, or trade-offs. Face it, Linux will not be all things to all people, unless Linux itself fragments a bit. Here is an example: You can make a filesystem much more reliable versus power failure if you remove kernel-level buffering. The kernel-level buffering is a big speed boost however: you can save files at solid-state speed, rather than waiting for platters to spin. In most cases, we accept the trade-off of speed over reliability.

    We could get hit with complaints about how badly the filesystem gets hit after someone flicks the Big Red Switch, and "fix" the filesystem to be unbuffered. Then Microsoft could complain about how slow the filesystem was. We'd keep making U-turns. Sometimes, you have to stand your ground and note that you don't do so well here so that you can do better over there. And you have to be willing to say, "If you want that, you know where to find it."

  • by zantispam (78764)
    "And NT? The F-4 Phantom. The gun used to ship separately, and it is living proof that, with a big enough engine, even a rock can fly."

    Actually, I would have to compare NT to an F-104 Starfighter. Yeah, it did Mach 2+, but it also took half a state to turn around. The F-18 doesn't go quite as fast, but it's exponentially nimbler...
  • Because the vast majority of people who will heed Mindcraft's advice (big corps) are running Windows of some form on the desktop.

    It'll be much easier to get Linux into the server closet if it speaks the same protocols as existing servers, making the transition invisible to the end users and network.

    Maybe one day, if Linux is on 90% of the desktops, it'll be worthwhile to test performance of NT workstations connecting to Linux servers via NFS vs. Linux workstations connecting to Linux servers via NFS. Not today though.
  • What is the point of comparing the speed of Apache which is a MultiTASK server with IIS which is a MultiTHREAD server?

    Because they're comparing the performance of various webservers. Each implements its functionality differently, but the end result in both cases is that a client (browser) requests a page from the server. The server then sends that page to the client.

    Whatever the server does in the meantime makes no difference (to me, at least), so long as it gives the right page back to the client in a reasonable timeframe.
  • I'm an agent representing someone who wants to be a basketball player. He's a big fat guy who can't run, jump, or dribble to save his life. He is, however, great at making free throws. He never misses when shooting a free throw. One of his competitors is a lean scrappy dude who's got good hustle and can move the ball well, but isn't as good at free throws. I arrange a free throw contest between the two. My guy wins, of course.

    Tomorrow's headlines in the basketball press:

    Big Fat Guy outperforms Lean Scrappy Dude in Basketball Competition

  • A 4-processor box with 4 Ethernet cards is within said market. Of course, these benchmarks are obviously just meant to point out Linux's scaling problems (which are valid).

    Anyway, if they really wanted to show what NT could do compared to others in that range, they would have compared IIS on a Dell whatever to a Compaq AlphaServer running Zeus. HP (who did the testing) have been able to achieve up to 9,000-12,000 rps using SPECweb on comparable hardware, whereas NT has trouble. Also note that Zeus can handle 10,000+ domains with little degradation, whereas IIS cannot (especially if they are IP-based).

    Some IBM tests have achieved 26,000 requests per second (15,000 higher than IIS could), but that was on Zeus/AIX with 12 processors. Also note that in Linux's case, "This was demonstrated best when the Red Hat engineers ran the Zeus Web server. Zeus performance topped out at about the same place as Apache, using fewer resources". This problem is also evident on FreeBSD using Zeus -- though the FreeBSD TCP/IP stack, like the overall system, is (on a general level) more tuned to higher loads and hence achieves 10-20% better results. The FreeBSD camp has similar problems with SMP performance.

    Sun can obviously scale to these levels as well -- but their hardware is incredibly overpriced so they really shouldn't be included in such a case.

    Are these benchmarks useful in real-world situations? Well, yes and no. First we must realize that there is a very limited client base for systems serving files over quad 100Mbps Ethernet adapters (or even Gigabit Ethernet). Think of how many companies even need this much bandwidth. There are some that do -- but then we realize that there is no major advantage in using a single monster machine for this task. If a company can afford that much bandwidth, then they can also afford the rack space for a cluster of servers. Even if the bandwidth is in-house, it just makes more sense to scale with clusters. The NT camp can argue that at the high end a single big box is more useful -- but a cluster of FreeBSD or Linux machines running Apache is a heck of a lot cheaper.

    As well, try hosting a large number of IP-based domains on IIS. You can't. Then we realize that VBScript (the usual pick with ASP) is only about 60% as fast as mod_perl (yes, I know you can use Perl on IIS as well). Then there's stability. Duh, how many times have we had to reboot NT servers or restart the IIS service to get it up and running again? I actually have to run a service to restart IIS when it stops responding -- and even then it can cause the machine to crash by running out of resources.

    I'm as guilty as the next guy of quoting benchmarks to combat benchmarks -- but seriously, forget them. They are almost always biased. While the Mindcraft tests are valid in proving that Linux has some scaling problems, they really don't translate well to the real world. Just ask Yahoo or any other company using Apache clusters running FreeBSD (which has scaling problems similar to Linux's). What about Amazon.com? They've recently switched to Apache. Do they not have a high amount of traffic as well as dynamic web pages? Will the target audience of these tests ever likely have a chance at building a site of such a size? Not likely. Mindcraft and Microsoft know that they can never include price or stability comparisons because they would always lose. In so doing, they lose any real-world application. It's obvious this is just a blatant attack on Linux -- not a look at an operating system's real-world application. These test results succeed in what they were tailored for: to spread doubt. So even if they raise a valid point as to weaknesses in Linux, I question their validity.
  • If I read other posts in this thread right, the reason that the NT server performed so much better than the Linux server is due to the usage of 4 network cards. Supposedly, the TCP stack under Linux doesn't support talking to more than one network card at a time, so adding network cards doesn't increase performance. Basically I guess, this test was like giving Linux one card and NT 4 cards --- Linux performing 2.6 times better than NT isn't bad at all...
  • You get a lot more out of this sort of test if you have a monetary goal and let both teams build a system for less than that.

    Nobody would be dumb enough to run a webserver off of a 4 CPU Xeon with 2GB of RAM. Nobody.

    For that price they could get four or five 2 CPU P2s and blow the performance out of the water.

    Not only is a 4 CPU Xeon massively expensive, but by the time you scale a data-moving operation like web serving up enough to tax the CPUs, you've saturated the bus.

    Sure, NT beats Linux, on that hardware.

    The tests would have been a lot more even if the Linux people had gotten to pick the components in the system (one faster Gigabit Ethernet card, etc.)... The tests would have gone the other way with clusters. And if you factored software price in, they would have been as outrageously in favour of Linux as the original Mindcraft study was for NT.


    Sure, these tests show that Linux has some weak spots, and they will eventually get fixed, but the tests are still biased FUD.

    Do you know of a fortune 500 company where the CEO would say "Build a Quad Xeon with 2GB of RAM and etc etc.. to serve our web pages" or does "Take this $20k and go build us something to serve web pages" sound more likely?

    Restrictive hardware decisions like that don't happen, so testing on those machines is pointless.

    (Not even an NT bigot would do that, because they'd get more from two Dual Xeons than one Quad, and they know it.)
  • by Kaz Kylheku (1484) on Monday November 15, 1999 @12:43PM (#1533187) Homepage
    Using one fast adapter instead of four slower ones wouldn't help, because you would still fail to take advantage of the other processors. The interrupt servicing would be serialized to one processor at a time.

    By binding each of four cards to a different processor, you allow up to four interrupt service routines to proceed concurrently. The question is whether the service routines can get into the stack without getting in each other's way.

    In Linux 2.0 and before, what happens is that each adapter, when it receives a packet, calls a routine which atomically enqueues it into a global queue. What pushes packets from the global queue is "bottom half processing", which is sort of a virtual thread that does its job when returning from system calls or from interrupts.

    I don't have a complete picture of the 2.2 architecture (despite successfully porting a network driver to it from 2.0!) but I think that the bottom half stuff is still emulated in a coarse way. Historically, a lot of code depends on bottom half processing to be atomic with respect to itself and you can't change that overnight.

    The actual pushing of packets from the global receive queue into the upper network layers is done as part of this bottom half processing. So is the pushing of backlogged transmit data down to the network drivers. If only one processor can do this processing at a time, there is an obvious reduction in concurrency. An obvious extension would be to let all of the processors do bottom half processing concurrently. In the case of network input, for instance, all the processors could enter into a loop in which they are yanking received packets from the global queue and driving the higher level TCP/IP code concurrently.

    Here are some of the assumptions that you could make when programming for 2.0 kernels and earlier:

    - bottom-half processing is atomic with respect to itself. That is, it can't be interrupted and re-initiated. It can, however, be interrupted to service lower level interrupts.

    - bottom-half processing happens only at interrupt level one. Thus by incrementing the interrupt level, you could protect a section of code against being re-entered by bottom-half processing. The start_bh_atomic() and end_bh_atomic() macros would just do this intr_count++ and intr_count--. Thus synchronizing between system call (process context) code and bottom half callback can be done trivially, and without touching the processor interrupt mask at all.

    - disabling interrupts protects you against everything.

    The real challenge has been not just in going from coarse grained to fine grained locks, but in reworking the assumptions.

    For backward compatibility, the old mechanisms still exist in 2.2. For example, you can still use cli() to disable interrupts, and can continue to pretend that this works as before. Under SMP, however, it is emulated in a rather gross way! If you want the real interrupt-disable instruction, you have to use __cli(), which only disables interrupts on the current processor, not guaranteeing complete atomicity.
  • OK people, I can't understand this. Why has this Mindcraft crap put everybody up in arms???
    Windows NT better than Linux? That's questionable in detail, but the general result is that Linux IS better than Windows. Sorry, Windows fans, but I've had too many serious troubles with Windows stuff to say good words about it, on both the workstation and the server side. And on the same hardware, Linux outperformed NT in every detail except beauty. However, Linux is not a solution for everything. Frankly, a good professional should measure which OS is better for a specific task. In fact, a lot of high-performance servers are better done on FreeBSD. If you need an Abrams-class server, then it is better to use Solaris. If your server will look much like an autobahn of data with a lot of warehouses, then choose Novell. If you have a lot of interface work and a one-task server, then Windows has a good chance of doing the job. And Linux is a hybrid of a rocker/cellist capable of playing 7 instruments at once.

    Oh! And don't forget about DOS. It's hell on wheels on small server systems. Easy, fast, and good performance...
  • Why is it that NT puppets like to say stuff like you just did? I hear it all the time, usually in retort to seeing things like really high uptimes in Linux.

    No, it's not the uptime I'm responding to. It's the claim the original poster made that his NT box must be rebooted every day in order to prevent crashes. You should read the article before you start name-calling (NT puppet). And you haven't rebooted your server? Not even for security or kernel patches? You may want to sacrifice a few reboots for the sake of security.
  • The numbers were MARGINALLY different. Heck, if they made the lines just a LITTLE thicker, they'd overlap.

    And 4 100Mbit Ethernet cards instead of one Gigabit Ethernet card is CHEAPER, which Linux is supposed to be.

    And the machine that the tests were run on was certified by Red Hat themselves.

    NT won, hands down. Now, I do most of my development under Linux, and have Linux on all 5 of my home computers. I love Linux. But in high-end situations such as file serving, a higher-end NT box will always kick its ass, plain and simple.

    Hell, until recently, Linux didn't even SUPPORT NFS v3, speaking of file serving.

    And I dare you to show me a legitimate test where NT was properly set up and still lost like a dog.

    Go on, show me a URL..
  • That's BS. Fortune 500 companies buy many, MANY of these systems for this very purpose. That's why Dell continues to make them. Do you think they'd SELL a product that didn't make them money?

    And yes, in many situations the CIO does say exactly the above, primarily because Dell will have given him a sales pitch. Not only that, but he'd say 'We're ordering 1,000 of these servers for use in every region'.

    Fortune 500 companies buy by name. Show me a fortune 500 company who buys eMachines to save money.
  • Running 'other apps' on an NT Web server means you're a pretty darned small company, or it's a department-level server that doesn't do much web serving at all..

    A Quad Xeon is a Mack truck vs. a Corvette. In a transaction-based environment, which web serving is, the Xeon will kick your dual Celeron out of the stratosphere..
  • by kuro5hin (8501)
    Wow. You all got way more mileage out of the airplane analogy than I would have thought was possible! :-)

    Don't take me wrong-- I'm a huge proponent of Linux, and I have no doubt personally that it'll be a good choice on nearly any hardware you want to put it on. I go to great lengths to avoid using anything else, as a matter of fact. And if you plonked a quad CPU intel box down in front of me and said "have at 'er", I'd whip out my RedHat CD's faster than you could blink.

    But people like me tend not to care about benchmarks too much. So nix that for a target audience of these silly tests.

    These are basically for PHBs, to whom one $25,000 computer may as well be any other $25,000 computer. And bet your ass, when they open the wallet, there's gonna be someone whispering "Sun" in their ear. Hopefully, someone else will be whispering "Linux" in their other ear, but who knows.

    Anyway, my point is that I agree with you. I think perhaps the phrase "just silly" in my original post was a little stronger than it was intended.

    Whatever. Benchmarks don't hold a candle to personal experience in my book, and NT sure as hell has a long way to go in the "pleasant user/admin experience" department. This week has not been a good one for me and NT. Oh, how I loathe it...

    Ok, I've vented.

    ----
    Morning gray ignites a twisted mass of colors shapes and sounds

What this country needs is a good five cent microcomputer.
