NT faster than Linux in tests (723 comments)

Mike_Miller writes "The latest Mindcraft study claims that Microsoft Windows NT Server is 2.5 times faster than Linux as a file server and 3.7 times faster as a web server. Their white paper shows that NT beats Linux on every test." Anyone have a critique?
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    "A statistician uses statistics in the same way a drunk uses a lamp post - more for support and less for illumination."
  • by Anonymous Coward
    Note the hardware used in the test: A couple of quad-processor machines with RAID. This sort of configuration is more on the fringe of Linux's abilities, while NT is supposed to work at its peak on a 4-processor system, and I'm sure has much more mature RAID support.

    I'll bet if the test were repeated on a couple of single-processor boxes with standard IDE disks, the results would be very different.

    I'd walk away from this test with the following conclusion: Linux needs more tuning for higher-end hardware.

    Of course, note the spin of the article: If you don't read closely, it looks like NT is 2.5 times faster than Linux in some sort of overall sense.
  • by Anonymous Coward
    Mindcraft's 'credibility' is blown (if it had any to start with), as any company that seeks to do nothing but twist tests to meet the client's satisfaction can be considered no better than straight MS marketing/FUD.

    I note that the tests & machine vary widely from previous Netware vs NT tests - why?
    No mention of relative cost is made which is strange as cost/performance is a rather important factor (how much is NT with 140+ client licenses anyway?)

    Why take a machine with 1GB of RAM - is this typical of the average PC server?

    MS is simply hoping that the media will simply report that 'NT is 3.5 times faster than Linux', as they assume (rightly?) that that is all the corporates will remember. The only answer is to ignore the Mindcraft study and keep publishing (carefully selected, of course) benchmarks showing Linux's speed. That's all that counts in the end.

  • here's the reply they got from comp.os.linux.networking:

    I hope it's not too late to change your hardware, because your box is a
    complete waste of money. SMP gives you *nothing* with regards to web
    serving, and it makes your OS flaky as all hell. The RAM is nice, but the
    processor speed is overkill and having 4 of them is just plain wasteful. The
    network card would saturate completely before you even came remotely close to
    using up the resources of even a single P2 200Mhz.
  • by Anonymous Coward
    They claim that NT is better than SOLARIS and NETWARE. Just go to Cut and paste you lazy bastards!

    They are just killing their own credibility, not that of Linux.
  • by Anonymous Coward
    The Apache they used had mmap. I was told that could cause slowness... is it possible that was the cause of the Apache results?

    I do know Linux supports the TCP slow-start standard; 2.2.x has this. Does NT? If not, is this what makes the difference?

  • And what the hell is this?

    Set OPTIM = "-04 -m486" before compiling

    They set a HARD LIMIT of 500 connections and then mourn Apache crapping out at 1000 connections? Puhleeze!

    Wankers indeed!
  • by Anonymous Coward
    I think there is some kind of misunderstanding somewhere... shtml
    this page tells exactly the reverse story, i.e. RH5.2 is faster than NT4 w/SP4... 2.5 times and 3.7 times (including lotsa graphs etc.)
  • by Anonymous Coward
    I've read through a lot of postings on this case, and an overwhelming amount of whining about conspiracies and buy-offs and misconfigurations and what-have-you. Maybe. Then again... maybe not. This is what psychologists refer to as "denial". Let's stop whining, start coping, and start DEVELOPING.

    "Shut the fuck up! NEXT!"

  • by Anonymous Coward
    Gee, this stuff looks so darned familiar. Back in the OS/2 days, there were any number of 'MS inspired' benchmarks like this showing NT (or other MS product) beating OS/2 in benchmarks. Typically, the situation showed a hand-tuned MS system with carefully selected hardware, and a hand de-tuned OS/2 system.

    Deja Vu, all over again.
  • by Anonymous Coward
    As someone who works in the real world, I have indeed found NT to perform better than Linux as a SMB file server...until you put a load on it.

    When NT 3.51 first came out, I was a big proponent (vs Novell), due primarily to costs. As an educational institution, we couldn't afford Novell, it was that simple. However, over the years, I have learned that NT just can't handle it. As soon as you throw 30 workstations at an NT Server, it starts to grind to a halt.

    My real world tests show that NT is indeed faster than Linux at first, but soon starts to bog down to a point that the only real option is to start adding more servers.

    This is what Microsloth doesn't like you to know about. Only when it's too late and you are forced to buy more servers and clients, are you awakened to the TRUTH.

    One last comment.

    Nobody has mentioned cost analysis and ROI in any of these benchmark studies. For an enterprise/institution, what is the total cost of ownership breakdown between Linux and NT?

    I'd be willing to bet that if/when corporate America figures out that they could save tons of dough and actually increase the usage of their servers, Microsloth will be on its ass.

    Just my .02

  • by Anonymous Coward

    This seems to be a variant on the wishful thinking of "ignore them and all they say and the problem will go away". It does not have to be like you describe. Allow me a simple thought experiment to demonstrate my point:

    Imagine two systems. System (A) is a low end server like you describe, say a Pentium II 200 with 64 mb ram and a 10 gig drive. System (B) is one of the systems they used in this test, or any given quad processor xeon with 5 drives and 4 ethernet cards.

    Strip these systems of any OS differences. Which one has the better theoretical performance? Even counting hardware designed for certain OS features, system (B) is the clear winner on hardware alone.

    Imagine that (A) running Linux outperforms (B) running NT. What an amazing feat that would be; but just imagine that Linux is that good and NT is that bad. Now put Linux on system (B). You see where this is going. Even allowing for M$-manipulated "independent" testing agencies to tweak performance out of Linux and tweak extra performance into NT, there shouldn't be much of a contest. Linux must absolutely shine when it is given the hardware to do so.

    My point now is that linux has been dealt a credibility blow and the original post in this thread is spot on. Linux must have beefier SMP support and better RAID support as well. And these items must be available "out of the box", even if only in certain specialized distributions.

    Let's take this as a challenge and run with it.


  • by Anonymous Coward on Tuesday April 13, 1999 @08:54PM (#1935165)
    Fished out of dejanews -- the Mindcraft folks used the pseudonym ''

    (If this was posted earlier, I didn't see it...)

    Can anybody here respond to this?

    Hi Everybody,

    We're considering using Linux + Apache as a web server. The hardware
    is a 4-processor 400 MHz (Xeon) server with 1GB of ram, a RAID controller,
    and six disks. We have Redhat 5.2 installed and compiled an SMP version
    of the 2.2.2 Linux kernel. For the web server we used the latest 2.0.3
    version of Apache.

    The scenario: we're bangin' on this web server with a bunch of clients
    to try and get a handle on its capacity. Simple static HTML requests,
    no heavy CGI yet. My Apache server is tuned up, MaxClients is 460.
    I recompiled with HARD_SERVER_LIMIT set to 500. The limit on number of
    processes is 512; the limit on file descriptors is 1024.

    The problem: the server performs well, delivering in excess of 1300
    HTTP GET requests per second. But then performance drops WAAAY
    off, like down to 70 connections per second. We're not swapping,
    the network isn't saturated (4 x 100Mbit nets), disks are hardly used,
    but the system is just crawling. If it were saturated then performance
    should level off, not drop like this. Neither vmstat nor top show
    anything unusual. No error messages in the web server. It's puzzling.

    Any ideas? Any tips, suggestions, or pointers would be appreciated.

  • In the article it says that they used _Samba_ 2.0.3.. Maybe somebody got confused and put the wrong information in the posting?

    (i'm not familiar with the latest releases of samba and apache, so don't sue me on this..)
  • Bleeding Edge Magazine [] did an extensive "test" [] today (which was sponsored by Red Hat Software and VA Research, by the way) which proves that Linux is faster than NT at Web and file serving. PHBs, watch out.
  • If you think it's that important, feel free to fix it, making memory size a Makefile or config option and send a patch to Linus.
  • libc6 threads? What are those? Oh, you mean linuxthreads, the almost-ported pthreads library, which comes in glibc2, which is known as libc6 on linux machines.

    So, you really mean that you should use pthreads. I could see that. But pthreads aren't nearly as platform-independent as fork() is.

    Based on your post ... in fact, based on your subject, I'd say that every one of the Apache Group's programmers is a better programmer than you.

    Let me count the ways:

    • You claim that ``libc6'' (POSIX) threads are ``real threads'', implying that fork() does not make ``real threads''. Processes are definitely real threads, except they have more overhead. In fact, the terminology for pthread-type threading architectures on unix-like machines (such as pthreads or Solaris threads on Solaris) is ``lightweight threads'' or ``lightweight processes''. They are just like other processes except that they don't have the overhead of things like a separate page table, process ID block, and memory space. Both lightweight (pthread) and heavyweight (process) threads get scheduled by the kernel scheduler, however. And you should also remember that pages are only copied on write, so the actual memory-footprint difference isn't all that great.
    • You say that a limit on the number of threads is a bad thing. Of course this shows you lack both programming and system-administration knowledge. If there were no cap on the number of threads, processes or otherwise, it would be trivial to mount a denial-of-service attack that would render the machine useless. And setting an arbitrary cap in the code is rather silly, as some architectures allow for more copies of the server running at the same time.
    • You say that having lots of options is a ``DUMB'' way to do things. I'd say the opposite. Having software choose your options for you makes an idiot out of the user. Configurability of almost everything is one of the greatest strengths of UNIX software. While YOU may not think that some option is necessary, it may make a huge difference in functionality for someone else. Apache has good defaults for the typical small web server, and it has the configurability if you need high performance or a very low footprint.
    Tell you what, rather than complain about the Apache Group's informed decision to NOT switch to pthreads, why don't you try to port to pthreads yourself? If the benefits are as great as you say, then I'm sure that your product would be a great asset to the community. I think that the Apache group has done a great job, that you overestimate the benefits of pthreads, and that the loss of portability by switching Apache to pthreads would be heartbreaking.
    • Building Linux 2.2.2 with gcc 2.7 instead of egcs 1.1.x
    • Could/should have upgraded to glibc 2.1
    • Aforementioned de-tuning of SAMBA
    • I'd like to see the config file for the kernel build and all the system, samba and apache logs.
    • Did they bring all the utils up to the required versions for kernel 2.2.2?
  • On top of inetd, what if they used tcpd as well? Or "KeepAlive Off" in Apache's config? They don't specify the ServerType parameter, but in my httpd.conf, the comments say the default is Off. Talk about a slowdown: one request per connection, each having to be threaded through inetd (and possibly tcpd)...

    One thing that's bothered me through all the hype about Linux sucking, these sorts of "studies," etc.: why are they always run by people who have no clue about the things they want to portray themselves as experts on? Sure, there isn't much to NT: click some Next buttons through wizards, and voila. So they apply that same mentality to Linux, either taking a bare Red Hat (or other distribution) or doing minimal customization. (As for the "recompiling the kernel mucks up the entire system beyond recognition" bit, my guess is they have no clue about bootable floppies, configuring LILO with two kernel images for fallback, etc.)

    What about the 960MB memory thing? Just a matter of telling LILO append="mem=1024M" ? I know it freaked when I put in 96MB the first time, only seeing 64MB.

    As others have said, the posts to the newsgroup contained some major flaws, not enough details, etc. That would certainly turn off many potential replies.

    Microsoft sponsoring them? Wouldn't their credibility be higher if the sponsors were NOT the manufacturers of the products they are testing? To me that's a major problem. For respect, a study should be balanced and unbiased.

    In conclusion, they are lunatics. Plain, simple.
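The Apache-side settings being discussed above map onto httpd.conf directives roughly like this (a sketch of Apache 1.3-era directives; the values are illustrative, not Mindcraft's actual configuration):

```apache
# Run standalone, not from inetd: no fork()+exec() per incoming request
ServerType standalone
# Reuse one TCP connection for several requests instead of one per GET
KeepAlive On
MaxKeepAliveRequests 100
```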
  • What are they supposed to do, take the money MS gave them for the study and spend it all on a mediocre sysadmin? Where's the fun in that? Look at what they have accomplished though. They get money from MS, they say NT kicks ass, MS is happy and may continue doing business with them in the future, their summary will be plastered everywhere (boy, that NT graph is higher, it must be tons better than this Linux thing), they get more and more attention. Very few PHBs these studies target read the details, know what parameters to Samba/Apache do, etc. And all these organizations continue spreading FUD studies...

    Ah what a world we live in.
  • by deicide ( 195 ) on Tuesday April 13, 1999 @07:51PM (#1935173)
    Apache collapsed after 250 threads on a quad 400MHz Xeon? Something is definitely screwy there..

    I average close to 60-70 Apache processes running as a regular load on a Pentium-120 with 64 megs of RAM without any problems. Most of those are database-generated, rather than plain file GETs. Someone has either been drinking or got paid some dough..
  • Because the bug was fixed in 2.2.3 which was out when they did the tests.
  • > Used 1024 MB of RAM (set maxmem=1024 in boot.ini)

    Maybe it's just me, but the fact that they went to the trouble of editing the boot.ini but not the lilo.conf is suspicious. Is mem=1024M really that hard? I'm quite certain the feature is documented.
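The LILO-side equivalent of that boot.ini edit is indeed a one-liner (a sketch assuming 2.2-era LILO syntax; the image path and label are illustrative):

```
# /etc/lilo.conf
image=/boot/vmlinuz
    label=linux
    read-only
    append="mem=1024M"   # tell the kernel about the full 1GB; BIOS probing stops short
```

Run /sbin/lilo after editing so the change takes effect on the next boot.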
  • If a Linux kernel can only see 970 MB RAM, it's been misconfigured. To make it see more, there's a source file that has to be edited - but why isn't it an option selectable from 'make config'? SMP is... maybe this'll motivate the change.
  • > The reason any truely seasoned IT professional
    > scoffs at this document is because of the
    > source.

    True, but I notice you qualified that observation with "seasoned." Alas, these same seasoned systems folks aren't making many purchasing decisions and are kept in the back room where their views on reality don't embarrass the suits. (Speaking from experience here...)

    I've encountered too many people mentally conditioned by Pravda who will discount any and all other studies - no matter how technically solid - in favour of the ones - no matter how technically soft - that support their prejudices. (Sadly, MS apparatchiks are not the only ones guilty of this.)
  • Oh man, I feel so enlightened by their wonderful studies!

    I mean wow! NT/Compaq blows away a UE450 worse than it blows away Linux?!? I guess I should stop wasting my time with this old hat Unix junk and get with the program and join the winning team! You can be sure that Real Soon Now NT will be far more reliable and scalable than any Unix system and I will be straight out of a job if I don't embrace the New Technology and join the marching ranks of brave new world techies towards progress and bliss.

    Now ... I need to modify/recompile rdist to use ssh instead of rsh on some intranet servers. It was so easy on Linux and Solaris that it must be a snap on NT, eh? I want to do this on NT so I can turf all those silly Sun 450 boxen. They're obviously useless as servers.
  • by drwiii ( 434 )
    I see M$ is still up to their old tricks. Oh well. It'll be nice in a few years when they're not even around anymore.
  • There is one question: why should Linux have that? What study (other than a very NT-biased one, done by Russinovich) are those requirements based on?
  • ...but I find it amazing that you can defend MS with a straight face, especially considering that one of the posts on your /.-lookalike homepage is about how you had to fight with MS tech support to fix your web server...


  • by Skyshadow ( 508 ) on Tuesday April 13, 1999 @07:52PM (#1935182) Homepage
    "The Linux kernel limited itself to 970 megs of RAM"... Say WHAT?

    Really; the Winbox had most of its services shut off, while the Linbox was running SMB, NFS, etc. My guess is that they were probably hitting those other services while they were taking the numbers.

    Besides, this runs contrary to every other (non-MS-paid-for) study I've seen. Mayhaps someone should do some independent verification. Be sure to check if the Windows numbers were a "demo".

    Hey, they lied to Justice; why wouldn't they lie to us?


  • Well, 2.2.2 *was* listed as a stable kernel after all, so why would they have any reason to expect otherwise?
  • Here is an SMB test []on a small machine.

    Here is an SMB test [] on a large machine.

    In general there are some areas where Linux lags NT. IIS, for example, outperforms Linux on static page serving because it has a page cache and does not always have to cross the user-kernel boundary to fetch the page (system calls DO have a cost, even though 2.2 sped up the open() call considerably with the dcache). And it may very well be that a well-tuned ultra-high-end NT machine will beat a well-tuned Linux machine at file serving, given that NT will use the full 4GB of memory while Linux only supports 2GB. But this "test" was not such a test -- it compared a well-tuned NT machine against a totally untuned Linux machine.

    And of course I'll point out that on tests on more modest hardware, like this [], Linux blows away NT handily. To be fair, that Smart Reseller test was just as biased in its own way as this joke test we're talking about... Smart Reseller chose a machine that's too small for NT to comfortably stretch its legs, albeit that the machine they chose is rather typical of small office web servers.


  • Hard to get newsgroup support for that sort of stuff? You bet!

    I know that I had no incentive to go dig this "will @ whistlingfish" out of his hole. I couldn't make heads nor tails of that posting when it was new, and there's too many other postings to reply to where people actually give useful information about their problem for me to bother with something like that.

    -- Eric
  • It's a little late to sponsor such a contest. It takes a week to ship something via motor freight from either coast to Chicago. Believe me, a 200-pound server is *NOT* shipped via Federal Express Overnight Air!

    -- Eric
  • 1) As someone pointed out to me via EMAIL, Apache actually opens most of those file handles prior to forking. Thus most of them are shared between the various Apache processes. In actuality, you'll eat up at least two file handles for each Apache process.

    2) The 2.2 kernel defaults to 4096 file handles, as vs. the 1024 default for the 2.0 kernel, so it's unlikely that he was running out of file handles.

    Still, obviously he did something wrong, because Apache usually does not collapse like that. It simply degrades gracefully, assuming max_clients is set so that you don't thrash the machine to death (and his message says he wasn't thrashing). See what happened when the Slashdot Effect hit the Linux Counter... once he brought down his max, it simply got slow but kept chugging out the requests. Puzzling. Without access to the server logs and httpd.conf files, it's unlikely we'll ever know what he did or how he did it, though.

    -- Eric
  • I agree, it is hard to believe.

    And: These people are *LIARS*. They say they posted messages asking for help on the Linux newsgroups. There are *NO* messages from the Mindcraft domain anywhere on the Linux newsgroups. So I did DejaNews searches for "performance tuning", "performance tune", "kernel tuning", and "kernel 2.2 tuning" between January 1, 1999 and today, and examined the results to see if there were any messages that might have been posted by Mindcraft researchers (i.e., that referred to performance problems with a large-memory machine). There were *NONE*. Zero. Zilch. Which means that if they did ask any performance-tuning questions, they did not use those words in the message.

    Anyhow: VA Research already loaned a quad-processor Xeon machine to PC-Week and it blew away NT 4.0 in their SAMBA benchmarks. VA Research's quad-processor Xeon machine is the same machine that we sell, and the same machine that Penguin Computing sells (we all get them from Intel, and then dress them slightly differently once we get them, e.g. VA Research uses a Mylex RAID card while we and Penguin use ICP-Vortex RAID cards). So we already have the benchmark that shows that their SAMBA benchmark is full of ****. But that's not going to matter to pointy-haired bosses because they recognize only those reports and studies that say what they want to hear.

    Am I steamed? You bet! I *HATE* liars!

    -- Eric
  • that they had the Apache config set to allow up to 127 servers, and they had not raised file_max from 1024 to something decent. Do the math. Each Apache server has 8 file handles open just sitting there doing nothing. If it then goes to open a file to serve it, and file_max has been exceeded, guess what happens? Yep, Apache collapses!

    -- Eric
  • by Eric Green ( 627 ) on Tuesday April 13, 1999 @09:20PM (#1935190) Homepage
    Thanks. I just checked that out. It does appear that they asked a single question about Apache performance. I remember seeing that posting myself and blowing it off because there wasn't enough info to tell him anything and I didn't feel like going through the give-and-take to get enough info to do something. (I do enough of that supporting my own customers!) Now, in hindsight, knowing what he did not do to Linux, the answer is obvious: he was running out of file handles. Do the math. An inactive Apache server has 8 file handles open. 127 servers max * 8 = 1016. Default file_max is 1024 for Linux, of which 150 or so are usually open while the system is at rest. Apache could not bind a socket to a file handle for incoming connections because there were no file handles. So Apache was basically deadlocked, waiting for file handles to come free so it could accept() the socket, but it was already holding all the file handles!

    If there had been questions about general tuning of such a large system, that would have solved the problem because someone would have remembered about file_max. But one cryptic query that didn't give enough information to get help does not an honest effort make.

    Anyhow: I guess I have to post a partial retraction. They did post a *SINGLE* query to the net.

    -- Eric
  • Novell reacts to Mindcraft's benchmarks of NetWare 5 vs NT4. antage/nw5/nw5-mindcraftcheck.html []

    Mindcraft admits that Microsoft commissioned the original report.
    http://www.mind svr.html []

    Do we see a trend here?
  • Sorry, what's the full URL? I can't seem to find it on their site.
  • I've heard that NT has a 300MB file cache limit that is supposedly well documented. PC Week did a Samba benchmark with NT vs. Linux and their conclusions were so bad (for NT) they didn't even publish them. This was because NT would not use over 300MB for its file cache while Linux would use 900+MB.

    How did Mindcraft get around this, or was it actually documented in their report? Heh, I guess I should probably read the entire thing first huh? Naa!

    Also, after some research on why Apache performance dropped considerably at one point, it does look like they hit the 1024 file descriptor limit. Alan Cox has a patch for 2.2.x that brings this up to 10,000+ and theoretically millions. Check the recent Kernel Traffic mailing list for details. Did they do any research at all for this report, I mean come on!
  • Yup, and that extra cash you could probably hire a much better Linux sysadmin ;)
  • Yup, and that extra cache you could probably hire a much better Linux sysadmin ;)
  • What we really need is a comparison of two systems with identical hardware, each tuned and configured by members of its respective camp:
    i.e., a Windows NT 4.0 server running IIS set up by Microsoft employees against a Linux 2.2.2 server running Apache set up by Red Hat people. But we all know that MS would never allow such an unbiased test to occur under their rule of FUD.
  • Posted by the order of His Majesty:

    I've heard that NT isn't anywhere near ready for gigabit anything. Reading in NetworkWorld about some network show, they mentioned how they talked with several gigabit Ethernet vendors who were very unimpressed with WinNT's throughput - and commented that NT had to be specially modified to sustain 400 KBytes/sec.
    I haven't ever tried NT with gigabit Ethernet, but it doesn't surprise me...
  • Posted by mithalas:

    960 megs, 4 CPUs. Hmm. I want to know why any web server is going to need (not to say that you don't want it) that much RAM, or even 4 processors. If we all sit back and think about it: if you really need that much for your server to perform, then you are probably running a pretty crappy OS. Granted, NT has its place and GNU/Linux (no flames please) has its.

    Personally I would like to see a more down-to-earth server tested. How many web sites out there run 4-CPU, 1-gig servers? I say take a poll of 50 of the largest sites, 50 of the middle sites, and 50 of the smallest sites (running x86-based CPUs) and find out what hardware they run on. Come up with an average, put a machine together from it, then install your OSes of choice with experts from all sides to configure them. Then run the tests again. You'll probably see that you end up with Linux winning.

    From personal experience, NT is no easier to configure than Linux - maybe easier to get installed initially, but not configured. An untrained monkey could install it, but you had better have time and experience to make it work more than 2 weeks for you without a crash. When you're lucky you get a whole month.

  • Posted by mithalas:

    Please show me an official, unbiased test showing that with that entry NT will only use 1GB. Plus you should check out what they did with NW5 and the interesting ways of configuring hardware.

  • Posted by llogiq:

    Is this topic really worth writing dozens of postings about? (Guess why we all are posting here...) Well, even if NT were twice as fast (which it is not), would it matter to anyone?
    Linux just started to gain sympathy even in business - some (who said they would not risk using a hacker system some months ago) already say they can't afford proprietary systems like M$ NT or else - now.
    Another thing is that it's nearly impossible for a bunch of well-paid M$ coders to improve a server app the way the thousands of (free) Linux coders do.
  • Posted by Nr9:

    I think Mindcraft is part of MS; all they do is advocate MS products.
  • Posted by The Chicken of Darkness:

    Let's see here... they were using a ZD program to test SMB performance? To a certain degree NT would have an edge, given that MS made the SMB protocol and that it is a Ziff-Davis program. The headline doesn't even specify "file server". I'm sure Apache would shred NT as a web server, performance-wise and in uptime.
  • Posted by LOTHAR, of the Hill People:

    I suspect that the people conducting the test were not proficient Linux users/administrators. The Linux installation followed default settings except for kernel automounting. The fact that

    "NFS file system support = yes "

    makes me wonder how the drives were partitioned (RAID configuration as well)

    The test also mentioned

    "The Linux kernel limited itself to use only 960 MB of RAM"

    Which is a subject discussed here last week.

    The NT installation was not default, the Registry was directly changed.

    "Server set to maximize throughput for file sharing"
    "Set registry entries: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services:
    Tcpip\Parameters\Tcpwindowsize = 65535"

    "Used the affinity tool ..."

    I do not consider the test valid.
    The NT installation was tuned (if only slightly);
    the Linux installation was not.

    We need a third party with a thorough understanding of both OSes to administer an accurate test.
  • Posted by the order of His Majesty:

    Not regarding the web server, but I noticed that they set NT's pagefile to 1GB and didn't mention Linux's swap configuration at all.
    Taking into account everything that they did to misconfigure Linux, it doesn't surprise me that it doesn't perform spectacularly. It's like turning off the L1 and L2 caches, turning off shadow RAM, and setting 4 wait states on a Pentium Pro 200 - it turns into a 386!
    Stuff like this ought to let people see what Microsoft's game plan really is (assuming they even have one, after reading the Halloween docs (: )
  • by sjames ( 1099 )

    If they had availed themselves of a Linux expert (or gotten Linux pre-installed by a good VAR), they would have tuned the kernel to at least use 2G of RAM. All you have to do is change __PAGE_OFFSET to 0x80000000.
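The edit being described would look roughly like this (a sketch for a 2.2-era i386 kernel; the exact header location is an assumption, and note the trade-off: lowering the kernel mapping to 2GB also shrinks each process's user address space from 3GB to 2GB):

```
--- include/asm-i386/page.h   (stock: kernel mapped at 3GB, sees ~1GB RAM)
+++ include/asm-i386/page.h   (edited: kernel mapped at 2GB, sees ~2GB RAM)
-#define __PAGE_OFFSET  (0xC0000000)
+#define __PAGE_OFFSET  (0x80000000)
```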

  • Compare and contrast:

    NT 4.0 is 2.5 times faster than Linux as a File Server and 3.7 times faster as a Web Server

    with ZDNet's findings on the same Linux vs NT benchmarks.

    The same thing could have happened if SMP were "accidentally" left out of the Linux machine's kernel.

    I doubt that any amount of "tuning" would generate this type of difference. With Apache, perhaps real-time IP resolution would do it, but I don't see how that would figure in with Samba.

  • by antv ( 1425 ) on Tuesday April 13, 1999 @09:18PM (#1935238)
    Apache 1.3.4 Configuration
    Set OPTIM = "-04 -m486" before compiling

    on 4 x 400 MHz Pentium II Xeon

    Samba 2.0.1 Configuration
    wide links = no

    That creates a bottleneck in Samba performance, see here []

    the following processes were running ... (kswapd), /sbin/kerneld, syslogd
    Not sure if that means something, but why were they running kerneld with a 2.2 kernel?

    On NT side:

    Tcpip\Parameters\Tcpwindowsize = 65535
    that gives a huge boost to network performance, but only on a local network where packets don't get lost

    Set Logging - "Next Log Time Period" = "When file size reaches 100 MB"
    Logs on the F: drive (RAID) along with the WebBench data files. So basically the server does much less logging than Apache; and since there are many small requests, and since Apache writes its logs to a non-RAID disk, all together it will be a big bottleneck.

    Anyone noted anything else wrong with this benchmark ?
    From all my experience it looks like pure crap

    P.S. Why did they need NFS? inetd?

    The problem is that Linux needs kernel patches for more file handles, >960MB memory, etc. etc. It's very sad IMO.... a sysadmin should NOT be expected to have to apply patches.... menuconfig options maybe, but not patches.....

    People complain about tests like this and DH Brown, but really only somewhat out-of-box solutions should be tested.
  • "We work with you to define the goals you want to achieve via testing."

    As their main web page states, they define the goals before they test. The only goal was to say NT runs faster than Linux. I've never heard of this company before. I now know why.
  • by mikpos ( 2397 ) on Tuesday April 13, 1999 @10:56PM (#1935258) Homepage
    -m486 is pretty standard. I would be very surprised to see a performance gain of more than about 1% between -m686 or similar and -m486.

    wide links=no has been explained on other threads. It certainly slowed down performance by an unreal amount. This is a paranoia security measure that apparently some admins would use. I don't know that it's fair to assume that they put this in solely to skew the results.

    As for kerneld, inetd, NFS, etc.: all right it's unnecessary, but will use under 1MB of RAM and under 0.1% CPU most likely. I don't see this as an issue.

    My best guesses for the appalling results are something like this:
    - the wide links=no thing. NT doesn't have to worry about symlinks. I think this is unnecessary in pretty darn well every case. This could either be malicious intent on the pro-NT side, or inexperience/a mistake. Either way, it would be nice to see some numbers with this turned on.
    - pure speculation here, but they may have set up Apache to do real-time hostname lookups. This is an absolute no-no for any serious server. Again, possibly inexperience or a mistake.
    - the >512MB RAM Linux bug. I've heard horror stories, and I've read people with no problem at all. Also, I believe this was a problem with PIIs only, and they were using Xeons in this report. Who knows.

    Anyway, it appears to be a bad combination of very silly yet somewhat understandable (for newbies) software misconfigurations, and some bad choices in hardware. Which brings me to another point: a quad Xeon for a server? My K6-166 could handle a few thousand hits a second, I'm fairly certain. Adding more processors will only slow things down when you're dealing with file serving.

    It's hard to say whether or not they did this intentionally. It's fairly obvious that they didn't know exactly what was going on with Linux. I'd say Microsoft has a list of hardware that they know works well, and when these people asked for sponsorship, some Microsoft people said "OK here's some hardware that we know works. BTW doing this and this and this might help out your performance". Not to say that Microsoft went out of its way to hurt Linux, but they probably know what works best on their own systems.
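    (For reference, on the hostname-lookup guess: in Apache 1.3 it's a single directive, and the default is already the sane one - a misconfigured server would have it On. A hypothetical httpd.conf fragment:)

```apache
# httpd.conf - per-request reverse DNS; Off is the Apache 1.3 default,
# On is the classic benchmark killer:
HostnameLookups Off
```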
  • by maugt ( 3520 ) on Tuesday April 13, 1999 @08:03PM (#1935275) Homepage
    More than something screwy. You can't use more than 100 threads on IIS 4. It uses Microsoft's Transaction Server thread resource pooling to do thread management, and MTS is internally limited to never handing out more than 100 threads. So basically at least some of it is incorrect.
  • echo 8192 > /proc/sys/fs/file-max

    kernel patches my ass..
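    Right - on 2.2 these limits are runtime-tunable via /proc (names from the 2.2.x kernels; writing them needs root):

```shell
# Current system-wide limit on open file handles:
cat /proc/sys/fs/file-max
# Raise it (as root); inodes are conventionally kept a few times higher:
# echo 8192 > /proc/sys/fs/file-max
# echo 32768 > /proc/sys/fs/inode-max
```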

    If you look at the certification section, Mindcraft admits that MS paid them to do this benchmark. Frankly, I can't see how that could possibly make for an objective benchmark.

    If you look at the other whitepapers that this company has done, it is very evident that they are highly biased towards NT.
    Just look at the SMP UltraSPARC machine getting beaten 4x over by some NT PC.

    Over at one of the Samba team members gives other information

    Sure - this test probably is subjective. Moreover, if anybody from "our side" runs a similar test, that would probably be subjective as well. "We" don't trust "them" - why should "they" trust "us"?

    The only way to work this out (apparently) is if representatives of "both sides" came together and defined the parameters of a test. They should specify things such as what hardware the server should run, what services should be running (and more stuff that you sysadmins know about for a living). The group should also lay down specifications on how this should be measured. Then, these specifications should be publicly reviewed, and revised if necessary.

    Then there should be a test for extremists on both sides to tune the OS of their choice to achieve the best possible performance on the specified platform.

    If a vendor (say, Dell, Compaq, IBM, or one of the other companies that now presumably deliver both OS's) could put up a number of similar boxes and have a "tweaking contest" on a software convention or whatever, that would be even better. However, I cannot believe for a second that Microsoft would have the balls to allow such a shoot-out...

    Check out Novell's complaint [] about a similar study Mindcraft did concerning Novell NetWare 5 and NT. Maybe some of Novell's complaints about unprofessional methods also apply to the test Mindcraft did on Linux.


  • I just wanted to mention that I've recently set up a Dell Poweredge server with PERC RAID controllers, 512MB RAM, Dual processor Pentium II 450s, and Linux 2.2.5.

    This machine serves over 100 clients, and it functions as a primary domain controller running Samba 2.0.3. It has worked phenomenally for my client's all-NT network, and it also serves email, IMAP, POP3, and is a web server for all the users here. It also does a myriad of other tasks, and the load never even hits 1. And this thing is a less powerful machine than they tested, but it can serve over 100 clients with ease.

    I don't know where they cooked up the figures they have, but this server gets plenty of use [], and it's never buckled or given me any problems setting it up.

    By the way, does anyone know where I could go to find out how to increase the maximum number of files, and/or how to further tune this machine? I've had a couple of small problems with running out of file handles for the whole system. Anyone have any suggestions for a site I could go to?

  • by edgy ( 5399 ) on Tuesday April 13, 1999 @07:45PM (#1935302)
    Yeah, this study is sponsored by Microsoft, if you read the fine print:

    Mindcraft Certification

    Mindcraft, Inc. conducted the performance tests described in this report between March 10 and March 13, 1999. Microsoft Corporation sponsored the testing reported herein.

    Looks like you can buy anything you want with enough money. It doesn't make it a true indication of a real-world situation.

    I think that there's enough evidence to the contrary already out there, and this will only serve to discredit Mindcraft.

  • by edgy ( 5399 ) on Tuesday April 13, 1999 @07:53PM (#1935303)
    According to a posting [] on Linux Today by Jeremy Allison of the Samba Team, it seems that the Mindcraft study crippled the Samba server in the tests:

    From Andrew Tridgell (original author of Samba):

    They set "widelinks = no" now I wonder why they did that :)

    In case you haven't guessed, that will lower the performance enormously. It adds 3 chdir() calls and 3 getwd() calls to every filename lookup. That will especially hurt on a SMP system.

    I think it is important that the Linux community contests these results, as they certainly seem skewed, but I hope we can do it calmly and with dignity.

    With the amount of equipment involved, I believe it would take VA Research or a company of that ilk to try a similar test.
    (Could we do a test on a less expensive set of equipment?)

    If the results of the original report are not reproducible, then what they did is bad science. I think that trying to reproduce the results, but with people who know how to optimize the Linux set-up (and being fair and optimizing the Windows set-up too), would do much more for how Linux is perceived than us doing a lot of name-calling and questioning the motives of Mindcraft.

    The point is, if everybody else who does the test gets completely different results, that will be all we ever need to say.

    Let's respond in a way befitting the wonderful operating system that Linux is.
    also provides a link to an article about a similar incident Novell had with Mindcraft. tcheck.html

    The one on the Novell website is especially informative. It is the exact same situation, in which results published by Ziff-Davis show NT at a disadvantage, but when Mindcraft does the test, NT comes out ahead!
  • >Linux 2.2.x has the same default window size. The memory limit is hard coded in Linux, unless you apply some patches.

    I'd beg to differ here - passing mem=000M via LILO will cause Linux to address that memory - provided the system has that much addressable space.
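    For reference, a hypothetical /etc/lilo.conf fragment (the 1024M is just an example value for a 1GB box; remember to re-run lilo after editing):

```text
image=/boot/vmlinuz
    label=linux
    read-only
    append="mem=1024M"
```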

    As I read the configuration, the NIC setup really jumped out at me. They explicitly set the NICs to 100Mbit, and set them to bind to one CPU each. Now I don't know much about the EEPro 10/100, but will it automatically use 100Mbit on Linux without explicitly setting it? As well, if they didn't bind the NICs to each CPU in Linux, won't you get a bottleneck as all 4 cards fight for the CPUs?

    I also noticed that the NT box was tuned by someone who is obviously very well versed in NT internals. The Linux box appears to be an out-of-the-box install, with the proper settings turned on. They didn't even include the /etc/conf.modules and kernel boot line.

    One more thing, notice the cache TTL and max number of open files for IIS. Both are set to very high settings, which will ensure that files will not be paged out during the runs.
  • This is the introduction to the "general performance tips" section of the apache manual.


    Apache Performance Notes

    Author: Dean Gaudet


    Apache is a general webserver, which is designed to be correct first, and fast second. Even so, it's performance is quite satisfactory. Most sites have less than 10Mbits of outgoing bandwidth, which Apache can fill using only a low end Pentium-based webserver. In practice sites with more bandwidth require more than one machine to fill the bandwidth due to other constraints (such as CGI or database transaction overhead). For these reasons the development focus has been mostly on correctness and configurability.

    Unfortunately many folks overlook these facts and cite raw performance numbers as if they are some indication of the quality of a web server product. There is a bare minimum performance that is acceptable, beyond that extra speed only caters to a much smaller segment of the market. But in order to avoid this hurdle to the acceptance of Apache in some markets, effort was put into Apache 1.3 to bring performance up to a point where the difference with other high-end webservers is minimal.
  • by Matt Welsh ( 11289 ) on Tuesday April 13, 1999 @11:24PM (#1935362) Homepage
    Okay, folks. So we have a bit of egg on our face for this one, because nobody (to my knowledge) has really stepped forward with large-server Linux benchmarks which demonstrate anything differently. It may be that Mindcraft royally screwed up, or it might be that Linux really is slower than NT for a certain set of benchmarks -- the truth is more likely a combination of these factors.

    If Linux is going to be treated as a serious operating system by the majority of the IT community, it's going to have to step up to the plate and demonstrate scalability and performance which does rival NT server in this area. Most of our knowledge about Linux-vs-NT performance is somewhat anecdotal -- we haven't really "put our money where our mouth is" and shown objectively that Linux can outperform NT in these areas.

    Rather than dismissing this study as FUD, I think we could learn a few valuable lessons from it. We should seek to understand why the benchmark results weren't as great as we would have liked. We should fix any obvious bugs or misfeatures in Samba, Apache, and the Linux kernel that stood in the way of higher performance. And we should strive to improve the entire system to make it a true NT rival.

    We have a lot going for us. First of all, we can innovate at a much more rapid pace than Microsoft -- so hopefully within just a few short months (and I'm being pessimistic!) we could demonstrate a high-performance Linux file and Web server which kicks NT's butt all over the place.

    Nobody said building a high-performance, scalable Internet server operating system was easy. Let's get to it!

    Matt Welsh,
  • There are several problems with the web server results.

    Pretty much all high-traffic sites have dynamic content. They are not limited by the kind of web server performance these systems measure, but by the technology you use for generating the dynamic content (Perl, CGI, Servlets, databases, etc.).

    Throughput of more than a few megabits per second is also pretty academic, at least for Internet sites and most intranet sites, simply because the network can't handle more than that.

    Furthermore, Microsoft has been foremost in doing funny things with their TCP/IP implementation, both on their servers and on their clients, to look better on these kinds of benchmarks. If you look at the TCP/IP specs, it's actually impossible to achieve the kinds of hit rates they claim with a compliant implementation. Microsoft also seems to have done other things with timing and sequence in the past that made their systems look good and other systems trying to interoperate with them look bad (accident? you tell me...). So, even if NT performs better with 95/98 clients, that doesn't necessarily imply that NT is a more efficient system.

    Another problem with their study is that it makes little sense to buy a four processor Xeon machine to run web sites with Linux. Four separate Linux machines are going to be more robust, easier to install, easier to maintain, perform better, and cost less. Of course, with Windows NT, because of the hassles of administering machines and because of the cost of the various software licenses involved, people may end up having to buy expensive, high-end SMP machines. I view that as a strike against NT.

    They also don't seem to have tested systems where multiple, different server processes need to run on the same machine (web server, database, etc.). NT seems to perform poorly in those situations.

    I can't comment as much about the Samba results. What I do know is that the Microsoft SMB servers we use seem to perform very poorly compared to the Samba servers on Solaris in practice. These are both professionally installed and maintained systems on high end hardware with hundreds of clients.

    Altogether, their study strikes me as biased and meaningless. To me, NT isn't even in the running for building large, high-performance web services. For the performance characteristics and functionality that matter on real web servers, a Linux or BSD server farm is a cost effective way to go.

  • There are not many things to shoot at.

    Linux does not appear to have done well. How does this test translate into a real-world situation? Isn't Slashdot running on a lesser machine than the test server? And cranking along nicely with Perl and Apache doing the dirty work?

    Someone has already mentioned the self-imposed 960 MB Linux RAM limit... Looks like a typo more than anything else.

    Pretty graphs that an MBA would appreciate looking at.

    The testbed was purely Win95 and Win98 machines running Microsoft TCP/IP - how this translates into 'embrace and extend' is interesting.

    The one major anti-Linux thing said was that documentation and support were not forthcoming for the kernel and Apache, but the Samba docs were decent. Is this because Samba is a 'clone' of a Microsoft product?

    Just how intimidating is the lack of formal documentation, for an enterprise level web server? After all, the people responsible for handling such an animal would surely have readily available access to the 'routine' expertise, and quirks and oddities are not something even Microsoft documents eagerly.

    Ah well.. Back to time off. :)
    Well, it says that the NT servers peaked at *112* clients during the SMB test. Looking at the street value of the system, a 20-user pack is about $2000 CDN. 112 users would cost a company *over $10,000 CDN* for the software alone!

    This is just plain stupid. :)

  • by jerodd ( 13818 ) on Tuesday April 13, 1999 @09:01PM (#1935408) Homepage
    An AC posted:

    The net posts asking for help that are mentioned in the white paper appear to have been most likely made under the pseudonym:

    Use DejaNews.

    No-one seems to have done that and talked about it. I did; here's the relevant link [] that lists all the messages from this guy on Usenet. Take a look at them and post what you think about them. It seems to me he hit a strange, obscure bug in GNU, Linux, or Apache, and it might have something to do with network adapter or SCSI adapter problems.

    There are tuning params in the kernel for this. I know because at my last job we played with a Linux box with 1 gig of memory. We noticed that it was doing lots of I/O when it was only using 400MB of memory. E-mail to Linus got a response with some vars to play with in real time (writing to files in /proc) that fixed things up nicely.
    These tricks probably need to be documented somewhere.

    As Linux becomes used for bigger jobs in business, a high-quality Kernel Tuning HOWTO would be good. Even if it were a published book.

  • by Septor ( 16350 ) on Tuesday April 13, 1999 @08:39PM (#1935443) Homepage
    So they have 4 network cards in that nice little Dell machine, and they specifically mention "Used the affinity tool to bind one NIC to each CPU", so my question is whether they even bothered to use the other 3 NICs under Linux.

    It seems to me that Linux with one network card coming in only 2.5 times behind NT with 4 network cards sounds about right. Give Linux 4 network cards and you get performance that easily blows NT away.

    To use multiple NICs I believe you have to build the network driver as a module - did they bother to do that? I can't imagine Red Hat's installer asking "How many network cards do you have?", but then again maybe it does; I'm a Slackware kind of person...
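    For what it's worth, the modular setup would look something like this hypothetical /etc/conf.modules fragment (aliases only - I won't guess at the eepro100 option flags for forcing 100Mbit):

```text
alias eth0 eepro100
alias eth1 eepro100
alias eth2 eepro100
alias eth3 eepro100
```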
  • by woods ( 17108 ) on Tuesday April 13, 1999 @08:55PM (#1935445) Homepage
    The Beowulf newsgroup had a couple of short threads a couple of months ago about consistently abysmal performance on Red Hat 5.2 SMP machines running 2.x with > 512 MB of RAM. The two threads [ one [], two []] deal with users who had horrendous performance problems with their new machines (both running 2.2.2, the same kernel as in the report) when they used more than 512 MB of RAM, but the performance jumped right back up when they used 512 or less. Check out the articles to see how bad the performance was; it's pretty surprising, and it presents an interesting opportunity for detractors of Linux:

    Linux definitely has some hardware/kernel combinations that would seem OK by design on paper, but exhibit peculiar behavior in practice, especially with SMP. I wouldn't rule out the possibility of the testers (or financial backers) hand-picking kernels/hardware configurations that could affect results while seeming perfectly viable to the layman.

    It seems very likely to me that if Microsoft did not outwardly donate the hardware to the testing company, they at least made suggestions on its configuration. The open nature of linux development and bug disclosure could easily be used by companies wishing to stage biased demonstrations; Microsoft almost certainly does a thorough job tracking linux kernel development and bug reports.

    -- Scott
  • Now, here's the complete reply they got.

    In article , wrote:

    > We're considering using Linux + Apache as a web server.

    Excellent choice.

    > The hardware is a 4-processor 400 MHz (Xeon) server with 1GB of ram, a RAID
    > controller, and six disks. We have Redhat 5.2 installed and compiled an SMP
    > version of the 2.2.2 Linux kernel.

    I hope it's not too late to change your hardware, because your box is a
    complete waste of money. SMP gives you *nothing* with regards to web
    serving, and it makes your OS flaky as all hell. The RAM is nice, but the
    processor speed is overkill and having 4 of them is just plain wasteful. The
    network card would saturate completely before you even came remotely close to
    using up the resources of even a single P2 200Mhz.

    > For the web server we used the latest 2.0.3 version of Apache.

    Stick with what works. I'd use 1.3.4, as it's generally considered more
    'stable'. You don't *always* want to be "bleeding edge".

    > The scenario: we're bangin' on this web server with a bunch of clients
    > to try and get a handle on its capacity. Simple static HTML requests,
    > no heavy CGI yet.

    Another suggestion: mod_php3. I guarantee that if you ever see large
    amounts of traffic, CGI will rapidly become your worst nightmare. There are
    a variety of _internal_ Apache modules that give you everything CGI can do,
    but faster, better and more efficiently. Keep in mind that CGI requires you
    to fork() another process to handle each web request, which can very quickly
    run you up against the process limit on a heavily loaded machine. PHP3 is a
    PERL-like, C-like programming language that's relatively lightweight. You
    can download the sources from, where they also provide
    instructions on how to build it into Apache.

    > The problem: the server performs well, delivering in excess of 1300
    > HTTP GET requests per second. But then performance just drops WAAAY
    > off, like down to 70 connections per second. We're not swapping,
    > the network isn't saturated (4 x 100Mbit nets), disks are hardly used,
    > but the system is just crawling. Neither vmstat nor top show anything
    > unusual. No error messages in the web server. Its puzzling.

    Try various flags to netstat, see what they say. If you could post the
    details of several different commands, that would be helpful in diagnosing the problem.

    > Any ideas? Any tips, suggestions, or pointers would be appreciated.
    > Thanks!

    What type of network load do you expect to see on your box in the long run?
    What type of applications does it need to run (other than Apache and its
    modules)? I know it's blasphemy in this group, but if you're just doing "raw"
    webserving (no database interaction) you'd see *much* better performance with
    some variant of BSD (for example, FreeBSD from If
    you're more into running a K-rAd k00l website with lots of doo-dads and gizmos
    (and don't care about performance under heavy load), then Linux is your best bet.

    -Bill Clark

  • Anybody know what this does:

    Used the affinity tool to bind one NIC to each CPU
    (ftp://ftp.microsof []

    If this does what I THINK it does it would explain a lot.
  • Say I put /home/samba on as a public SMB share. Also say that there is a symlink called "root" in this directory that points to /. Then someone accessing this share can "cd root" to access the whole filesystem. Setting "wide links = no" prevents this by causing samba to check if symlinks are outside the share before following them. However, NT4 doesn't have symlinks at all anyway, so in comparing the two, it is acceptable to just delete all symlinks from SMB shares (or at least all those that point outside the share).
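    A quick shell illustration of the escape being discussed (hypothetical paths; any Linux box):

```shell
share=$(mktemp -d)         # stand-in for the exported share directory
ln -s / "$share/root"      # the symlink pointing outside the share
readlink -f "$share/root"  # resolves to /, i.e. the whole filesystem
```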
    Why is anyone surprised at this? Microsoft attacks anything that is a threat. Microsoft has always attacked strong competitors with overwhelming market force. It will consistently produce more evidence of how poorly its competitors' products 'truly' stand against its own.

    Their corporate methodology has been clear since their beginning. They assimilate the competition's strengths into their products, and then they state that their product is the superior product. Their tactics have always been obvious and simplistic, and they have also been very effective - until now.

    They face a quandary with Linux: how does a business compete with a product that has no specific vendor to attack? How do they compete with a product that is communistic in nature? It is more than their product's competitor; it is becoming their corporate nemesis. They cannot overwhelm something that has no boundaries, that is developed without regard for specific profit, and that has their own corporate policy at its core: 'Be the Borg' - take the best of your competition's strategies and products, and make it part of your own structure. Anyone who has followed the history of the computer's evolution will remember that Microsoft started as a forced progression of business policy into non-mainframe OS software development in the late 70's. Anyone who remembers the early days of the PC (or microcomputer, as it was then known) will remember that the idea of 'licensed' software that was the property of only the company that created it was initially laughed at as unworkable or unsustainable, but Microsoft succeeded in making that policy work. Microsoft grew rich on that one idea, and it is the reason that Microsoft has been able to achieve dominance in the OS arena.

    BUT, Linux has changed something important, and Linus probably didn't realize at the time how important what he did was, or what part of it was important. Linux by itself would never have had the possibility of competing against any dominant OS. It would have been another hacked OS that would never have left the collegiate world. It IS the open-source licensing structure that has added the needed element to the software, turning it into an upheaval in software design methodologies. It's the open-source piece that has turned a lot of heads due to its impact on the software industry. This is because Linux (and through Linux, the open-source licensing structure) is an evolutionary change in software design. Linux started as a free, cooperatively evolving OS that has returned the unstructured human element to the process of business software development. Sadly, this human element has always existed in the academic community, but died in the business community with the domination of Microsoft as the dominant business model in the software industry. The corporate structure that has grown up around Linux is just a natural reaction of capitalism to anything that has the ability to produce revenue, but Linux remains a communistic product by its licensing structure. And that's a good thing; it's the only way it will be able to remain a strong and vibrant competitor to Microsoft for the long term.

    So, in the end, the analysis of Linux vs. Microsoft is a null argument. Microsoft cannot compete with something that is not a product but a movement. Linux is fundamentally restructuring corporate policy towards software development. I just hope Linux's impact will survive the greed that will try to control its nature while the open-source movement grows up.

    And I hope the 'Borg' in Microsoft can change its ways so that it can allow another dominant player into the game without feeling the need to annihilate it.

    -- The violin is playing in the background for those who are listening to it too.
  • The time lost every time they had to reboot NT.

    'nuff said.
    OK.. no one has mentioned it that I've seen so far, but it kinda glared out at me: the fact that he mentions performance "in excess of 1300 req/sec" before it falls down.. then in the report, that somehow shrinks to 1000 req/sec??

    Kinda makes (me at least) ya go Hmmm...

    Anyway... I'm looking forward to results on the same kind of hardware, tweaked by people who know how. Hope they get that re-test project going!
  • I'd like to see this sort of test with a third, control group thrown in for good measure, such as Solaris. I personally couldn't give two hoots about NT vs anything, it's not unix which is all I want on my systems, and I'm sure there's a lot of managers out there who think the same.

    NT and unix are too disparate for a balanced comparison, high-end Solaris vs Linux comparisons would offer a clearer perspective in the real world.
  • According to Alan Cox (and he should know) 2.2.2 has a known TCP flaw that is triggered when talking to NT servers. Apparently, this bug still exists.

    The Linux developers care about this issue, but not so many of them have NT running at home... :)
    They claim that they contacted Red Hat for help with configuring the kernel and Red Hat wouldn't help. Makes me wonder... They also said they posted in various newsgroups and didn't receive any help. This goes against all of my experiences with the newsgroups. This really smells of FUD... You have to give the boys from Redmond credit; they are very good at promoting their products and FUD.
  • by blach ( 25515 ) on Tuesday April 13, 1999 @07:56PM (#1935527)
    Look at the OS Configurations:

    For one, NT used 1GB of RAM while Linux used only 960MB. Surely they could have passed the parameter mem=1024M to the kernel ...

    Additionally they tuned tcpwindowsize under NT to 65536, and adjusted buffers on the network card to 200 (from 32).

    They made no TCP/IP stack adjustments OR adjustments to the network cards under Linux.

    Just look at the sections explaining the myriad of things they did to "tune" NT. Then look at Linux: enable NFS, the following daemons were run, blah blah. They didn't bother to tune anything.
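    The Linux-side knobs they could have touched do exist in 2.2, also via /proc - a rough analogue of the TcpWindowSize tweak (root required to write; the 262144 is just an example value):

```shell
# Ceilings for socket receive/send buffers:
cat /proc/sys/net/core/rmem_max
cat /proc/sys/net/core/wmem_max
# Raising them (as root), e.g.:
# echo 262144 > /proc/sys/net/core/rmem_max
# echo 262144 > /proc/sys/net/core/wmem_max
```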

    From my meager understanding of web servers, it is not surprising that Apache came in a bit behind IIS when serving static pages. My experience is that for true static content many web servers, like IIS and Lotus Go, are faster. Where Apache gets major kudos is when running CGI, as its fork-and-exec handling of the script environment seems much better. Try running a good hunky Perl program on each request and see what you get. :)

    The other thing is that although I am a big fan of Linux on powerful machines, its greatest charm is that you can run it on a 486 with 16 megs of RAM and have a relatively well-behaved web server that can stay up for 60+ days with no user intervention.
    If the numbers are incorrect, the only thing that's going to convince me is an actual reproduction of the test; second best is technical information, which is what some posters are providing.

    Your own contribution ("I've never read a bigger pack of lies") doesn't tell me anything useful and only takes away credibility from the criticism posted by others. You are damaging this forum.

    Argue the facts, not the circumstance that the report serves Microsoft's interests.
  • by BryanClark ( 29840 ) on Tuesday April 13, 1999 @11:25PM (#1935559)
    The fact that Microsoft won't use their own servers on one of their own sites, shows how much they rely on their product. If an NT server isn't good enough to handle their own web services, why should it be good enough to handle mine?
  • by ADL ( 30081 )
    This one got the charts right:
    Linux Up Close: Time To Switch []
    See "RELATED LINKS": The Best Windows File Server: Links & Linux Is The Web Server's Choice

  • You have to edit a few lines in the kernel to get it to support more than 960 MB of physical memory.
    What I found interesting was that they apparently didn't make a separate swap partition for the linux box (they said 1 OS partition and 1 data partition)... hm...
  • If you read the article, you'll see that the NT box was crippled down to 1Gb.

    Windows NT Server 4.0 Configuration
    Windows NT Server 4.0 Enterprise Edition with Service Pack 4 installed
    Used 1024 MB of RAM (set maxmem=1024 in boot.ini)
    Server set to maximize throughput for file sharing
    Foreground application boost set to NONE
    Set registry entries:

    See what I mean? It used 1024 MB.
  • From Andrew Tridgell (original author of Samba):

    They set "widelinks = no" now I wonder why they did that :)

    My guess would be so that you would have a secure system, which is what 99% of admins not trying to rig benchmark results would arguably prefer.

    widelinks = yes gets you hit on the nose with a soggy newspaper, if you're an admin.
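For anyone wondering what setting is being argued about: in smb.conf the parameter is spelled "wide links", and the safe configuration the admins above would pick looks something like this (the share name and path are made up for illustration):

```
; smb.conf fragment -- illustrative share, not Mindcraft's actual config
[export]
    path = /home/export
    ; refuse to follow symlinks that point outside the share tree
    wide links = no
```

Turning it off costs Samba some per-open path checking (hence its appeal in a benchmark), but leaving it on lets any user symlink their way out of the share.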
  • Wow! It seems like all of our true Anonymous Cowards have come out of their holes.

    I must say I find it amazing how quick a lot of our fellow Slashdotters are to judge. I work on a production team for a major high-tech company. A lot of our database servers run NT; anything that is doing security runs some flavor of Unix or Linux.

    Just today my mentor (yes, I'm an intern, and 16 at that) was telling me we may have to add 3 BSDI servers to our network. I was floored! We already have about 5 different OSes on our network (that's counting different flavors/versions of Unix and Linux).
    When I asked her why, the answer was pretty simple: the program our grand high mucky-mucks had designated for e-commerce was optimized for use with BSDI. Now I have to learn yet another OS.

    I enjoy the learning, it's why I'm here, but I have learned a few things. SQL doesn't work well on our Linux boxes. NT4 has a few security holes that NT5 doesn't. You can fix those with SP4, but then our back end doesn't work.
    I've learned how to set up security on a Unix box in such a way that it will make the NT boxes more secure.
    I have also learned that most OSes are really good at a specific task, and that zealots will use that to the best of their advantage.

    Every OS I have used has had some really positive points and some real drawbacks. Linux is great, but I wouldn't want my grandmother using it. The Mac OS is great when you want simplicity or are doing major graphics. Windows, on the other hand, is great if you want to do just day-to-day stuff. I would hate writing a term paper in vi or Emacs. Heck, I would even hate doing it in Pico! But Word works really well, and Notepad can be the quick-and-dirty programmer's best friend, especially with HTML stuff that you need NOW.

    Everything has a positive, everything has a negative. Let's not get too angry at those who are willing to actually admit they agree with something we don't. If everyone were like that, this whole world would be like Kosovo.
  • Linux, as we all know, runs very well on single-processor machines. It also runs well on 2-processor SMP systems. But above that, the principle of diminishing marginal returns begins to kick in hard. NT simply makes better use of resources on SMP systems than Linux currently does. This is not surprising, since few Linux developers own a 4-way SMP system, whereas Microsoft can buy all they could ever want.

    This is a rigged study. Apple pulls similar tricks when they compare their G3 systems to x86 machines. If Mindcraft were to compare Linux and NT on a single- or dual-processor machine, I'm sure the results would be quite different. As SMP systems become more affordable and commonplace, I'm sure that Linux will catch up and likely surpass NT in this area.

    Microsoft claims that NT will scale up to 32 processors, but the truth is that it begins to decay rapidly above 4 processors, which is why the version that is supposed to work with more than 4 processors is rare and not commonly available. Mindcraft would do well to throw Solaris into this comparison, but they won't, because NT would get its clock cleaned.
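The diminishing returns described above can be sketched with Amdahl's law: if a fraction s of the work is serial, n CPUs give a speedup of 1/(s + (1-s)/n). The 10% serial fraction below is an illustrative assumption, not a measured figure for the Linux 2.2 kernel:

```python
# Amdahl's law: with serial fraction s, speedup on n CPUs is 1/(s + (1-s)/n).
def speedup(n_cpus, serial_fraction=0.1):
    """Ideal speedup on n_cpus when serial_fraction of the work can't parallelize."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_cpus)

for n in (1, 2, 4, 8):
    print(f"{n} CPUs -> {speedup(n):.2f}x")
```

Even with only 10% serial work, 4 CPUs buy roughly a 3x speedup and 8 CPUs well under 5x, so any extra serial overhead (coarse kernel locking, say) hits a 4-way box much harder than a uniprocessor.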
  • I agree their methods and configuration must be questioned. In defense of Linux, this month's Microsoft Certified Professional Magazine has a focus on Linux/Samba and NT (front cover). The person who wrote the article works for SGI and is on the Samba programming team -- all this in a PRO-Microsoft mag. He even states that in their tests NT 4.0 is faster when serving up to 16 client requests, but under HIGHER LOADS Samba/Linux performs better. THIS WAS IN BLACK AND WHITE.

    Another thing about Mindcraft's testing: "...Only on a lightly loaded server, with 1 or 16 test systems, does Linux/Samba outperform Windows NT Server, and then by only 26%..."

    This is contrary to the statements made in the May MCP Magazine. I find that they both have their strengths, and as an MCSE I have seen NT systems that are properly configured perform their server duties well (with careful observation and maintenance). I have also seen Solaris/Linux/AIX all perform as well as or better in the same environments as NT.

    Just 2 cents from yet ANOTHER MCSE & Linux user (bet you don't see that every day...)
  • by Timbo ( 75953 ) on Tuesday April 13, 1999 @07:50PM (#1935646) Homepage
    Please submit any inconsistencies you see in this document (and if you don't see any, please shoot yourself in the head). They are readying a response as we speak.
