NT faster than Linux in tests
Mike_Miller writes "The latest Mindcraft study claims that Microsoft Windows NT Server is 2.5 times faster than Linux as a file server and 3.7 times faster as a web server. Their white paper shows that NT beats Linux on every test." Anyone have a critique?
Critique? Sure. (Score:1)
Note the hardware (Score:1)
I'll bet if the test were repeated on a couple of single-processor boxes with standard IDE disks, the results would be very different.
I'd walk away from this test with the following conclusion: Linux needs more tuning for higher-end hardware.
Of course, note the spin of the article: If you don't read closely, it looks like NT is 2.5 times faster than Linux in some sort of overall sense.
Targeted benchmarks (Score:1)
I note that the tests & machine vary widely from previous Netware vs NT tests - why?
No mention of relative cost is made which is strange as cost/performance is a rather important factor (how much is NT with 140+ client licenses anyway?)
Why take a machine with 1GB of RAM - is this typical of the average PC server?
MS are simply hoping that the media will simply report that 'NT is 3.5 times faster than Linux', as they assume (rightly?) that's all corporates will remember. The only answer is to ignore the Mindcraft study and keep publishing (carefully selected, of course) benchmarks showing Linux's speed. That's all that counts in the end.
Mindcraft's post to comp.infosystems.servers.unix (Score:1)
I hope it's not too late to change your hardware, because your box is a
complete waste of money. SMP gives you *nothing* with regards to web
serving, and it makes your OS flaky as all hell. The RAM is nice, but the
processor speed is overkill and having 4 of them is just plain wasteful. The
network card would saturate completely before you even came remotely close to
using up the resources of even a single P2 200Mhz.
DID ANYBODY GO TO THE MAIN PAGE? (Score:1)
They are just killing their own credibility, not that of Linux.
Apache and MMAP - Linux and slow start (Score:1)
I do know Linux supports TCP slow start; 2.2.x has this. Does NT? If not, is this what explains the difference?
Trever
They disabled keepalives on Apache, but not IIS (Score:1)
Set OPTIM = "-04 -m486" before compiling
Set EXTRA_CFLAGS=-DHARD_SERVER_LIMIT=500
They set a HARD LIMIT of 500 connections and then mourn Apache crapping out at 1000 connections? Puhleeze!
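For reference, on Apache 1.3.x the compile-time ceiling and the runtime limit are two separate knobs; raising them would look roughly like this (a sketch with an arbitrary value of 2048, not anything from the Mindcraft paper):

  # in src/Configuration, before running ./Configure && make:
  EXTRA_CFLAGS=-DHARD_SERVER_LIMIT=2048

  # in httpd.conf, keep the runtime value at or below that ceiling:
  MaxClients 1024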
Wankers indeed!
ehh... reading problems?? (Score:1)
http://www.gcs.bc.ca/bem/editorials/nts4rhlinux
This page tells exactly the reverse story, i.e. RH 5.2 is faster than NT4 with SP4.
maybe NT IS faster than Linux (Score:1)
"Shut the fuck up! NEXT!"
-Coward
Repeating History (OS/2 experience w/MS) (Score:1)
Deja Vu, all over again.
strange results / performance issues (Score:1)
When NT 3.51 first came out, I was a big proponent (vs Novell), due primarily to costs. As an educational institution, we couldn't afford Novell, it was that simple. However, over the years, I have learned that NT just can't handle it. As soon as you throw 30 workstations at an NT Server, it starts to grind to a halt.
My real world tests show that NT is indeed faster than Linux at first, but soon starts to bog down to a point that the only real option is to start adding more servers.
This is what Microsloth doesn't like you to know about. Only when it's too late and you are forced to buy more servers and clients, are you awakened to the TRUTH.
One last comment.
Nobody has mentioned cost analysis and ROI in any of these benchmark studies. For an enterprise/institution, what is the total cost of ownership breakdown between Linux and NT?
I'd be willing to bet that if/when corporate America figures out that they could save tons of dough and actually increase the usage of their servers, Microsloth will be on its ass.
Just my 2 cents.
Different people, different ideals... (Score:1)
This seems to be a variant on the wishful thinking "ignore them and all they say and the problem will go away". It does not have to be like you describe. Allow me a simple thought experiment to demonstrate my point:
Imagine two systems. System (A) is a low end server like you describe, say a Pentium II 200 with 64 mb ram and a 10 gig drive. System (B) is one of the systems they used in this test, or any given quad processor xeon with 5 drives and 4 ethernet cards.
Strip these systems of any OS differences. Which one has the better theoretical performance? Even counting hardware designed for certain OS features, system (B) is the clear winner on hardware alone.
Imagine that (A) running Linux outperforms (B) running NT. What an amazing feat that would be, but just imagine that Linux is that good and NT is that bad. Now put Linux on system (B). You see where this is going. Even allowing for M$-manipulated "independent" testing agencies to tweak out the performance from Linux and tweak in extra performance for NT, there shouldn't be much of a contest. Linux must absolutely shine when it is given the hardware to do so.
My point now is that linux has been dealt a credibility blow and the original post in this thread is spot on. Linux must have beefier SMP support and better RAID support as well. And these items must be available "out of the box", even if only in certain specialized distributions.
Let's take this as a challenge and run with it.
Dan
Mindcraft's post to comp.infosystems.servers.unix (Score:5)
(If this was posted earlier, I didn't see it...)
Can anybody here respond to this?
Hi Everybody,
We're considering using Linux + Apache as a web server. The hardware
is a 4-processor 400 MHz (Xeon) server with 1GB of ram, a RAID controller,
and six disks. We have Redhat 5.2 installed and compiled an SMP version
of the 2.2.2 Linux kernel. For the web server we used the latest 2.0.3
version of Apache.
The scenario: we're bangin' on this web server with a bunch of clients
to try and get a handle on its capacity. Simple static HTML requests,
no heavy CGI yet. My Apache server is tuned up, MaxClients is 460.
I recompiled with HARD_SERVER_LIMIT set to 500. Limit on number of
processes is 512, limit on file descriptors is 1024.
The problem: the server performs well, delivering in excess of 1300
HTTP GET requests per second. But then performance drops WAAAY
off, like down to 70 connections per second. We're not swapping,
the network isn't saturated (4 x 100Mbit nets), disks are hardly used,
but the system is just crawling. If it were saturated then performance
should level off, not drop like this. Neither vmstat nor top show
anything unusual. No error messages in the web server. It's puzzling.
Any ideas? Any tips, suggestions, or pointers would be appreciated.
Thanks!
Mindcraft's post to comp.infosystems.servers.unix (Score:1)
(i'm not familiar with the latest releases of samba and apache, so don't sue me on this..)
Linux is faster than NT... here's the proof (Score:1)
only 970 MB RAM = misconfiguration (Score:1)
APACHE SUX less than you (Score:1)
So, you really mean that you should use pthreads. I could see that. But pthreads aren't nearly as platform-independent as fork() is.
Based on your post ... in fact, based on your subject, I'd say that every one of the Apache Group's programmers is a better programmer than you.
Let me count the ways:
Possible deficiencies (Score:2)
Possible explanations? (Score:1)
One thing that's bothered me through all the hype about Linux sucking, these sorts of "studies," etc.: why are they always run by people who have no clue about the things they want to portray themselves as experts on? Sure, there isn't much to NT: click some Next buttons through wizards, and voila. So they apply that same mentality to Linux, either taking a bare Red Hat (or other distribution) or doing minimal customization. (As for the "recompiling the kernel mucks up the entire system beyond recognition" bit, my guess is they have no clue about bootable floppies, configuring LILO with two kernel images for fallback, etc.)
What about the 960MB memory thing? Just a matter of telling LILO append="mem=1024M" ? I know it freaked when I put in 96MB the first time, only seeing 64MB.
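For the curious, the LILO side of this is a couple of lines in /etc/lilo.conf; here's a minimal sketch (image names and paths are made up for illustration) combining the mem= override with the fallback image mentioned above:

  # /etc/lilo.conf  (run /sbin/lilo after editing)
  append="mem=1024M"               # tell the kernel about the full 1GB of RAM

  image=/boot/vmlinuz-2.2.2-smp    # the new, recompiled kernel
      label=linux
      root=/dev/sda1
      read-only

  image=/boot/vmlinuz-2.0.36       # known-good fallback image
      label=old
      root=/dev/sda1
      read-only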
As others have said, the posts to the newsgroup contained some major flaws: not enough details, etc. That certainly would turn off many potential replies.
Microsoft sponsoring them? Wouldn't their credibility be higher if the sponsors were NOT the manufacturers of the products they are testing? To me that's a major problem. For respect, a study should be balanced and unbiased.
In conclusion, they are lunatics. Plain, simple.
Interesting thing, that... (Score:1)
Ah what a world we live in.
Erm.. (Score:3)
I average close to 60-70 Apache threads running as a regular load on a Pentium-120 with 64 megs of RAM without any problems. Most of those are database-generated, rather than plain file GETs. Someone has been either drinking or got paid some dough..
Performance skewed by a KERNEL BUG fixed in 2.2.5! (Score:1)
Memory allocation: boot.ini vs lilo.conf (Score:1)
Maybe it's just me, but the fact that they went to the trouble of editing the boot.ini but not the lilo.conf is suspicious. Is mem=1024M really that hard? I'm quite certain the feature is documented.
only 970 MB RAM = misconfiguration (Score:2)
A leopard's spots... (Score:1)
> scoffs at this document is because of the
> source.
True, but I notice you qualified that observation with "seasoned." Alas, these same seasoned systems folks aren't making many purchasing decisions and are kept in the back room where their views on reality don't embarrass the suits. (speaking from experience here...)
I've encountered too many people mentally conditioned by Pravda who will discount any and all other studies - no matter how technically solid - in favour of the ones - no matter how technically soft - that support their prejudices. (Sadly, MS apparatchiks are not the only ones guilty of this.)
The entire SITE is suspect (Score:1)
I mean wow! NT/Compaq blows away a UE450 worse than it blows away Linux?!? I guess I should stop wasting my time with this old hat Unix junk and get with the program and join the winning team! You can be sure that Real Soon Now NT will be far more reliable and scalable than any Unix system and I will be straight out of a job if I don't embrace the New Technology and join the marching ranks of brave new world techies towards progress and bliss.
Now
wow.. (Score:2)
Time to wake up (Score:1)
Not a flame... (Score:1)
----
Say what? (Score:3)
Really; the Winbox had most of its services shut off, while the Linbox was running SMB, NFS, etc. My guess is that they were probably hitting those other services while they were taking the numbers.
Besides, this runs contrary to every other (non-MS-paid-for) study I've seen. Mayhaps someone should do some independent verification. Be sure to check whether the Windows numbers were a "demo".
Hey, they lied to Justice; why wouldn't they lie to us?
----
Performance skewed by a KERNEL BUG fixed in 2.2.5! (Score:1)
SMB tests (Score:1)
Here is an SMB test [zdnet.com] on a large machine.
In general there are some areas where Linux lags NT. IIS, for example, outperforms Linux on static page displays because it has a page cache and does not have to always cross the user-kernel boundary to fetch the page (system calls DO have a cost, even though 2.2 sped up the open() call considerably with the dcache). And it may very well be that a well-tuned ultra-high-end NT machine will beat a well-tuned Linux machine at file serving, given that NT will support the full 4gb of memory while Linux only supports 2gb of memory. But this "test" was not such a test -- it compared a well-tuned NT machine against a totally untuned Linux machine.
And of course I'll point out that on tests on more modest hardware, like this [zdnet.com], Linux blows away NT handily. To be fair, that Smart Reseller test was just as biased in its own way as this joke test we're talking about... Smart Reseller chose a machine that's too small for NT to comfortably stretch its legs, albeit that the machine they chose is rather typical of small office web servers.
-Eric
Apache 1.3.4 vs Apache 2.0.3 (Score:1)
I know that I had no incentive to go dig this "will @ whistlingfish" out of his hole. I couldn't make heads nor tails of that posting when it was new, and there's too many other postings to reply to where people actually give useful information about their problem for me to bother with something like that.
-- Eric
Spring comdex (Score:1)
-- Eric
I was wrong, sort of... (Score:1)
2) The 2.2 kernel defaults to 4096 file handles, as vs. the 1024 default for the 2.0 kernel, so it's unlikely that he was running out of file handles.
Still, obviously he did something wrong, because Apache usually does not collapse like that. It simply degrades gracefully, assuming max_clients is set so that you don't thrash the machine to death (and his message says he wasn't thrashing). See what happened when the Slashdot Effect hit the Linux Counter... once he brought down his max, it simply got slow but kept chugging out the requests. Puzzling. Without access to the server logs and httpd.conf files, it's unlikely we'll ever know what or how he did it, though.
-- Eric
Hard to believe. (Score:2)
And: These people are *LIARS*. They say they posted messages asking for help on the Linux newsgroups. There are *NO* messages from the mindcraft domain anywhere on the Linux newsgroups. So I did DejaNews searches of "performance tuning", "performance tune", "kernel tuning", and "kernel 2.2 tuning" between January 1, 1999 and today, and examined the results to see if there were any messages that may have been by Mindcraft researchers (i.e., that referred to performance problems with a large-memory machine). There were *NONE*. Zero. Zilch. Which means that if they did ask any performance tuning questions, they did not use those words in the message.
Anyhow: VA Research already loaned a quad-processor Xeon machine to PC-Week and it blew away NT 4.0 in their SAMBA benchmarks. VA Research's quad-processor Xeon machine is the same machine that we sell, and the same machine that Penguin Computing sells (we all get them from Intel, and then dress them slightly differently once we get them, e.g. VA Research uses a Mylex RAID card while we and Penguin use ICP-Vortex RAID cards). So we already have the benchmark that shows that their SAMBA benchmark is full of ****. But that's not going to matter to pointy-haired bosses because they recognize only those reports and studies that say what they want to hear.
Am I steamed? You bet! I *HATE* liars!
-- Eric
The problem was... (Score:2)
-- Eric
Will @ Whistlingfish (Score:4)
If there had been questions about general tuning of such a large system, that would have solved the problem because someone would have remembered about file_max. But one cryptic query that didn't give enough information to get help does not an honest effort make.
Anyhow: I guess I have to post a partial retraction. They did post a *SINGLE* query to the net.
-- Eric
Mindcraft did similar hatchet job on Novell! (Score:1)
http://www.novell.com/advantage/nw5/nw5-mindcraftcheck.html [novell.com]
Mindcraft admits that Microsoft commissioned the original report.
http://www.mindcraft.com/whitepapers/rebuttal-summary-nts4nw5fil
Do we see a trend here?
Rebuttal on LWN (Score:1)
300MB File Cache in NT? (Score:1)
How did Mindcraft get around this, or was it actually documented in their report? Heh, I guess I should probably read the entire thing first huh? Naa!
Also, after some research on why Apache performance dropped considerably at one point, it does look like they hit the 1024 file descriptor limit. Alan Cox has a patch for 2.2.x that brings this up to 10,000+ and theoretically millions. Check the recent Kernel Traffic mailing list for details. Did they do any research at all for this report? I mean, come on!
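(For the system-wide limits, as opposed to the per-process descriptor ceiling that patch addresses, the usual stock 2.2 knobs look roughly like this - the values below are just illustrative examples:)

  # system-wide file handle and inode limits on a stock 2.2 kernel
  echo 16384 > /proc/sys/fs/file-max
  echo 49152 > /proc/sys/fs/inode-max    # conventionally around 3x file-max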
Good point. (Score:1)
Good point. (Score:1)
Fair Comparison (Score:1)
i.e. a Windows NT 4.0 server running IIS set up by Microsoft employees against a Linux 2.2.2 server running Apache set up by Red Hat people. But we all know that MS would never allow such an unbiased test to occur under their rule of FUD.
Network card?!? (Score:1)
I've heard that NT isn't anywhere near ready for Gigabit anything. Reading in NetworkWorld about some network show, they mentioned talking with several gigabit ethernet vendors who were very unimpressed with WinNT's throughput - and commented that NT had to be specially modified to sustain 400KBytes/sec.
I haven't ever tried NT with gigabit ethernet, but it doesn't surprise me...
Hmmm. Testing (Score:1)
960 megs, 4 CPUs. Hmm. I want to know why any web server is going to need (not to say that you don't want it) that much RAM, or even 4 processors. If we all sit back and think about it, if you really need that much for your server to perform then you probably are running a pretty crappy OS. Granted, NT has its place and GNU/Linux (no flames please) has its. Personally I would like to see a more down-to-earth server tested. How many web sites out there run 4-CPU, 1-gig servers? I say take a poll (of sites running x86-based CPUs) of 50 of the largest sites, 50 of the middle sites, and 50 of the smallest sites, and find out what hardware they run on. Come up with an average configuration, put it together, then install your OSs of choice with experts from all sides to configure them. Then run the tests again. You'll probably see that you end up with Linux winning. From personal experience, NT is no easier to configure than Linux - maybe easier to get installed initially, but not configured. An untrained monkey could install it, but you'd better have time and experience to make it work more than 2 weeks for you without a crash. When you're lucky you get a whole month.
Mithalas
MCSE
Actually - 1Gb vs 1Gb. (Score:1)
Please show me an official, unbiased test showing that with that entry NT will only use 1GB. Plus you should check out what they did with NW5 and the interesting ways they had of configuring hardware.
Mithalas
MCSE
Benchmark (Score:1)
Is this topic really worth writing dozens of postings about? (Guess why we all are posting here...) Well, even if NT were twice as fast (which it is not), would it matter to anyone?
Linux just started to gain sympathy even in business - some people (who said they would not risk using a hacker system some months ago) now say they can't afford proprietary systems like M$ NT.
Another thing is that it's nearly impossible for a bunch of well-paid M$ coders to improve a server app the way thousands of (free) Linux coders do.
What might be wrong (Score:1)
i think mindcraft is part of ms, all they do is advocate ms products
Biased? (Score:2)
Let's see here... they were using a ZD program to test SMB performance? To a certain degree NT would have an edge, given that MS made the SMB protocol and that it is a Ziff-Davis program. The headline doesn't even specify "file server". I'm sure Apache would shred NT as a net server, performance-wise and in uptime.
tests by linux newbies (Score:2)
I suspect that the people conducting the test were not proficient Linux users/administrators. The Linux installation followed default settings except for kernel automounting. The fact that
"NFS file system support = yes "
makes me wonder how the drives were partitioned (RAID configuration as well)
The test also mentioned
"The Linux kernel limited itself to use only 960 MB of RAM"
Which is a subject discussed here last week.
The NT installation was not default, the Registry was directly changed.
"Server set to maximize throughput for file sharing"
"Set registry entries: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Servi
\NDIS\Parameters\ProcessorAffinityMask=0
Tcpip\Parameters\Tcpwindowsize = 65535"
"Used the affinity tool
I do not consider the test valid.
The NT installation was tuned (if even slightly)
The Linux installation was not.
We need a third party with a thorough understanding of both OSs to administer an accurate test.
Other differences... (Score:2)
Not regarding the web server, but I noticed that they set NT's pagefile to 1GB and didn't mention Linux's swap configuration at all.
Taking into account everything they did to misconfigure Linux, it doesn't surprise me that it didn't perform spectacularly. It's like turning off L1 and L2 cache, turning off shadow RAM, and setting 4 wait states on a Pentium Pro 200 - it turns into a 386!
Stuff like this ought to let people see what Microsoft's game plan really is (assuming they even have one, after reading the Halloween docs (: )
Memory (Score:2)
If they had availed themselves of a Linux expert (or gotten Linux pre-installed by a good VAR), they would have tuned the kernel to at least use 2G of RAM. All you have to do is change __PAGE_OFFSET to 0x80000000.
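A minimal sketch of what that change looks like on a 2.2.x tree (file location from memory, so treat it as approximate; rebuild the kernel afterwards):

  /* include/asm-i386/page.h - move the kernel mapping down to 2GB */
  -#define __PAGE_OFFSET   (0xC0000000)
  +#define __PAGE_OFFSET   (0x80000000)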
Single CPU kernel? (Score:2)
NT 4.0 is 2.5 times faster than Linux as a File Server and 3.7 times faster as a Web Server
Compare this with ZDNet's findings on the same Linux vs NT benchmarks.
The same thing could have happened if SMP were "accidentally" left out of the Linux machine's kernel.
I doubt that any amount of "tuning" would generate this type of difference. With Apache, perhaps real-time IP resolution would do it, but I don't see how that would figure in with Samba.
Some notes on Mindcraft test ... (Score:4)
Set OPTIM = "-04 -m486" before compiling
on 4 x 400 MHz Pentium II Xeon
Samba 2.0.1 Configuration
wide links = no
That creates a bottleneck in Samba performance, see here [linuxtoday.com]
the following processes were running ... (kswapd), /sbin/kerneld,syslogd,
not sure if that means something, but why did they run kerneld with a 2.2 kernel?
On NT side:
Tcpip\Parameters\Tcpwindowsize = 65535
that gives a huge boost to network performance, but only on a local network where packets don't get lost
Set Logging - "Next Log Time Period" = "When file size reaches 100 MB"
Logs on the F: drive (RAID) along with the WebBench data files. So basically the server does much less logging than Apache - and since it's many small requests, and Apache writes its logs to a non-RAID disk, all together it'll be a big bottleneck.
Anyone notice anything else wrong with this benchmark?
From all my experience it looks like pure crap.
P.S. Why did they need NFS? inetd?
only 970 MB RAM = misconfiguration (Score:2)
People complain about tests like this and DH Brown, but really only somewhat out-of-box solutions should be tested.
We work with you to define the goals (Score:2)
As their main web page states, they define the goals before they test. The only goal was to say NT runs faster than Linux. I've never heard of this company before. I now know why.
Some notes on Mindcraft test ... (Score:3)
wide links=no has been explained on other threads. It certainly slowed down performance by an unreal amount. This is a paranoia security measure that apparently some admins would use. I don't know that it's fair to assume that they put this in solely to skew the results.
As for kerneld, inetd, NFS, etc.: all right it's unnecessary, but will use under 1MB of RAM and under 0.1% CPU most likely. I don't see this as an issue.
My best guesses for the appalling results are something like this:
- the wide links=no thing. NT doesn't have to worry about symlinks. I think this is unnecessary in pretty darn well every case. This could either be intentional malice on the pro-NT side, or inexperience/a mistake. Either way, it would be nice to see some numbers with this turned on.
- pure speculation here, but they may have set up Apache to do real-time hostname lookups. This is an absolute no-no for any serious server (see the one-line httpd.conf fix sketched after this list). Again, possibly inexperience or a mistake.
- the >512MB RAM Linux bug. I've heard horror stories, and I've read people with no problem at all. Also, I believe this was a problem with PIIs only, and they were using Xeons in this report. Who knows.
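Regarding the hostname-lookup guess above: the fix, if that's what happened, is a single directive in Apache 1.3's httpd.conf (sketch only; off is actually the shipped default, so it would have had to be turned on deliberately or by a bad template):

  # httpd.conf: don't do a DNS lookup for every hit, log IP addresses instead
  HostnameLookups off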
Anyway it appears to be a bad combination of very silly yet somewhat understandable (for newbies) software misconfigurations, and some bad choices in hardware. Which brings me to another point: quad Xeon for a server? My K6-166 could handle a few thousand hits a second I'm fairly certain. Adding more processors will only slow things down when you're dealing with file serving.
It's hard to say whether or not they did this intentionally. It's fairly obvious that they didn't know exactly what was going on with Linux. I'd say Microsoft has a list of hardware that they know works well, and when these people asked for sponsorship, some Microsoft people said "OK here's some hardware that we know works. BTW doing this and this and this might help out your performance". Not to say that Microsoft went out of its way to hurt Linux, but they probably know what works best on their own systems.
Erm.. (Score:3)
only 970 MB RAM = misconfiguration (Score:2)
kernel patches my ass..
MS Sponsered this (Score:2)
If you look at the other whitepapers that this company has done it is very evident that they are highly biased towards NT.
Just look at the SMP Ultra Sparc machine getting beat 4x over by some NT PC.
Over at linuxtoday.com one of the Samba team members gives other information
emad
The sanest way to an objective test (Score:2)
The only way to work this out (apparently) is if representatives of "both sides" came together and defined parameters of a test. They should specify things such as what hardware the server should run, what services should be running (and more stuff that you sysadmins know about for a living). The group should also lay down specifications on how this should be measured. Then, these specifications should be publically reviewed, and revised if necessary.
Then there should be a test for extremists on both sides to tune the OS of their choice to achieve the best possible performance on the specified platform.
If a vendor (say, Dell, Compaq, IBM, or one of the other companies that now presumably deliver both OS's) could put up a number of similar boxes and have a "tweaking contest" on a software convention or whatever, that would be even better. However, I cannot believe for a second that Microsoft would have the balls to allow such a shoot-out...
Linux is not Mindcraft's only victim (Score:2)
Chilli
My experience is MUCH different... (Score:2)
This machine serves over 100 clients, and it functions as a primary domain controller running Samba 2.0.3. It has worked phenomenally for the all-NT network my client has, and it also serves email, IMAP, POP3, and is a web server for all the users here. It also does a myriad of other tasks, and the load never even hits 1. And this thing is a less powerful machine than they tested, but it can serve over 100 clients with ease.
I don't know where they cooked up the figures they have, but this server gets plenty of use [alliedtours.com], and it's never buckled or given me any problems setting it up.
By the way, does anyone know where I could go to find out how to increase the maximum number of files, and/or to further tune this machine, because I've had a couple of small problems with running out of file handles for the whole system. Anyone have any suggestions for a site I could go to?
Sponsors (Score:4)
Mindcraft Certification
Mindcraft, Inc. conducted the performance tests described in this report between March 10 and March 13, 1999. Microsoft Corporation sponsored the testing reported herein.
Looks like you can buy anything you want with enough money. It doesn't make it a true indication of a real-world situation.
I think that there's enough evidence to the contrary already out there, and this will only serve to discredit Mindcraft.
Also, it seems they crippled the Samba Server (Score:4)
From Andrew Tridgell (original author of Samba):
They set "widelinks = no" now I wonder why they did that
In case you haven't guessed, that will lower the performance enormously. It adds 3 chdir() calls and 3 getwd() calls to every filename lookup. That will especially hurt on a SMP system.
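For context, here's roughly where that setting lives in smb.conf (the share name and path below are made up, and yes is the shipped default for wide links):

  [data]
      path = /export/data
      read only = no
      wide links = yes    # default: follow symlinks outside the share, no extra chdir()/getwd() per lookup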
Responding Calmly and with Dignity (Score:2)
With the amount of equipment involved, I believe it would take VA Research or a company of that ilk to try a similar test.
(Could we do a test on a less expensive set of equipment?)
If the results of the original report are not reproducible, then what they did is bad science. I think that trying to reproduce the results, but with people who know how to optimize the Linux set-up (and be fair and optimize the Windows set-up too), would do much more for how Linux is perceived than our doing a lot of name calling and questioning the motives of Mindcraft.
The point is, if everybody else who does the test gets completely different results, that will be all we ever need to say.
Let's respond in a way befitting the wonderful operating system that Linux is.
Mindcraft and Novell (from the linuxtoday.com site) (Score:2)
The linuxtoday.com story also provides a link to an article about a similar incident Novell had with Mindcraft.
http://linuxtoday.com/stories/4937.html
http://www.novell.com/advantage/nw5/nw5-mindcra
The one on the Novell website is especially informative. It is the exact same situation, in which results published by Ziff-Davis show NT at a disadvantage, but when Mindcraft does the test, NT comes out ahead!
Look at the OS configurations (Score:2)
I'd beg to differ here - passing mem=1024M via LILO will cause Linux to address that memory, provided the system has that much addressable space.
--
NIC tuning (Score:2)
I also noticed that the NT box was tuned by someone who is obviously very well versed in NT internals. The Linux box appears to be an out-of-the-box install, with the proper settings turned on. They didn't even include the
One more thing, notice the cache TTL and max number of open files for IIS. Both are set to very high settings, which will ensure that files will not be paged out during the runs.
Read the Apache documentation (Score:2)
------------------------------
Apache Performance Notes
Author: Dean Gaudet
Introduction
Apache is a general webserver, which is designed to be correct first, and fast second. Even so, its performance is quite satisfactory. Most sites have less than 10Mbits of outgoing bandwidth, which Apache can fill using only a low end Pentium-based webserver. In practice sites with more bandwidth require more than one machine to fill the bandwidth due to other constraints (such as CGI or database transaction overhead). For these reasons the development focus has been mostly on correctness and configurability.
Unfortunately many folks overlook these facts and cite raw performance numbers as if they are some indication of the quality of a web server product. There is a bare minimum performance that is acceptable, beyond that extra speed only caters to a much smaller segment of the market. But in order to avoid this hurdle to the acceptance of Apache in some markets, effort was put into Apache 1.3 to bring performance up to a point where the difference with other high-end webservers is minimal.
We should learn from this (Score:5)
If Linux is going to be treated as a serious operating system by the majority of the IT community, it's going to have to step up to the plate and demonstrate scalability and performance which does rival NT server in this area. Most of our knowledge about Linux-vs-NT performance is somewhat anecdotal -- we haven't really "put our money where our mouth is" and shown objectively that Linux can outperform NT in these areas.
Rather than dismissing this study as FUD, I think we could learn a few valuable lessons from it. We should seek to understand why the benchmark results weren't as great as we would have liked. We should fix any obvious bugs or misfeatures in Samba, Apache, and the Linux kernel which stood in the way of higher performance. And we should strive to improve the entire system to make it a true NT rival.
We have a lot going for us. First of all, we can innovate at a much more rapid pace than Microsoft -- so hopefully within just a few short months (and I'm being pessimistic!) we could demonstrate a high-performance Linux file and Web server which kicks NT's butt all over the place.
Nobody said building a high-performance, scalable Internet server operating system was easy. Let's get to it!
Matt Welsh, mdw@metalab.unc.edu
results have little practical significance (Score:2)
Pretty much all high-traffic sites have dynamic content. They are not limited by the kind of web server performance these systems measure, but by the technology you use for generating the dynamic content (Perl, CGI, Servlets, databases, etc.).
Throughput of more than a few megabits per second is also pretty academic, at least for Internet sites and most intranet sites, simply because the network can't handle more than that.
Furthermore, Microsoft has been foremost in doing funny things with their TCP/IP implementation, both on their servers and on their clients, to look better on these kinds of benchmarks. If you look at the TCP/IP specs, it's actually impossible to achieve the kinds of hit rates they claim with a compliant implementation. Microsoft also seems to have done other things with timing and sequence in the past that made their systems look good and other systems trying to interoperate with them look bad (accident? you tell me...). So, even if NT performs better with 95/98 clients, that doesn't necessarily imply that NT is a more efficient system.
Another problem with their study is that it makes little sense to buy a four processor Xeon machine to run web sites with Linux. Four separate Linux machines are going to be more robust, easier to install, easier to maintain, perform better, and cost less. Of course, with Windows NT, because of the hassles of administering machines and because of the cost of the various software licenses involved, people may end up having to buy expensive, high-end SMP machines. I view that as a strike against NT.
They also don't seem to have tested systems where multiple, different server processes need to run on the same machine (web server, database, etc.). NT seems to perform poorly in those situations.
I can't comment as much about the Samba results. What I do know is that the Microsoft SMB servers we use seem to perform very poorly compared to the Samba servers on Solaris in practice. These are both professionally installed and maintained systems on high end hardware with hundreds of clients.
Altogether, their study strikes me as biased and meaningless. To me, NT isn't even in the running for building large, high-performance web services. For the performance characteristics and functionality that matter on real web servers, a Linux or BSD server farm is a cost effective way to go.
Having now read it thoughtfully (Score:2)
Linux does not appear to have done well. How does this test translate into a real world situation? Isn't Slashdot running on a lesser machine than the test server? And cranking nicely with perl and Apache doing the dirtywork?
Someone has already mentioned the 960 MB self imposed Linux RAM use limit... Looks like a typo more than anything else.
Pretty graphs that an MBA would appreciate looking at.
The testbed was purely Win95 and Win98 machines running Microsoft TCP/IP - how this translates into 'extend and embrace' is interesting.
The one major anti-Linux thing said was that documentation and support were not forthcoming for the kernel and Apache, but the Samba docs were decent. Is this because Samba is a 'clone' of a Microsoft product?
Just how intimidating is the lack of formal documentation, for an enterprise level web server? After all, the people responsible for handling such an animal would surely have readily available access to the 'routine' expertise, and quirks and oddities are not something even Microsoft documents eagerly.
Ah well.. Back to time off.
The cost of such a system (Score:2)
This is just plain stupid.
Articles from our dear friends at Mindcraft. (Score:4)
The net posts asking for help that are mentioned in the white paper appear to have been most likely made under the pseudonym:
will@whistlingfish.net
Use DejaNews.
No one seems to have done that and talked about it. I did; here's the relevant link [dejanews.com] that lists all the messages from this guy on Usenet. Take a look at them and post what you think about them. It seems to me he hit a strange, obscure bug in GNU, Linux, or Apache, and it might have something to do with network adapter or SCSI adapter problems.
Large memory issues... (Score:2)
These tricks probably need to be documented somewhere.
As Linux becomes used for bigger jobs in business, a high quality Kernel Tuning HOWTO would be good. Even if it were a published book.
4 Network cards... (Score:4)
It seems to me that Linux with one network card coming in only 2.5 times behind NT with 4 network cards sounds about right. Give Linux 4 network cards and you get performance that easily blows NT away.
To use multiple NICs you have to build the network driver as a module, I believe - did they bother to do that? I can't imagine Red Hat's installer asking "How many network cards do you have?", but then again maybe it does; I'm a Slackware kind of person...
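For what it's worth, on a stock Red Hat 5.2 the modular route is just a few lines in /etc/conf.modules (the driver name here is only an example, for an Intel EtherExpress Pro 100; whatever cards the test box actually had would differ):

  # /etc/conf.modules - bind all four interfaces to the same modular driver
  alias eth0 eepro100
  alias eth1 eepro100
  alias eth2 eepro100
  alias eth3 eepro100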
strange results / performance issues (Score:5)
Linux definitely has some hardware/kernel combinations that would seem OK by design on paper, but exhibit peculiar behavior in practice, especially with SMP. I wouldn't rule out the possibility of the testers (or financial backers) hand-picking kernels/hardware configurations that could affect results while seeming perfectly viable to the layman.
It seems very likely to me that if Microsoft did not outwardly donate the hardware to the testing company, they at least made suggestions on its configuration. The open nature of linux development and bug disclosure could easily be used by companies wishing to stage biased demonstrations; Microsoft almost certainly does a thorough job tracking linux kernel development and bug reports.
-- Scott
Lies by omission (Score:2)
In article ,
will@whistlingfish.net wrote:
> We're considering using Linux + Apache as a web server.
Excellent choice.
> The hardware is a 4-processor 400 MHz (Xeon) server with 1GB of ram, a RAID
> controller, and six disks. We have Redhat 5.2 installed and compiled an SMP
> version of the 2.2.2 Linux kernel.
I hope it's not too late to change your hardware, because your box is a
complete waste of money. SMP gives you *nothing* with regards to web
serving, and it makes your OS flaky as all hell. The RAM is nice, but the
processor speed is overkill and having 4 of them is just plain wasteful. The
network card would saturate completely before you even came remotely close to
using up the resources of even a single P2 200Mhz.
> For the web server we used the latest 2.0.3 version of Apache.
Stick with what works. I'd use 1.3.4, as it's generally considered more
'stable'. You don't *always* want to be "bleeding edge".
> The scenario: we're bangin' on this web server with a bunch of clients
> to try and get a handle on its capacity. Simple static HTML requests,
> no heavy CGI yet.
Another suggestion: mod_php3. I guarantee that if you ever see large
amounts of traffic, CGI will rapidly become your worst nightmare. There are
a variety of _internal_ Apache modules that give you everything CGI can do,
but faster better and more efficiently. Keep in mind that CGI requires you
to fork() another process to handle each web request, which can very quickly
run you up against the process limit on a heavily loaded machine. PHP3 is a
PERL-like, C-like programming language that's relatively lightweight. You
can download the sources from http://www.php.net/, where they also provide
instructions on how to build it into Apache.
> The problem: the server performs well, delivering in excess of 1300
> HTTP GET requests per second. But then performance just drops WAAAY
> off, like down to 70 connections per second. We're not swapping,
> the network isn't saturated (4 x 100Mbit nets), disks are hardly used,
> but the system is just crawling. Neither vmstat nor top show anything
> unusual. No error messages in the web server. It's puzzling.
Try various flags to netstat, see what they say. If you could post the
details of several different commands that would be helpful in diagnosing the
problem.
> Any ideas? Any tips, suggestions, or pointers would be appreciated.
> Thanks!
What type of network load do you expect to see on your box in the long run?
What type of applications does it need to run (other than Apache and its
modules)? I know it's blasphemy in this group, but if you're just doing "raw"
webserving (no database interaction) you'd see *much* better performance with
some variant of BSD (for example, FreeBSD from http://www.freebsd.org). If
you're more into running a K-rAd k00l website with lots of doo-dads and gizmos
(and don't care about performance under heavy load), then Linux is your best
bet.
-Bill Clark
NIC tweaks (Score:2)
Used the affinity tool to bind one NIC to each CPU
(ftp://ftp.microsoft.com/bussys/winnt/winnt-public/tools/affinity/) [microsoft.com]
If this does what I THINK it does it would explain a lot.
Maybe a dumb question, but why? (Score:2)
Sponsors (Score:2)
Their corporate methodology has been clear since their beginning. They absorb the competition's strengths into their products, and then they state that their product is the superior one. Their tactics have always been obvious and simplistic, and they have also been very effective, until now.
They face a quandary with Linux: how does a business compete with a product that has no specific vendor to attack? How do they compete with a product that is communistic in nature? It is more than their product's competitor; it is becoming their corporate nemesis. They cannot overwhelm something that has no boundaries, that is developed without regard for specific profit, and that has their own corporate policy as its core design: 'Be the Borg' - take the best of your competition's strategies and products, and make it part of your own structure. Anyone who has followed the history of the computer's evolution will remember that Microsoft started as a forced progression of business policy into non-mainframe OS software development in the late 70's. Anyone who remembers the early days of the PC (or microcomputer, as it was then known) will remember that the idea of 'licensed' software remaining the property of the company that created it was initially laughed at as unworkable or unsustainable, but Microsoft succeeded in making that policy work. Microsoft grew rich on that one idea, and it is the reason that Microsoft has been able to achieve dominance in the OS arena.
BUT, Linux has changed something important, and Linus probably didn't realize at the time how important what he did was, or which part of it was important. Linux by itself would never have been able to compete against any dominant OS. It would have been another hacked OS that never left the collegiate world. It IS the Open-source licensing structure that has added the needed element, the one that has turned the software into an upheaval in software design methodologies. It's the Open-source piece that has turned a lot of heads, due to its impact on the software industry. This is because Linux (and through Linux, the Open-source licensing structure) is an evolutionary change in software design. Linux started as a free, cooperatively evolving OS that has returned the unstructured human element to the process of business software development. Sadly, this human element has always existed in the academic community, but it died in the business community once Microsoft's approach became the dominant business model in the software industry. The corporate structure that has grown up around Linux is just a natural reaction of capitalism to anything that can produce revenue, but Linux remains a communistic product by its licensing structure. And that's a good thing for it; it's the only way it will be able to remain a strong and vibrant competitor of Microsoft for the long term.
So, in the end, the analysis of Linux vs. Microsoft is a null argument. Microsoft cannot compete with a product that is not a product, but a movement. Linux is fundamentally restructuring corporate policy towards software development. I just hope Linux's impact will survive the greed that will try to control its nature while the Open-source movement grows up.
And I hope the 'Borg' in Microsoft can change its ways so that it can allow another dominant player into the game without feeling the need to annihilate it.
-- The violin is playing in the background for those who are listening to it too.
I didn't see where they counted.... (Score:2)
'nuff said.
Mindcraft's post to.. (anyone notice this??) (Score:2)
Their own post says the box delivers over 1300 GET requests per second before it falls down.. then in the report, that shrinks somehow to 1000 req/sec??
Kinda makes (me at least) ya go Hmmm...
Anyway... I'm looking forward to results on the same kind of hardware, tweaked by people who know how. Hope they get that re-test project going!
Are these the only OSs available? (Score:2)
NT and unix are too disparate for a balanced comparison, high-end Solaris vs Linux comparisons would offer a clearer perspective in the real world.
2.2.2 has a known TCP deficiency w.r.t. NT (Score:2)
The Linux developers care about this issue, but not so many of them have NT running at home...
only 970 MB RAM = misconfiguration (Score:2)
Look at the OS configurations (Score:3)
For one, NT used 1GB of RAM while Linux used only 960MB. Surely they could have passed the parameter mem=1024M to the kernel.
Additionally they tuned tcpwindowsize under NT to 65536, and adjusted buffers on the network card to 200 (from 32).
They made no TCP/IP stack adjustments OR adjustments to the netcards under Linux.
Just look at the sections explaining the myriad of things they did to "tune" NT. Then look at Linux: enable NFS, the following daemons were run, blah blah. They didn't bother to work on anything.
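For comparison, the closest Linux-side equivalents would have been a few /proc writes - a sketch only, with illustrative values, not a claim about what the right numbers for this test would be:

  # 2.2.x: raise default/max socket buffer sizes (rough analogue of the NT TcpWindowSize tweak)
  echo 65536 > /proc/sys/net/core/rmem_default
  echo 65536 > /proc/sys/net/core/rmem_max
  echo 65536 > /proc/sys/net/core/wmem_default
  echo 65536 > /proc/sys/net/core/wmem_max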
James
Apache Performance Issues (Score:2)
exec of the script environment seems much better. Try running a good hunky perl program on each request and see what you get.
The other thing is that although I am a big fan of linux on powerful machines, its greatest charm is that you can run it on a 486 with 16 megs of ram, and have a relatively well behaved web server that can stay up for 60+ days with no user intervention.
stop arguing credibility - this is not a courtroom (Score:2)
The only thing that's going to convince me is an actual reproduction of the test; second best is technical information, which is what some posters are providing.
Your own contribution ("I've never read a bigger pack of lies") doesn't tell me anything useful and only takes away credibility from the criticism posted by others. You are damaging this forum.
Argue the facts, not the circumstance that the report serves Microsoft's interests.
IIS has the real world proof anyway... (Score:3)
Biased? (Score:2)
Linux Up Close: Time To Switch [zdnet.com]
See "RELATED LINKS": The Best Windows File Server: Links & Linux Is The Web Server's Choice
Say what? (Score:2)
What I found interesting was that they apparently didn't make a separate swap partition for the linux box (they said 1 OS partition and 1 data partition)... hm...
Actually - 1Gb vs 1Gb. (Score:2)
See what I mean? It used 1024Mb.
Also, it seems they crippled the Samba Server (Score:2)
They set "widelinks = no" now I wonder why they did that
My guess would be so that you would have a secure system, which is what 99% of admins not trying to rig benchmark results would arguably prefer.
widelinks = yes gets you hit on the nose with a soggy newspaper, if you're an admin.
No surprise (good job ryan) (Score:2)
I must say I find it amazing how quick a lot of our fellow Slashdotters are to judge. I work on a production team for a major high tech company. A lot of our database servers run NT; anything that is doing security is running some flavor of Unix or Linux.
Just today my mentor (yes, I'm an intern, and 16 at that) was telling me we may have to add 3 BSDI servers to our network. I was floored! We already have about 5 different OSs on our network (that's counting different flavors/versions of Unix and Linux).
When I asked her why, the answer was pretty simple: the program our grand high mucky-mucks had designated for e-commerce was optimized for use with BSDI. Now I have to learn yet another OS.
I enjoy the learning, it's why I'm here, but I have learned one thing: SQL doesn't work well on Linux boxes. NT4 has a few security holes that NT5 doesn't. You can fix those with SP4, but then our back end doesn't work.
I've learned how to set up security on a Unix box in such a way that it will make the NT boxes more secure.
I have also learned that most OSs are really good at a specific task, and that zealots will use that to their best advantage.
Every OS I have used has had some really positive points and some really negative points. Linux is great, but I wouldn't want my grandmother using it. The MacOS is great when you want simplicity or are doing major graphics. Windows, on the other hand, is great if you want to do just day-to-day stuff. I would hate writing a term paper in vi or Emacs. Heck, I would even hate doing it in Pico! But Word works really well, and Notepad can be the "quick and dirty" programmer's best friend, especially with HTML stuff that you need NOW.
Everything has a positive, everything has a negative. Let's not get too angry at those who are willing to actually admit they agree with something we don't. If everyone were like that, this whole world would be like Kosovo.
Linux SMP verses NT (Score:2)
Read This .. Straight from the mouth of MS (Score:2)
Another thing about Mindcraft's testing was
This is contrary to statements made in the May MCP Magazine. I find that they both have their strengths, and as an MCSE I have seen properly configured NT systems perform their server duties well (with careful observation and maintenance). I have also seen Solaris/Linux/AIX all perform as well as or better than NT in the same environments.
Just 2 Cents from yet ANOTHER MCSE & LINUX User (Bet you don't see that every day...)
Response at lwn.net (Score:4)