Linux Software

Quantifying "Bandwidth is the Limiter"

John Lazzaro writes "Found this linked from Dave Winer's site: an analysis that puts numbers behind the oft-quoted claim that 'Linux + Apache will saturate any reasonable Internet link given static pages.' It basically assumes the Mindcraft tests are accurate, and then tells you what they mean. Most interesting are the comments about MS's online tech support, the hardware they use, the results of the benchmarks, and the fact that static content is practically irrelevant."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    ... a sane analysis of these benchmarks. Personally, I don't care much about Linux "losing" this benchmark. As far as I am concerned, Linux is good enough (performance-wise) for my application (serving 5-15 thousand hits of dynamic content a day).

    Another aspect I have not seen discussed is the economic one. Let's do some more math:

    Web server budget: $6,000 CDN

    NT solution:
    BackOffice: $3,500 CDN
    Hardware: $2,500 CDN

    Linux solution:
    OS, database engine, development tools: free
    Hardware: $6,000 CDN

    These are real-world numbers: that's my budget for a Web server I am setting up. With the added budget on hardware, I can afford a RAID controller, a second CPU, twice the memory, a good enclosure (redundant PSU/fans) and a better UPS. Guess which solution my boss chose?

    So if my boss comes to me with this benchmark in hand screaming for an explanation, I won't hide under my desk. She speaks $, so she'll have no problem understanding the issue.
  • by Anonymous Coward
    I work for an NT house, and from experience I can say that NT can be installed in less than 2 hours.

    I have installed NT on a new system, up to the point where Office and Pro/E work, in one and a half hours. No less than 6 CD-ROMs are required. Network install, what's that? What do you mean I don't need specialized drivers for each video card, ethernet card, sound card... but I like rebooting my machine each time I install something new.

    The kickstart feature of Red Hat is a godsend for manufacturers. I would love to be able to boot a floppy and come back in 2 hours to a complete system.

    I guess that's why I convinced the owner to sell Linux.

    /* plug */

    Visit NTSI [ntsi.com] for high-powered Alphas running Linux.
  • by Anonymous Coward
    For those of you wondering about that benchmark, c't translated the article into English (although not the graph labels, but those shouldn't be too hard to figure out ;) and posted it on their website here. [heise.de]
  • Is this a troll or a serious comment? Apache runs one process for each connection. The Linux SMP kernel (or an SMP kernel for any OS) will balance the processes/threads across all CPUs. The only way Apache could have been running on a single CPU is if there were a single incoming connection. If that were the case, NT would be using a single thread, which could only run on a single CPU.

    The reason Linux simply does not scale to four processors is that the TCP stack inside the kernel is single-threaded. Sending TCP packets is balanced across all four CPUs on NT but done on only one CPU in Linux. This has absolutely nothing to do with Red Hat, unless one of the other distributions slipped in that secret "no_nagle" script to make your system 100 times faster.
  • by Anonymous Coward

    Basically, NT performs static content serving more efficiently and the Linux response is "yeah, but Linux is good enough"

    99% of the time, the Linux crowd will complain about Windows apps being "bloated" and using lots of disk space and RAM, even though the average PC comes with more RAM and disk space than even Office 2000 can saturate.

    So you have the Linux crowd arguing Linux is very efficient, but then real-world tests show it isn't more efficient than NT at serving static content, and the comment is "BUT BUT, that will saturate connections anyway!"


    Did you ever think that if NT can handle 300% more requests per second than Linux, it doesn't waste as many server resources per connection handled?

    Face it: Linux doesn't scale as well as NT. Stop trying to downplay it.


    Now, if you want to try it again with dynamic content, go ahead, but my bet is that ISAPI/COM/ASP applications will SMOKE Apache modules/php3/mod_perl.

    What happens when Linux/Apache loses out in a dynamic benchmark?


    Face the music, people. The first step in recovery is admitting you have a problem.

  • by Anonymous Coward
    Well, this is the deal -- testing to destruction. Look at the benchmarks in the first place -- they showed us where Linux failed. So we can now fix those areas.

    Ten years ago, V-rated car tires were just becoming standard and people were talking about Z-ratings as being silly. I recall reading a really, really dumb editorial during the F1 season saying a number of things, but largely how setting up a tire to control an F1 car or a 230mph streamliner down the Mulsanne Straight (when it was still straight) was completely irrelevant to the development of high-efficiency tires to save our precious oil. Well, as time went on, those 190+mph tires led to completely different carcass construction, long-chain carbon, fumed silica, better rubber, better dealing, better tread design, and so on -- these days a typical 113mph tire produces about 60% less drag for the profile, grips about 3x better (or more), lasts in some cases more than 80,000 miles (remember, in 1988 40,000 miles was incredible), and so on.

    I can think of other examples from racing that have had direct applicability to production cars and more prosaic issues, like fuel savings: better aerodynamics, better cooling system design, better windage loss control, better oiling and oil control, better porting, multi-stage intake and exhaust manifolds, tuning for various things (lower noise, more power), shorter flame paths, ceramic coatings, synthetic lubes and fluids, and so on. I am sure that you get the point.

    I would like to figure out how to get Linux working on 8 100Mb cards at once. Sooner than we expect we will have 133MHz 64-bit PCI buses, and we should be ready to use them. And figuring out how to do huge things always lets us do the small things more reliably, and usually lets us figure out the minimum amount of material (steel, rubber, magnesium, C code, cycles) we need to do it, making it a lot smaller.
  • but price the licensing for those products. Unless there are maybe 5 users, he's not installing all that. My company did a Linux->NT migration as I walked out the door, and Exchange alone ran us a few thousand dollars for 50 users. (Sorry, I don't pay attention to prices; I tend to sign off on anything remotely reasonable.)

  • First, to all of those who are saying "You've lost to MS, blah blah blah, give up," and that we "wouldn't be scrutinizing the results of the tests as much" if we had come out on top: I totally agree that people are way more motivated by loss than they are by success. Just look at MS; everyone must admit that they aren't performing any miracles. They feel safe where they are, and innovation is a risk. Same thing with car companies, telephone companies, and electric companies.

    Now, as far as the results of the article itself go, the author admits that it's advocacy. It does bring up a good point. Being able to crank out a bagillion pages a minute is wonderful, but there are other things to look at, such as the application of the machine, the availability requirements, cost, and available hardware.
  • by J4 ( 449 )
    The whole affair is an attempt by Micros~1 to get developers to change their focus to something irrelevant. Unfortunately for them there is no one person in charge of development to delegate who works on what. Sure, Linus calls the shots on what makes it into the kernel, but he doesn't tell people what to work on.
  • Posted by My_Favorite_Anonymous_Coward:

    This is OT but... of course you are right.

    IE5 renders my [homepage.com] homepage's JavaScript at least 3 times faster than N4.6. I won't use IE, though. However, what you are saying has nothing to do with the discussion. They are talking about dynamically generating the web page, which should be more important once JavaScript and XML are standardized.


    CY

  • Posted by My_Favorite_Anonymous_Coward:

    Hardware manufacturers have come to the conclusion that they don't like it when competing companies (read: Microsoft) have them by the gonads. So, because Microsoft can't play nice, the entire computer manufacturing community has decided to rid themselves of the Windows liability.

    And of course, the software community has jumped on the bandwagon because Microsoft poses a direct threat to their markets.

    Linux will win through strategic importance, NOT through technical superiority.


    Hear, hear! I totally agree with this point. As a sidenote, as long as this generation of computer titans hasn't retired, there will always be some company that goes out of its way to stick it to Bill Gates just because they hate each other personally. What's-his-name at Sun, for example.


    • "Why are you doing it? How much better can you eat? What can you buy that you can't already afford?"

    • "The future, Mr. Gitts, the future."


    Nah! I doubt any of these billionaires will go political. They will be more fascinated by going at each other. And Linux is a very good candidate to be chosen as the weapon. That's one up for Linux, besides the quality.


    CY


  • While you are right, you are STILL talking from the perspective of a technical guy. To many people in management the numbers simply won't mean a thing. All they care about is the conclusion, the simpler the better.

    So the REAL point would be:

    a) yes, NT beat Linux.
    b) in a very specific setup
    c) with Linux coming in second, but with benchmark values much larger than our company will ever need.
    d) as soon as you figure costs into it (not licensing, but TCO), Linux wins.

    I'm positive that anything even marginally more complicated and/or technical than this will fall on deaf ears anywhere outside the technical departments.


    One more unrelated remark: people CAN do both, code and advocate. My personal contributions are small in both categories, but it makes a lot of sense to me to talk about what you're doing, instead of going with one or the other exclusively.

  • by Tom ( 822 ) on Monday June 28, 1999 @05:51AM (#1828444) Homepage Journal
    I hate to say it, but I think we lost this round; M$ wins.
    No, I'm not talking about the benchmarking. I'm talking about the non-acceptance of it. Remember that M$ is a marketing company, and little else. No matter whether it's right or wrong, non-technical people simply STOP listening to technical details after a while.
    That point has been reached. When the Linux community screamed out about the first test, people listened, even in the mainstream. But now, for all THEY care, a new, fair test has been conducted, and any continued discussion on our part will appear as whining -- no matter if it's legitimate.
    Just look at all the anonymous coward postings on /. for examples.

    Fact is, M$ won this, because they can a) tell everyone how NT is superior to Linux (ignoring the fact that this has only been proven for one specific setup under specific conditions) and b) point a finger at us for "whining" (ignoring the fact that we may have justified criticism).

    Maybe they were aiming at this goal all along, maybe they're just picking up the chance. But one way or the other, from the MARKETING pov (and we all know that that's all M$ cares about), this has been a huge victory for them.

    So I suggest everyone stop whining and go back to coding. The time for criticism is past, like it or not. Anything else you say had better be damn constructive (in the sense of "we should fix this and it'll speed things up") or M$ will surely find a way to use it to THEIR advantage.
  • OK, you spend your time modifying an existing robust, stable, fast, proven web serving system (dynamic content gateways, server and authentication daemons, remote administration facilities) to lose flexibility, throw away capabilities, and lower all-around usefulness so it can attain the utterly meaningless goal of saturating your network with content that doesn't ever change. I'll spend my time doing something worthwhile.
  • What sort of stats do you have there? Peak hits/sec, average page size, uplink size?

  • The site is distributed among numerous colocations, so there isn't "one" uplink.

    OK, how about stats for a representative colo?

  • by hawk ( 1151 )
    And understanding this, you are now allowed to use *the* programming language, Fortran. Swap a "D" for the "E", and you specify double precision while you're at it.
  • It's about time someone chimed in with this pivotal information.

    If you ask me, it's criminally misleading for ZD to quote "hits per second" and not quote bandwidth too. People have lost all sense of just how gigantic 1000 hits/second is. 20 or so hits/second at this page size will saturate a T1!
  • Any Apache site expecting to serve tons of static pages should run Squid in accelerator mode in front of Apache. Apache's cache (IF USED???) doesn't even come close to Squid's, and the Apache team will readily admit it.
    I can't believe the Red Hat guys didn't embrace this strategy, since it's well known that a key factor in IIS performance is the fact that it caches its own static pages IN RAM when possible! This is a no-brainer. Note the big increase in performance between 256M and 1G.
    Personally, I believe that even given Linux's shortcomings in its TCP stack, we should smoke IIS with a properly configured server.
    As for ACTIVE content... done properly, mod_perl or PHP should kill ASP.
  • Damn! You're right! I remember reading about this in a deadly flame thread on comp.protocols.tcp-ip. The guy who instigated it did us all a huge favor, as it made everyone think in a state-machine kind of way, which should be elemental to anyone who knows computers.
    Sometimes we abstract so much, we forget these things are stupid and have to be slapped into civility from the metal on up!

    Thanks for a great observation!

  • I don't argue with anything you've said. I forgot to mention the thread-per-interface thing and feel a little silly, since I realize that was our bottleneck. What I meant was that IIS does a thing similar to Squid, where it allocates a bunch of RAM (presumably user-configurable, and rule-based) for caching frequently requested pages within its own process space. Apache doesn't do this by default, which, IMHO, is a good idea.
    As to Squid and dynamic content, it's insanely configurable in that arena.
    I also implied that the Apache implementation in question would incorporate mod_perl and PHP modules as a piece of the instance, so as to avoid the forking you spoke of.
    An additional thing one can do with Apache so as to avoid needless spawning of new processes is to extend the lifetime of the daemon. RTFM on apache.conf.
    BTW mod_rewrite rocks!

  • For all the people who want to read the C't article but can't find it: they've placed it online and even translated it! Sure way to get a lot of hits...
    The link is:
    http://www.ct.heise.de/ct/english/99/13/186-1/
  • by BadlandZ ( 1725 ) on Monday June 28, 1999 @06:20AM (#1828454) Journal
    Hmm... an Internet server is different from an intranet server. And, in such a case, I think it's also important to go back to some Samba studies.

    Basically, I think he's right: any reasonable Linux web server can saturate most available bandwidth with _static_ pages.

    But I don't think it's time to settle for that. I think we should go back to look at multithreading and why 4 CPUs gave NT more of a boost, and what that means for what needs to be done in Linux. And I think this study also shows that the "Internet" isn't the problem, so let's look at some faster stuff in the networking world, like what Samba needs on an "intranet."

  • Why shouldn't we? MS is going to be beating us over the head with these numbers for years to come.
    I still can't find the specs on the tests. Are they a secret?

    check this out! [microsoft.com]

    I still want to know why MS was able to use the four-partition trick when we couldn't use any new code.
  • Alright. Fine. Linux is not good at striping across multiple NICs (or at least across 4). IMHO that's not a normal thing to do anyhow...

    There are really two parts to heavy-duty serving: files and web.

    a) Web content
    You don't need 4 100Mb NICs to serve real web content (no, corporations using "intranets" to be buzzword compliant don't count. When I see a serious intranet implementation I may believe it. Till then it's just buzzword compliance ;).

    No one serves more than cdrom.com, and if they're the only ones straining 100Mb (on 100% static content, at that), 100Mb is good enough fer anyone. Next issue: dynamic content. I want a fair benchmark of IIS and (mod_perl) Apache on Perl and ASP (using mod_asp).

    How good is mod_asp anyhow? Does IIS do Perl? Does anyone have serious site implementations (on the order of /. or Excite or similar mad dynamic pagegen) that could be forced to work on ASP or Perl under both IIS & Apache? It would be fascinating.

    b) File serving
    I'm still not convinced that in the real world you need more than a 100Mb card -- if your fileserver can sustain 7 megs/sec (realistic on ethernet; peak is 10 megs/sec) you're doing pretty well with that RAID array, since no HD I've seen can do that alone, especially not when doing simultaneous serves (and therefore not reading contiguous blocks). I'm bothered by the SMB test setup in that sense -- if it was tiny reads out of the cache, big deal. I wanna know who can move real volumes of data, and fast.

    Of course, I know the answer. Sun, and after them SGI's Origin2k boxes (the fastest Windows networking fileservers in the world ;)....
  • by Brad ( 3629 )
    MS won this round. No complaints or equivocations. We learned our lesson after the first round of tests and started to improve. The absolute best thing we can do is only qualify, and as a last resort ignore, the soon-to-come MS attacks. This analysis is a good start to qualifying them, but we can't be content. Linux is not invincible, nor will it ever be. Take this in stride and don't whine. Venting your frustration here is as good as walking into the hallowed halls in Redmond and saying "I'm upset, grind me under your heel please. Take my pride in my OS, it isn't as good as I thought it was!" Domination is just around the corner, but only if you spend your time working towards it rather than saying "It's (still) not fair".
  • by Ranma ( 3995 )

    If the ZD numbers are to be believed about NT's performance, and I see no reason to disbelieve them, the NT server that ZD tested should be able to serve 359.9 million hits per day. According to http://www.microsoft.com/backstage/bkst_cs_supportonline.htm Microsoft Support Online gets approximately 2.3 million page views a day. Even supposing that each of those page views generates 100 hits each, that would mean that Microsoft Support Online only gets 230 million hits per day, far under what the tested NT server can do. Theoretically, just one NT server like the one that ZD tested should be able to handle this load. Microsoft uses 6.

    What's that, I hear someone scream? But Microsoft Support Online involves dynamic content? Well, the ZD test was only about static content. I'm so glad to know that it was relevant to the real world. Aren't you?



    Haha, I love it!


    - Riko
  • If you want brainless system installs, look into Ghost by Symantec. It is a DOS program that does disk duplication. It is completely configurable from the command line. With a bootable CD containing Ghost and the (compressed) disk image, an entire machine can be installed in less than 10 minutes -- assuming it has a decently fast CD-ROM, of course.

    So it's like setting "gunzip | dd" as the init program of a bootable Linux CD, only $40, right?
  • And assuming you aren't in an NT domain environment where security has any priority. SID-changing programs may be sufficient for workstations, but maybe not. There is no way you can Ghost domain controllers, and I personally would worry about any Ghosted NT machine. All hail the registry.
  • Has anyone performed Linux vs. NT benchmarks with dynamic content? Many have said that it is much closer to real world. Now how about some numbers?
  • where can I find it?
  • My $0.04 worth...

    1) I use Linux, FreeBSD, Solaris, MacOS, Netware, WinX, and occasionally OS/2. Frankly, I care close to nil that NT "came out on top" in the NT vs. Linux test because most of the platforms I use are based on U*ix. Whether a test shows one platform over another to be "faster" has no bearing on my decision to use a certain OS-- I am still using Linux, FreeBSD, Solaris, MacOS, Netware, WinX, and occasionally OS/2. Nor am I bitter that under one very particular and controlled environment Linux "lost." The "real world" is not controlled. ;-)

    2) Linux performance is not at the bottom of the U*ix pack. I believe (for me, that is) that Solaris has been relatively good in terms of performance, yet Linux still manages to best it. I place *BSD at the top. For stability, however, Solaris really takes the cake. But who really cares?! Anything is better than trying to run a Q3TEST server under WinX! =)
  • So far what I'm hearing is Linux is much faster if either
    1) you serve dynamic content
    2) you serve from a large set of pages & frequently miss the cache,
    and even when serving static pages, your internet pipe is the bottleneck, so Linux won't slow it down.

    But what about the file serving results? Seems like Linux would still lag here. You have, say, 100bT, it's not dynamic content, and you can pretty easily generate a huge amount of traffic on just a few files.

    Anyone have any insight on this?
  • Oh, please. I *have* installed NT (what a mess), and most of the people in my office use it.

    You can tell the NT developers because every month at least one of them is down -- "I'm reinstalling NT". Not "NT locked up", but *reinstalling*, because it futzed up so bad there's nothing to be salvaged.

    I used NT as long as I could stand it, which wasn't long. Yeah, it's "stable", if "stable" means you don't use it, you don't change any configuration, and you don't install any software!

    Meanwhile I have five linux boxes running & the closest I've ever come to reinstalling is copying files when a disk crashes.
  • (Yes, but you can't install NT via FTP...)
  • Why the "pseudo-gui"? Because Drive Magic (by the partition magic folks) uses FreeDOS and a pseudo-X on top of that for its pseudo-gui.

  • SMS. Hmm... bootp? tftp? (can't have SMS w/o SQL Server...)

    MS SQL: Postgres? What about the freebie licenses for Sybase ASE?

    Exchange: copy what they've done at ZMail, using PHP & MySQL. Oh, and what about all the non-Internet standard stuff that MS-Mail supports? Your features here are probably most internet mail users' bad features there...

    SNA Server: Hmm... I would just use X3270... software to actually talk SNA? tn3270 to connect?

    Plus, except for SNA Server, for a working Back Office setup, count on one machine for each functionality area (SMS (with its own SQL Server), IIS, SQL Server, Exchange). Which sort of means not buying the Back Office license (because you'd need one BO license for each machine...DOH! might as well just buy the individual licenses for the products), NT client licenses (I'm glad Linux doesn't have THAT feature!), etc. [for the BO license doubters, the BO license isn't any different from the Office license, in that you can't install the parts of the package on different machines...]

  • I notice that the person who wrote the web page [alfred.edu] did not know the average file size in ZDNet's WebBench, forcing him to fudge it.

    The average file in ZDNet's WebBench (you can download it, if you wish) is approximately 10 KB: 10,342.3 bytes, to be exact. Use this number when reading the web page [alfred.edu].

    In other words, a single-processor 256 MB box can saturate a T3 with plenty of room to spare (107 Mbps). The four-processor Linux box will almost saturate an OC-3 (150 Mbps). (The arithmetic is sketched below.)

    - Sam
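    The arithmetic behind those figures, as a minimal sketch (the per-box hit rates below are approximations read off the published results, not exact benchmark output):

        # Convert a WebBench hit rate into link bandwidth, using the
        # ~10,342-byte average file size of the WebBench file set.
        AVG_FILE_BYTES = 10342

        def megabits_per_second(hits_per_sec, avg_bytes=AVG_FILE_BYTES):
            return hits_per_sec * avg_bytes * 8 / 1e6

        # Approximate hit rates (assumed, roughly what the published graphs show):
        print(megabits_per_second(1300))  # single-CPU Linux box: ~107 Mbps, well past a 45 Mbps T3
        print(megabits_per_second(1800))  # four-CPU Linux box: ~149 Mbps, close to a 155 Mbps OC-3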

  • I think, the really important point here is that Linux development should not get distracted by marketing hype.

    When M$ comes up with some pointless test set, should we really care about it? Sure, the community can make a point out of showing M$ (and the world) that we can beat M$ any time; M$ comes up with a challenge and we tweak the software until we beat them. But is it worth it?

    Linux is lean and mean, because its development is largely driven by technical, not marketing factors. Could this be the beginning of an attempt to drag the Linux community into the same marketing-driven development that leads to the bloated software sold by M$?

    Chilli

  • The objections were that it seems a waste of CPU and memory to use technologies intended for generating dynamic content when all you are serving is static content.

    --Andy
  • Whenever I see a benchmark I am amazed how many people just don't realize that speed alone is not the issue.

    Give me some cash and I will upgrade your hardware for you and you'll have better performance. But no, I cannot increase stability (perhaps a little by picking the right hardware, but there's plenty of stuff in any OS's kernel that relates to stability and has nothing to do with drivers, etc.).

    As long as Linux is stable, it doesn't matter if it is a little slower.

  • That statement assumes that Apache will never improve. That's certainly not true. Apache will be able to scale with bigger loads, given some patches and a faster, well-supported configuration (such as that 1.5 GHz Pentium V processor), etc.

    If you throw faster hardware behind Apache, you'll get better performance than NT on the old machine.

    Apache isn't locked into a speed, like DOS's 8.3-character filenames, DOS's 640K of RAM, or Apple ProDOS's 32 MB disks...

    IIS vs. Apache will make each other stronger and better performing (honest!)
  • That's wrong.

    Almost all large sites use dynamic content. Dynamic content is much easier to set up and change, and it can adapt to user preferences in a flash.

    Static content is mainly used by smaller organisations or people who cannot afford a server/service that allows CGI, Perl, or ASP processing.

    Most people don't realize how powerful dynamic content is, and how little performance drag there is with properly set up scripts.

    The majority of the time, static content on the web is actually a cache of material produced dynamically before being served out.

    These benchmarks seem pretty stupid. What's next, Ford comparing cars to Isuzu because the Ford Ranger has a maximum speed of about 124 miles per hour versus the Isuzu Trooper, whose speed maxes out at 105? Really, how many of us drive at speeds over 100 miles per hour? (Besides you people in the midwest.) Not many.
  • Why does Linux need to win through technical superiority? Microsoft didn't.

    Because the Linux community doesn't have the financial clout or the stomach to conduct the same sort of marketing campaign that Micros~1 has.

    --
    A host is a host from coast to coast...

  • I really don't see why this article was written.

    Why not just accept the results (as long as the Linux community agrees the test was fair -- and this is acknowledged in the article) and go back to improving the kernel, instead of now saying that the figures aren't applicable in real-world situations? No one said anything about this after the first tests; now suddenly the test means nothing because no one likes the final outcome.
    Going into a long explanation to water down the importance/application of the tests to somehow save face looks really bad. Whether or not the results can be seen in the wild doesn't matter, as NT still proved better.

    It's (sort of) analogous to defending driving your Citroen 2CV as opposed to a Merc because you can only drive at 120 km/h on the highway :) (let's not extend this metaphor)

    It's generally accepted that the Linux kernel's TCP/IP implementation ain't the best there is.

    Just Fix It. (Or use FreeBSD *hide* ;)
  • I agree 100% :)
  • $40 per machine; you cannot use the same license on more than one box. And you also need GhostWalker (to change the NT SID), one per machine of course. I believe that we bought a site license for Ghost, GhostWalker and GhostMulticast for about $10,000. GhostMulticast is very cool: you boot all the machines you need, and one machine broadcasts the image file to all of them at once. Makes setting up 50+ machines very easy.

    (Offtopic: Why the fsck did Symantec have to make Ghost an ugly pseudo-gui? Argh)
  • Oh, and I forgot: Ghost should work with any OS. If it understands the filesystem (FAT, NTFS, maybe HPFS) then it will use a high-level (file) copy; if it doesn't understand the filesystem (ext2, ffs) then it will copy the disk/partition block by block.
  • I'm curious... what are your objections to PHP and ASP?
  • That page on Microsoft's site was quite interesting. They were saying things like "we are 650% better on this than Linux/Apache" when their graph says N/A for those figures on Linux/Apache.

    interesting...

    ChiaLea
  • Something that seems to be completely forgotten about is that the test machine Mindcraft used had, if I remember correctly, 4 NICs in it. What about the possibility that the whole reason for the NT numbers being bigger is that NT simply supports multiple network cards that much better, with Linux support being rather basic (akin to the very basic SMP support in 2.0.x kernels)?

    They tried the tests again on a single processor machine with 256mb RAM. Did they try them on a machine with just one NIC to check a possibility like this? Surely nobody can suggest a single processor machine with 256mb and 4 NICs is a configuration that any sensible person would consider.

    And if these tests are based purely on static content, then so what? I don't need a quad-Xeon to saturate a 10mb ethernet (which is a waaaay faster link than the vast majority of sites out there have) - my P133 manages it quite happily, with around 80% CPU free.
  • Hmmm. Was talking to a friend at work (I work with Linux, she works with NT in a different department) and she spoke with horror at finding an NT machine that nobody had restarted in eight months. I actually thought this was a pretty good showing for NT, until she added that after rebooting it didn't come back up again :-)
  • I believe that was 9x, not NT, although I'm ready to be corrected. Never seen a Windows machine run that long, although we did have a 98 machine at work make it to 35 days (it was doing nothing more involved than putting our label printer on the network...)
  • Unlimited bandwidth is physically impossible.
    You can talk about BIG, BIGGER or HUGE, but not infinite or unlimited.
  • Here's one for you:

    A client had a Netware server crash today (new motherboard, new problems). One particular application that is heavily used in the office lives on that server. When the server crashed, so of course did that app, on everyone's boxes in the office.

    Here's the problem: it didn't quite crash, it hung. And NT was unable to kill it. Task Manager tried and failed. The only way to kill it was to reboot each box.

  • Everyone is rightfully screaming about the non-realistic nature of these benchmarks. I am sure my idea is by no means new, but right now I see no big disadvantage to using it as a true benchmark: given that all sites *log* all requests, why not take these big logs from some highly used sites and then *replay* the logs on the actual servers? This would simulate what actually happened in real life over, say, a month or two. (A rough sketch of such a replay is below.)

    The only trouble I see so far is that the benchmark site should *not* have fully used its bandwidth during the period, because in that case we won't count those users who failed to fetch a web page. Although given simple math we can see that GNU/Linux+Apache can saturate any line with static pages, we could get some real-world results using this approach. Comments are welcome.

    AtW,
    http://www.investigatio.com [investigatio.com]
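    For what it's worth, a minimal sketch of such a replay harness (the host name is hypothetical, the log is assumed to be in Common Log Format, and a real tool would also reproduce the original timing and concurrency):

        # Replay the GET requests recorded in an access log against a test server.
        import http.client

        def replay(logfile, host, port=80):
            for line in open(logfile):
                try:
                    # Common Log Format: ... "GET /path HTTP/1.0" status bytes
                    method, path, _ = line.split('"')[1].split()
                except (IndexError, ValueError):
                    continue                      # skip malformed lines
                if method != "GET":
                    continue
                conn = http.client.HTTPConnection(host, port)
                conn.request("GET", path)
                conn.getresponse().read()
                conn.close()

        replay("access_log", "testserver.example.com")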
  • mod_asp? Where can I find this? I'd like to learn ASP just for the heck of it, but I'll be damned if I'm going to run an NT/9x machine to do it. I looked in the Apache module registry, but the only two I found were (1) commercial or (2) for Windows apache only and at version 0.1.

    Please email me if anyone knows where I might find such a beastie.
  • ``NT Boxes don't crash often at all unless you have shitty hardware.''

    Heh heh heh. I'll call the NT admin where I used to work and tell him that the HP Netservers that they bought to run NT were ``shitty hardware''. He couldn't even get NT installed without it bluescreening.

  • So, the difference between NT and Linux only shows up given unlimited bandwidth. That doesn't surprise me at all. Think about it. MS has the time and manpower to make their server use more bandwidth than anyone could ever use. Linux developers, on the other hand, do not much care how well the server performs beyond the capacity of their lines, because they don't need to! There is no itch to be scratched in terms of development, because the system works fine for what they ask it to do.

    In fact, it wouldn't surprise me at all if MS worked *very* hard to produce benchmark results like this, even though they don't mean anything. Why? Because at MS, appearances are more important than fact. It's all marketing.

    Don't worry about Apache not being able to handle extreme (perhaps impossible) amounts of bandwidth. If the day ever comes when it's important, it'll happen! Somebody will find that Apache just doesn't cut it anymore and will fix it! Or maybe they'll fix the kernel if they need to! That's the beauty of Linux and open source.

    So, would you rather spend thousands for an operating system that can use more bandwidth than you could ever pay for, or spend *nothing* for an operating system that will handle just about anything you can throw at it? I know where *I* want to go today, and it ain't with MS!

    There. Now I feel better. :-P

    Ben
  • WHERE DID THEY GET THOSE NUMBERS?

    I ran ptester against my K6-233, 64MB RAM, running Apache, with a single 10Mb card. Serving a 2K page I got over 25,000 hits/sec. I can get it to pump out almost 18,000 hits/sec running a php3 page (with X and Netscape running remotely) that is 17K of output.
  • He means 2 hours because he is waiting for the whole OS to come down his internet link. In my experience you can get Red Hat installed in under ten minutes from flipping the power switch -- that is, with selecting packages and configuring X, on a K6-233/64MB/24X CD-ROM. Two hours might be a good time to have a completely configured and customized system.
  • I just reported what the software told me. Talk to the makers of the software about how it is possible.

  • I'm right with you that spending the money on hardware is the right thing to do. However, throwing around "BackOffice" as comparable to the free stuff that comes with Linux is bordering on the FUD Zone.

    BackOffice has:

    * System Management Server (I know, who wants it, but is there anything equivalent for Linux even if you did?)

    * MS SQL - It's not Oracle, but does MySQL come even close?

    * Exchange Server - Again, the MTA in Exchange ain't sendmail, but as far as the mail/groupware/calendaring feature set goes, a fair comparison would only be Netscape's commercial products.

    * SNA Server for talking to older mainframes. (Some people need it.)

    * Some other stuff I probably forgot.

    Now it could be you don't want this stuff. But if you did, you'd have to buy it, even on Linux. (And, it would probably be more expensive.)
    --

  • If your big iron actually talks TCP/IP, I don't think SNA Server is needed (although modern SNA Server versions might do other things, like screen scraping to a web page).

    Older IBM stuff didn't (or didn't always) use normal LAN protocols, so SNA Server could be used as a gateway between your LAN and the SNA Network. (Someone else could probably explain this better, and use all the correct IBM model numbers!)

    As for PHP+MySQL doing what Exchange does -- you're right, it *could*. However, most places, when given the choice between buying a calendaring package or writing their own, would probably buy one. Your argument is like saying you don't need an RDBMS because you've got GCC.


    --

  • Don't get me wrong - I'm not saying the BO stuff is the best or even necessary in many cases. Just that the usual free stuff isn't really in the same category as BackOffice or the commercial products announced or shipping on Linux, such as Oracle, Sybase, Tivoli, Unicenter, Netscape, or Domino.
    --
  • thttpd [acme.com], a small web server for Linux (~7000 lines of source), should easily outperform IIS on the Mindcraft "benchmark" due to its non-forking behaviour. See the comparison chart on this page [acme.com].
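    For flavour, here is a minimal sketch of the non-forking model (one process, one select() loop, every request answered from memory) -- an illustration of the idea, not thttpd's actual code:

        # Single-process static responder: no fork(), no per-request process cost.
        import select, socket

        BODY = b"<html><body>hello</body></html>"
        RESPONSE = (b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n"
                    b"Content-Length: %d\r\n\r\n" % len(BODY)) + BODY

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", 8080))
        srv.listen(128)

        clients = []
        while True:
            readable, _, _ = select.select([srv] + clients, [], [])
            for s in readable:
                if s is srv:
                    conn, _ = srv.accept()
                    clients.append(conn)
                else:
                    s.recv(4096)      # read (and ignore) the request
                    s.sendall(RESPONSE)
                    s.close()
                    clients.remove(s)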
  • I was wondering... I seem to recall that the Mindcraft tests utilized the processor affinity settings that one can make in the registry under NT. If that is the case, the test results make a lot of sense. It would be a simple matter of assigning processor affinity to services in such a way that the more critical (i.e. performance hog) services would have dedicated CPUs and wouldn't step on each other.

    I don't recall any utilities being available under Linux to assign processor affinity to a PID etc. while utilizing SMP support. Does such a beast exist? If so, great! If not, that may be something nifty to have as a feature...

  • Isn't that supposed to be impossible? I thought all windows machines were supposed to die after 47 days.
  • I'm not certain I'm reading that right, but does it mean that Microsoft compared NT using DLLs (essentially extending the web server itself to produce the output) against Linux using CGI scripts? Maybe it's just me, but it's kinda obvious that NT is gonna win in that test. CGIs require a fork()/exec(), which is a lot slower than just pumping out the output. NT ISAPI vs. mod_perl or mod_php would have been just a bit more accurate.
  • Thanks for posting this. It's awesome to get some real life data to throw into this. I wish that more people would do this. Thank you.
  • If someone tries to convince, say, your boss that you should use NT instead of Linux for your website (say you're hosted by a partial T3 and only serve static web pages) and cites these results, will you say:
    A) "You're right. These benchmarks prove that NT is superior to Linux and we'd be foolish to go with Linux now, though if we work real hard coding at Linux et al. linux will be able to beat NT at benchmarks so we can switch then."

    or
    B) "Actually, if you look at the tests rather than the soundbyte about them, you will see that they prove that a Linux/Apache system will be waiting on our internet connection anyhow, and Linux is a much better value protosition than NT is, remote administration to a box with no mouse or monitor is extremely easy, remote displaying comes standard (no need to buy citrix), you get greater stability, and source code that uses open standards so that if life takes one of its many unpredictable turns and Linux is no longer the best solution, we won't be trapped into expensive solutions that we don't want to be in due to vendor lock-in."

    So do we give up, or work with what we've got? I suggest that everyone who's whining about "let's stop whining and get back to coding" at least say what you mean, "Let's just give up and try to beat microsoft on their own game instead of doing what we want." The people who do the coding are still doing the coding. This is about advocacy and marketing. I suggest that the people who do Linux Advocacy/marketing don't stop simply because everything didn't go perfectly. These numbers are only a defeat by microsoft if we let them be.
    Remember, there is more than one part to the Linux community, just as there is more than one part to the body. Just as I wouldn't suggest that those who do documentation stop doing documentation and start coding, I also wouldn't suggest that those who do marketing stop and start coding.
    It takes all sorts to run this world, and we shouldn't start neglecting any part of it, marketing included. Microsoft is going to try to put as much spin on these tests as they can. If we stand by and do nothing, we'll be as guilty as they are of the spinning. Sins of omission are still sins.

    I don't suggest that we do it defensively, though. I suggest that we do it confidently and aggressively. Act like we're in control. That's the neat thing about self-confidence: if you act like you know what you're talking about, people tend to believe you. So why should we give up acting like we know what we're talking about?

    We've got some test results which prove that Linux can handle the needs of 99% of the world. Why exactly should we hush that up in favor of letting it be thought that Linux is slow?
  • by DLG ( 14172 ) on Monday June 28, 1999 @06:09AM (#1828505)
    I have been running a Linux web server in various incarnations and on various machines since 1993. I started on a 486 with some decent SCSI (VL-bus) and 8 megs of RAM, with a 28.8 Hayes modem matching the ISP's.

    I generally ran 3.6 kilobytes per second of bandwidth at the time, which wasn't bad for a 100-dollar-a-month dedicated line. :)

    Nowadays we use an Alpha (for the past 2 years) with 128 megs of RAM and a T1. One day (and I really mean one 5-minute period) our line was saturated by what seemed to be a webcrawler from Taiwan, which literally downloaded the entire site. It filled our pipe. That is the only time such a thing happened, and my Apache server didn't even blink. I wouldn't have known, but I tend to analyze my peak times.

    After we decided to upgrade the software on the Alpha (Red Hat 4.2 being a bit old), I moved the entire site over to a K6-2 running at 300 with 156 megs of RAM or something... That hardware cost about 700 all together. (The Alpha cost 5000 two years ago.) It performs fine. I have never seen the Apache web server crash or fail to respond because of overuse, although sometimes dynamic content will slow down (and that is because we don't really optimize for speed). In any case the performance is great for the price. I have bought one copy of Red Hat 5.0 and have 3 servers all running great. (Finally retired the 486 because it was hard to find VL-bus hardware.)


    The basic fact is that the web server has NEVER been a crisis point or even a decision point. When we went to shttp we went to Stronghold, as it was the only easy solution at the time. It has never caused us problems.

    Further, we serve RealAudio, handle telnet sessions, handle email, handle pretty much any protocol a client wants: MySQL, PHP, Perl, C compilers...

    There is literally nothing we have needed to do that our Linux boxes haven't handled. I am so comfortable with them that I have placed Linux boxes as controllers for permanent automated exhibits, and have so far had only one hardware crash even.

    The notion that NT would be easier for me to maintain, easier for my clients to interact with, or in any way a more efficient use of money is absurd. I don't even know what they charge for NT, but I believe I would rather purchase an extra 256 megs of RAM per machine, or a faster processor, than pay to be a beta tester.

    I know I am preaching to the converted here, but I started laughing the first time I saw the comparisons. While I WOULD like Linux to catch up on SMP and such, and certainly would like to see it scale better, I happen to like being able to buy a 500-dollar computer when I need more processing, and have it up and running in 2 hours (and that only because I generally do a Red Hat FTP installation).

    I betcha you can't do that with a win NT install.

    While I do look forward to an OC-3, I imagine the price of leasing it will be expensive enough that I can afford any number of widdle iddy biddy Linux machines to serve up pages. The days of big iron may not be over, but I don't need a mainframe to do something as brain-dead as serving static pages.


    D
  • It means low performance when everyone else can saturate even higher bandwidth connections.
  • I certainly hope you are right. If so, well... that's what Open Source is all about.
    --- Michael Chermside
  • > I hope they redo the test next year and see how much of an improvement team Linux can achieve....

    Hey... why hope? Mindcraft (as they took such pains to point out) is a neutral party. That means that we can hire them to re-run the test. In fact, perhaps we should announce that we intend to re-hire them, at some specified date (any suggestions?) to perform a re-test... same rules as the second test, which means Microsoft gets to tune NT however they like, but we do too. This challenge is just the kind of thing that the media might pick up on, and the results of such a contest are certain to be picked up by the media.

    The only hard part is that we'd actually have to beat them the second time round. And that's only going to happen if all the hype about the advantages of Open Source is true. It is true, isn't it?

    If anyone likes this idea, let me know.

    -- Michael Chermside, mcherm@destinysoftware.com

  • It isn't directly relevant, but then neither is the original test. In the basic test with a small number of pages being hit, the server can get everything out of a RAM cache and you get a cache hit rate approaching 100%. The random/10E6 test is designed to push the opposite extreme, forcing most hits to miss the cache and show how well the server performs when it mostly has to hit the disk. Neither extreme is in itself likely in the real world, but together they give you an idea of how performance varies so you can evaluate where your site falls along the curve.

  • Having multiple NICs all working at the same time, fast and seamlessly, is actually quite necessary. Look at a router/firewall box. Also look at the situation of multiple local networks served by one application/file server. The latter is the one where we need SMP to work seamlessly with multiple NICs.

  • Personally, I think Linux scales much better than NT. Look at the $$$: one super NT system with a special $100,000 quad-processor system, OS, and IIS software, versus ten $3,000 garden-variety Linux boxes with 2 more Linux boxes to distribute the load between the ten web servers. I've now got three times the load-carrying capacity, fault tolerance, and one third the price.

    Before you say that fault tolerance isn't important: downtime costs big $$$. Take a simple office environment. If you have 2,000 $25,000-a-year people working off one file server and it goes down for 30 minutes during the day, and half of them were using the system at the time, you have 1,000 people unproductive for at least thirty minutes. That's $6,250 in simple lost productivity, for wages alone, for that one incident (the arithmetic is sketched below). Add in lost updates to files that hadn't been saved before the server crashed, office overhead costs, etc., and you will quickly find that the real costs are over twice as much. Unfortunately they are very hard to quantify, so usually they don't get looked at, and if they do, the numbers are very imprecise. Unfortunately the losses are still there even if they aren't totalled up. And the numbers above are for one incident only: if the system goes down once a month, that's a minimum of $75,000, and easily as high as $150,000, a year.

    In a $100,000-a-day e-commerce site, if you lose the system for thirty minutes, that's $2,083, assuming an evenly distributed load over the day. Unfortunately systems go down at peak times, so you'll be looking at a much higher loss -- easily as high as 5 times as much. Losing 10% of your business in one day hurts. Sure, some of the people will come back at a later time, but some will go elsewhere. You may even lose customers forever if it happens too often, or they may end up finding a better supplier.
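    A rough sketch of that lost-wages arithmetic (assuming 2,000 paid hours per year, which is where the $12.50/hour and the $6,250 figure come from):

        # Lost-productivity cost of an outage, per the example above.
        def outage_cost(people_affected, salary_per_year, outage_hours,
                        work_hours_per_year=2000):
            hourly = salary_per_year / work_hours_per_year   # $25,000 / 2,000 h = $12.50/h
            return people_affected * hourly * outage_hours

        per_incident = outage_cost(1000, 25000, 0.5)   # -> 6250.0
        per_year = per_incident * 12                   # one crash a month -> 75000.0
        print(per_incident, per_year)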

  • I hate to complain, really I do.

    But if you just put the link inside an anchor, we could click directly on it rather than having to cut and paste it. Having said that, here it is, done for you as an example:

    See http://www.ct.heise.de/ct/english/99/13/186-1/ for details, or click the link to go directly to the English translation of c't's Linux vs. NT article [heise.de].

  • by Victor Danilchenko ( 18251 ) on Monday June 28, 1999 @05:55AM (#1828513)

    While the article is rather interesting in and of itself, I think it points to a bigger issue: in the computer industry in general, and in benchmarking in particular, proper scientific tools and methods are often not used.

    Statistics is your friend! If some psychologists used similar methodology for their investigation, they would be laughed at -- I won't even talk about hard sciences.

    Yet in benchmarking, the perpetrators completely ignore the representativeness of their samples -- and that is all benchmarks are really supposed to be: a controlled investigation of the performance of a representative sample of real-world computing activities. How can you investigate performance if you don't even try to account for miscellaneous factors by using proper sample selection?..

    What can I say?.. The entire thing disgusts me. I would rant more, but I will simply go and sob in the corner about lack of scientific methodology in my field of choice.

    --

  • Yeah, the speed of IIS on NT and Apache on Linux doesn't actually matter in reality, because the size of the pipe will limit the speed to a much smaller number anyway, unless you have a ridiculously large pipe (multiple OC-3s, anyone?). The difference will only matter if someone sets up a box with that kind of pipe, and most sites -- even those which host large numbers of other sites -- don't have that kind of bandwidth. This really cuts down the significance of the Mindcraft studies, even if they were perfectly legitimate. So what if Apache on Linux was slower than IIS on NT, when that doesn't matter in reality!
  • by tap ( 18562 ) on Monday June 28, 1999 @07:11AM (#1828515) Homepage
    I remember hearing about how Linux follows the TCP spec and uses slow start, but NT doesn't.

    When a TCP connection is first established, and one machine wants to start sending data to the other, it isn't supposed to start sending as fast as it can. Rather, it's supposed to send only one packet, then wait for an ack, then send two packets, etc. Otherwise you end up with congestion rendering the network useless as you approach full bandwidth. Van Jacobson (remember VJ header compression from the days of SLIP?) has a paper on this.

    Since the Mindcraft test has unlimited bandwidth, no packet loss, no slow modem connections, etc., TCP stacks that don't do slow start properly can send out more data than ones that do.
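    A minimal sketch of the slow-start growth being described (textbook behaviour, not any particular stack's code): the congestion window starts at one segment and roughly doubles every round trip until it reaches the slow-start threshold, after which it grows linearly.

        # Segments a sender may have outstanding in each round trip under slow start.
        def slow_start(rtts, ssthresh=64, init_cwnd=1):
            cwnd = init_cwnd
            history = []
            for _ in range(rtts):
                history.append(cwnd)
                if cwnd < ssthresh:
                    cwnd *= 2      # exponential growth while below ssthresh
                else:
                    cwnd += 1      # congestion avoidance: linear growth
            return history

        print(slow_start(10))      # [1, 2, 4, 8, 16, 32, 64, 65, 66, 67]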
  • by evilpenguin ( 18720 ) on Monday June 28, 1999 @10:28AM (#1828516)
    To me this highlights the one true evil force in the world. No, not Microsoft. Ignorance. Dr. Science once said "Ignorance is bliss, and tonight, we're a happy country." I could not agree more.

    The basic problem is not Microsoft. Not their products, not their technology (or lack thereof), not their marketing people. The problem is the number of people who do not think critically. The NT benchmark does not lie. It simply tells a very narrow slice of truth and "positions" that truth to show NT and IIS in the best possible light.

    To me, the one outrageous thing on Microsoft's benchmark page [microsoft.com] is the chart that shows total cost of ownership. Now, I'm not a CIO, CEO, or CFO, but it seems to me that cost per transaction per unit time is completely irrelevant. What matters (as the author of the article we are commenting on here points out) is cost per transaction, and whether you can handle your transaction volume.

    When decision makers look no deeper than the cooked figures from NT's benchmark, when they fail to see if the scenario represents their business and technical reality, then their business gets what they deserve.

    What the "Microsoft Advertising for Linux" [alfred.edu] article does that is lauditory is it cuts through to a core question. Which is cheaper given a certain use case? It wisely does not answer, but merely points out that in most cases, even in most intranets the Mindcraft/Microsoft scenario is extremely unlikely and that Linux/Apache on even limited hardware will handle most loads anyone would reasonable expect.

    It also wisely points out that if you are a site in the tiny fraction that will exceed Linux/Apache's capacity, then by all means use NT/IIS.

    Then, one more dig of my own at the TCO figures. Even if we grant the validity of the cost-per-transaction-per-unit-time figure (which I do not), what happens if you set up ten servers, or twenty? Linux costs nothing more for ten servers than it does for one. I haven't the time to work out how many servers it would take, but there would come a break-even point, and then a point where Linux/Apache is cheaper even using the dubious measure in the Microsoft study.

    Finally, I just want to congratulate the author of "Microsoft Advertising for Linux?" for showing the value of trying some of your own math and asking, "Hey, is this reasonable?" If we all did this routinely, regarding everything from computer benchmarks to medical scare stories, we would live in a much saner and less stressful world -- whichever operating system you buy.
  • I notice in the PC Mag article they mentioned that Linux originally trounced NT, and that Microsoft then went away and worked until they could beat Linux. So the Linux coders fix the problem, and all is well.

    For a specific problem, if you throw enough money at it, you can usually come up with a hack that will make you come out on top. It simply shows that we were a bit complacent, perhaps.

    Of course, MS ignored the fact that this version of NT only exists in their labs. The version out there that everyone is using is still getting its butt kicked by the Linux boxes out there!

    I also think that it would be interesting if they had mixed the tests. Perhaps had the server do file and web serving and perhaps printing at the same time. Linux can handle all of this and still fly..

    It's been my experience NT gets a lot slower..

    Ryan
  • "I am assuming that the tests that ZD made were meant to mean something, so I won't entertain the idea that they used an average file size of less that 1K. Given that, It is clear that the numbers that ZD's tests produced are only significant when you have the equivalent bandwidth of over 6 T1 lines. Let's be clear about this: . . . if your site runs on 5 T1 lines or less, a single CPU Linux box with 256 MB RAM will more than fulfill your needs with CPU cycles left over."


    What percentage of the Linux using world would these tests pertain to then? If I was the person making the decision on which OS to use for a corporate website (with T3 connectivity), I would take into consideration a little bit more than one benchmark's POV.

    Ever wonder why your car insurance company makes you get three estimates for the $600.00 dent that Bambi left on your hood?



  • "We have, on the other hand, never heard of an NT support contract supplying NT kernels specially designed for customer problems."

    -Article in C't computer magazine. (link is around here somewhere.....).
  • Here's the link. Quite interesting actually.
    http://www.ct.heise.de/ct/english/99/13/186-1/
    Or click here [heise.de].
  • The German c't magazine has also conducted a comparative benchmark between NT 4.0 SP3 with MS IIS 4.0 and Linux (SuSE 6.1) with Apache 1.3.6, both running on a Siemens Primergy 870 web server (4x Pentium II Xeon @ 450 MHz, 2 GB RAM, with a four-disk RAID-5 array).

    (Mindcraft used a RAID-0 system, which might be even faster, but doesn't in any way guard your data if bad luck strikes; incidentally, one HD died during the test... :)

    Anyhow, the test showed again that NT is faster when it comes to serving static pages, especially when using SMP. Then again, the test also showed that Linux smokes NT when it comes to CGI, though they warned that NT users might use ASP instead of CGI, which is hard to compare. Still, it's funny that CGI under NT was rather weak; or is that a way of MS guiding people towards ASP?

    Another thing the test showed was that NT was only leading when it could serve all static pages directly from memory; it fell behind Linux when it had to read the files from disk due to insufficient disk cache...

    I'd tell everyone to go look at the results themselves, but I can't seem to find this story on their web server... they mention it being in the newest issue, but sadly they didn't put it online... *GRRR* : /


  • B) "Actually, if you look at the tests rather than the soundbyte about them, you will see that they prove that a Linux/Apache system will be waiting on our internet connection anyhow, and Linux is a much better value protosition than NT is, remote administration to a box with no mouse or monitor is extremely easy, remote displaying comes standard (no need to buy citrix), you get greater stability, and source code that uses open standards so that if life takes one of its many unpredictable turns and Linux is no longer the best solution, we won't be trapped into expensive solutions that we don't want to be in due to vendor lock-in."


    Well, I think we are missing the main point, which is how much more stable Linux is than NT...

    Example: at work we use Windows NT in the tech support department where I work. To prevent Windows NT from becoming unstable, we have been almost completely locked out of the systems; we can't even write to the local hard drive. That makes sense with Windows NT, but it means everything must be saved to the network drive, which again is no problem, unless your network drive is on a Windows NT server. In the last 5 weeks I have had my network drive disappear on me 3 separate times. To cope, I have had to back up my bookmarks by emailing them to yahoo.com and leaving them on their mail server, so that when Windows NT eats my network drive again I will still have the bookmarks I need to do my job.

    My point is, who cares whether they need 1 NT box or 2 Linux boxes to hold my network drive? Most users really don't care, as long as their data is there when they need it, which Windows NT 4.0 can't seem to manage, at least in my case.

    Robert
    rmiddle69@hotmail.com

  • From what I have read, it's not so much Linux's SMP that failed (although there are some performance issues there too); it is the fact that Linux doesn't really like driving multiple network adapters in parallel.
    That touches very low-level threading and SMP code which Linux has not yet perfected. Ironically, Linux probably would have kept up with NT a lot longer if they had removed three of the network cards and just let it do its thing with one.
    I think getting SMP working well under Linux is a priority... but how much energy do we really want our beloved kernel coders to spend on getting four 100 Mbps ethernet cards to push data in parallel? Seems like a pretty bullsh*t scenario to me.
  • HMMM, Rob, I have an idea. Remember when we tested the new /. server and banged the heck out of it? Why don't we do that one day for an NT server, and the next day for a Linux one? We could run them at the same time of day and everything, then just not tell anyone which OS was on which day. I think this would tell us everything we need to know. Then again, I don't know if we could put Slashdot on NT, but I think we could work out something that uses dynamic content and would be a real-world test; even something like the crude client sketch below would do for a first pass.
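    For the record, a "bang on it and see" harness doesn't have to be fancy. Here is a minimal sketch using only the Python standard library; the URL, client count, and request count are placeholders, and a real test would obviously need identical content on both servers:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://server-under-test/index.html"   # hypothetical target
    CLIENTS = 32                                   # simulated concurrent clients
    REQUESTS_PER_CLIENT = 100

    def client(_):
        """Fetch the page repeatedly and count successful responses."""
        ok = 0
        for _ in range(REQUESTS_PER_CLIENT):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    resp.read()
                    ok += 1
            except OSError:
                pass    # failed or timed-out requests simply aren't counted
        return ok

    start = time.time()
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        completed = sum(pool.map(client, range(CLIENTS)))
    elapsed = time.time() - start
    print(f"{completed} successful requests in {elapsed:.1f} s"
          f" = {completed / elapsed:.1f} req/s")

    Point it at each box in turn (same pages, same hardware, same hour of day) and you get a crude but honest apples-to-apples number.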
  • ...Here at <name changed to protect the guilty; let's say a large computer company> we get approximately 40% of our daily hits over a four-hour period: between 10 and 12 and between 2 and 4. Sure, if you are getting a steady stream over 24 hours you aren't going to have many problems with either solution, but that definitely isn't the case.
  • My God, Linux users are really pulling out the stops to mince, re-mince, and doubly re-mince the results of these performance tests. I don't think they'd be so critical of the numbers and methodology had Linux come out on top.

    Actually, most linux users that I know are quite level headed and take benchmarks for what they are worth. Of course, any community has its freaks. I do not consider the author of this one to be one of them -- he was illustrating a valid point.

    BTW, you seem to be pulling out all the stops to stereotype Linux users as raving lunatics who cannot stand to live if their favorite OS loses a benchmark.

    And now this report that basically says that low performance is okay because your employees shouldn't be using an intranet that much anyway.

    Something tells me that if your employees are generating ~1500 hits per second on an internal web server, you have bigger problems than a slow operating system.

    Come on people, you lost fair and square.

    I won't dispute that, but a lot of people do not seem to understand that this was not a test of NT vs. Linux. What you can say is that IIS on NT beat Apache and Zeus on Linux when serving static web pages.

    Linux performance has always been at the bottom of the unix pack. Why do these tests shock you?

    Have you ever run Linux vs. Solaris on uniprocessor Sun hardware?
  • Point #1 actually brings up something I've had on my mind for a while.

    What about a code/distribution/architecture fork to attack the desktop market? Would it be desirable to fork the system to be optimized and designed for workstation use, and continue the current design/architecture/code tree for server use?

    It seems to me that trying to attack the desktop market with a system that is server centric is a non-technically-optimal idea. Kind of like putting a GUI on NetWare. One size may fit all, but it doesn't end up fitting anyone very well.

    The goals of a workstation and server as I see them are radically different. On the server side, how practical/important/useful are things like 3D Video and audio cards? On the desktop, these sorts of things are much more important.

    Similarly, on a server I would think task scheduling and prioritization would be handled more evenly than on a workstation. How about a system that can easily "get out of the way" of a game or other app that needs dedicated resources?

    Any thoughts?
  • by SpinyNorman ( 33776 ) on Monday June 28, 1999 @06:43AM (#1828539)
    NT beat Linux (on these benchmarks, at least), even when Linux was optimally configured. We whined for a retest with an optimally configured Linux, and got it - hey, we even got a lot of changes in the test setup - UP as well as SMP, different RAID controller, NT as well as Win 95 clients... BUT, we STILL lost.

    Rather than whining, I'd say there's a lot to be grateful for - mainly the fact that the testing has shown some prior bottleneck assumptions to be wrong, and has exposed the real problems which can now be addressed. Given the results, the press coverage has also been rather kind to Linux.

  • by dermond ( 33903 ) on Monday June 28, 1999 @06:32AM (#1828542)
    in the last issue of c't magazine, they also had a linux vs. NT web-server shootout. they used a siemens primergy 870 machine (4 xeon 450 MHz CPUs, 2 GB RAM, intel etherpro 100 NIC, mylex DAC 960 RAID controller, price of the machine: about 100,000 DEM). here are some of the results:

    serving one static html page of 4 KB size: NT and linux almost on par (linux ahead by a few %); both systems answer 900 requests/s when hit with 512 concurrent client processes.

    with an 8 KB static page: linux is between about 5 and 10% ahead of NT. at 512 client processes the linux machine serves about 600 requests/s, the NT machine about 550.

    using a 4 KB page but selecting one random page out of 10^4 pages: linux gets about 830 req/s and NT about 720. the linux line seems saturated where the NT line has already bent downward: linux 15% ahead of NT.

    random 4 KB static page out of 10^6 different pages: linux about 270 req/s while NT never gets more than about 30 req/s. that means linux is some 800% ahead.

    now some dynamic pages. they used plain old CGI scripts with perl, no PHP or ASP. using all 4 CPUs, linux answers 210 to 250 requests/s while NT is around 60! that means linux is about 316% faster.

    same as above but using only 1 of the CPUs of the machine: linux around 100 req/s, NT around 25 req/s. linux ahead by 300%.

    if the script contains a sleep(3) at the beginning (to simulate a slow database connection or slow client connections), the results are: linux increases the number of requests linearly with the number of requesting processes and reaches about 80 req/s for 250 simulated clients, while NT saturates at around 7 req/s (in words: seven). linux wins by over 1000%. (a rough model of why the gap opens up like this is sketched below.)
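    The arithmetic behind that gap is just Little's law: with a fixed 3-second wait per request, throughput is capped at roughly (concurrent request handlers) / 3 s. A minimal sketch follows; the handler counts below are inferred assumptions, not figures reported by c't:

    SERVICE_TIME_S = 3.0   # the artificial sleep(3) at the top of the CGI script

    def throughput_ceiling(concurrent_handlers, service_time=SERVICE_TIME_S):
        """Max requests/s if every request spends `service_time` seconds
        doing nothing but waiting (Little's law: L = lambda * W)."""
        return concurrent_handlers / service_time

    for handlers in (8, 21, 64, 240):
        print(f"{handlers:4d} concurrent handlers ->"
              f" {throughput_ceiling(handlers):5.1f} req/s ceiling")

    Read that way, NT topping out near 7 req/s suggests only about 20 CGI requests in flight at once, while Linux at 80 req/s is comfortably running a couple of hundred concurrent apache/CGI processes, which is exactly the linear scaling the chart shows.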

    the only time that NT is ahead of linux (about a factor of 2) is when using 2 NIC cards instead of 1.

    my interpretation of all this: i guess there are very few webservers where one would need more than 100 Mbit/s, and then one would probably be better off with 2 cheaper systems doing load balancing. given the extra reliability, remote management, etc. of linux and the extraordinarily better performance in most tests, linux is the clear winner. serving CGI scripts fast is far more important than efficiently supporting 2 or more 100 Mbit cards, at least for 99.9% of all webservers or more.

    greetings from vienna, austria.
    der mond.

  • The German computer magazine c't published in their last issue (number 13) a test of NT/IIS vs. Linux/Apache as a web server. They used a 4-processor Siemens box with a RAID 5 for disk storage.

    Here is what they found out: (All numbers are estimates from the charts; they may be off, but not by much.)

    First, as soon as you have to use more than one network adapter, NT wipes Linux's butt on static pages. Linux seems to be pretty bad in this area. Their guess is that it has something to do with the kernel and multithreading. This pretty much confirms the findings of Mindcraft.

    The main part however was dedicated to a slightly different scenario. They tested a few different things.

    The first test was serving a single static file of 4 kB size. They stopped measuring at 512 serving processes, which corresponds in their environment to about 950 hits/s for both Linux and NT, with Linux leading. With 8 processes NT leads 560 to 480 hits/s, from 16 to 32 processes they are roughly the same, and from 64 processes on Linux leads by 20 to 50 hits/s.

    The second test was the same as the first, only with 8 kB files. There Linux leads the whole way by 20 to 50 hits/s, maxing out at about 550 hits/s versus 530 for NT.

    The third and the fourth tests were again serving 4 kB files. In the third test the files (10,000 of them) all fitted into the cache; in the fourth test they didn't. In test number 3, NT is only a bit better with 8 processes (380 to 320 hits/s), and then Linux leads by about 100 hits/s, maxing out at 820 hits/s versus 720 for NT.

    As soon as NT needed to use the disk (1,000,000 files), its performance wasn't so good. It stayed at a constant 20 hits/s while Linux went from 50 hits/s to 280 at 512 Apache processes. It seems NT doesn't like a RAID 5 that much; it prefers a RAID 0.

    The fun really started when they tried dynamic pages with Perl scripts. There the single-processor Apache was able to serve about twice as many pages as the 4-CPU IIS server. The numbers:

    NT 1 CPU: 30 hits/s
    NT 4 CPU: 55 hits/s
    Linux 1 CPU: 105 hits/s
    Linux 4 CPU: 200 to 245 hits/s

    Then, to simulate database queries, they added a delay of 3 s to the CGI script. The performance of the NT server dropped from 30 to less than 5 hits/s. For Linux, the performance was linear in the number of processes, starting at 5 hits/s for 1 process and going to 80 hits/s for 256 processes, instead of the 105 for the version without the delay.

    Their conclusion [rough translation]:

    For a dedicated web server with static HTML only, additional CPUs are not worth the bother. Even on 2 Fast Ethernet segments, the increase is only about 20%. CPU power seems not to be the limiting factor. [...]

    The relatively bad results for Linux with two network adapters indicate that the Mindcraft results are plausible, and that NT and IIS perform better than their free competition, if one wants to play by Mindcraft's rules.

    [Then some text explaining that 1000 hits/s is about ten times the peak value they get on their own server, and that those pages have to be static and cached.]

    As our tests demonstrate, the Mindcraft results cannot be applied to situations where pages have to be generated dynamically, which is the case on nearly all serious web sites.

    In SMP mode Linux showed some definite weaknesses. Even kernel developers acknowledge that Linux still has problems with scalability in SMP settings, especially if the load occurs in kernel mode. However, if the load is, as with CGI scripts, in user mode, Linux profits fully from the additional CPUs. There is work being done to fix these problems.

    For real applications such as web servers, Apache and Linux are already ahead. If the pages cannot be served directly from the cache, the situation is even better for Linux: here the commercial products from Redmond do not even get close to the open source projects.

    [They finish by noting that Mindcraft was right that finding tuning info for Linux and Apache can be hard, and that it does not rival a commercial support infrastructure. But once they got through to the developers, they got help fast, whereas it took Microsoft a week to reply. They also mention that you won't get a kernel personalised for your environment from Microsoft.]

    [End of rough translation.]

    On the whole the article is a bit too pro-Linux. They should at least have tried to compare a similar web application done as CGI and as ASP. This would give some relevant information on how the servers compare for similar tasks. I personally don't care whether I have to write a Perl script or ASPs.

    Servus,

    johi

    PS: Note that the article was not available on the web the last time I checked.
  • by LostOne ( 51301 ) on Monday June 28, 1999 @05:55AM (#1828552) Homepage
    It is interesting to see someone actually present some useful information regarding "benchmarks". Note that the article does not refute the benchmark; it just puts it into a different perspective, and also puts into perspective the difference between a benchmark and reality. If you really think about it, how many sites can possibly get the volume of hits required to run into these numbers? I mean, how many sites are being served over a dedicated T3? Most are coming over load-balanced T1s or what have you, shared among numerous other physical boxes. (That is not to say that there are no sites with such a load.) Even if you had such a site, you probably wouldn't want to rely on a single server to handle the whole thing anyway. (A single point of failure? That's asking for trouble.)

    Well, that's my 14 cents worth.
  • I run a medium-sized web site on a P90 with pathetically slow hard drives and too little memory. Linux and Apache, of course... I'd like to see NT even run anything useful on a P90! But that damn P90 sure can peg the T1 it's on with relative ease.
    But I digress; a loss is still a loss, and saying that Linux still easily outruns 99.999% of anyone's bandwidth is just an attempt to sanitize the loss, i.e. "Linux is more than fast enough." Well, that's the same as windoze: "windoze is good enough" for most people. [sic]
    nutsaq
