
Red Hat/Apache Slower Than Windows Server 2003?

Posted by Zonk
from the who-doesn't-love-some-delicious-fud dept.
phantomfive writes "In a recent test by a company called Veritest, the Windows 2003 web server delivers up to 300% higher throughput than Red Hat Linux running Apache. Veritest used WebBench to do their testing. Since the test was commissioned by Microsoft, is this just more FUD from a company with a long history? Or are the results valid this time? The study can be found here."
This discussion has been archived. No new comments can be posted.

  • by dtfinch (661405) * on Saturday May 07, 2005 @02:21AM (#12460471) Journal
    Looking at the first page of the benchmark report, I see that they're using the exact same setup as in their highly contested Samba benchmark: a specific, ancient version of Red Hat running on a specific hardware setup that version is known to have performance problems on. They could at least have tried a different server this time, or a modern version of Linux. Under fairer circumstances, who knows, IIS might still have won, but this rigged benchmark has nothing to offer us in deciding which server is faster.
    • by PsychicX (866028) on Saturday May 07, 2005 @02:35AM (#12460540)
      Yeah...yeah...

      I just wish, just ONCE that somebody would do a fair evaluation, without an agenda to forward. But I guess that'll never happen. We all have bias...but surely we could at least attempt to get above that?
      • by Pinefresh (866806) <william.simpson@gma i l .com> on Saturday May 07, 2005 @02:55AM (#12460632)
        Well, if you really want to know, you could probably do one yourself. It couldn't be too hard to put a simple one together, and it would settle the question for you.
      • by xiando (770382) on Saturday May 07, 2005 @04:15AM (#12460890) Homepage Journal
        I personally do not trust a company that describes itself as "an independent testing agency authorized by Microsoft to carry out testing for applications developed on the Windows platform" to do a fair evaluation of Linux vs. Windows. If a company that makes a product gives you a huge pile of money at regular intervals and asks you to compare that product to another product, who are you going to vote for? Who is your daddy? Sadly, money is everything.
        • by rben (542324) on Saturday May 07, 2005 @11:38AM (#12462295) Homepage
          Sadly, money is everything.

          Not so. If it were, there would be far less support for Open Source projects. Fortunately, as FOSS has demonstrated, large numbers of human beings are quite capable of being motivated by interesting problems and the knowledge that their work will benefit everyone else.

          Be cynical if you like, but every day you use Linux or Open Office; every day you see a website served by Apache; know that it's because some people value contribution to society enough to donate their time and creative energies.

      • by eno2001 (527078) on Saturday May 07, 2005 @10:14AM (#12461902) Homepage Journal
        Actually, what is needed is a public, non-profit benchmark competition. Both Windows and Linux enthusiasts are welcome to join in. Limit the contest to 100 teams of up to ten people. The 100 teams are all supervised by the people who run the contest. The contest itself should make no money of any kind, in order to keep away any monetary incentive. Hardware donations from the big players are acceptable with the understanding that the hardware will be returned after the competition. In this way, the ugly little trait called "competition" gets in without any monetary incentive. At that point it's enthusiasts trying to outdo each other on both platforms. With this setup, you really get to test the performance of both OSes in a fair way, because enthusiasts are likely to know all the tricks to get their OS and application to perform best. This means you'll likely see Windows outperforming a typical Windows system and Linux outperforming a typical Red Hat/Mandrake/Debian/Gentoo/SuSE system. Sounds like fun. So who wants to get this party started? :)
        • by Glonoinha (587375) on Saturday May 07, 2005 @01:26PM (#12462847) Journal
          Just put up two professional servers, fill em with some nudie-pix and post the links to FARK at the same time. That's about the most intense benchmark known to man.
        • by khasim (1285) <brandioch.conner@gmail.com> on Saturday May 07, 2005 @02:12PM (#12463070)
          The "big players" can contribute cash, but not hardware. It is too easy for them to contribute hardware specifically enhanced for their product.

          #1. Each team gets X dollars and no restrictions on what it can buy. After all, that should be how businesses run their shops. We aren't comparing hardware, but total systems.

          #2. Each team must purchase the software off the shelf.

          #3. No team is allowed to recompile anything or to use any drivers, etc., not available from a public server for the past 12 months. This might sound like a bad deal for Linux, but it will also stop Microsoft from rewriting the drivers. Again, most companies do not have access to that level of expertise, so that won't be allowed.

          #4. Each tweak or configuration setting must be documented and a reference for it shown on a public website or manual. Again, businesses only know what they can read.

          #5. At the end of the competition, the other teams will critique each team's configuration. We've all seen the "tests" where Windows is running on a RAID 0 array which is beyond stupid for real production work.

          That way, each team can deploy the best system they can think of for the test. I'm sure you all remember MindCraft and their massive single server "test" for webservers when anyone else would have run multiple cheaper servers and gotten higher throughput.

          So, a test is run, and the Windows team buys the biggest single system they can afford for the money, while the Linux team fields a dozen boxes booting from CD and one storage box.

          Which system would be "better"?

          Which system would be faster? Would that be the same answer under different loads?

          Which system would be easier to maintain?

          Which system would have higher uptime?

          Which system would be easier to scale up?
    • by cperciva (102828) on Saturday May 07, 2005 @02:36AM (#12460546) Homepage
      ...a specific ancient version of Red Hat

      This report was written in April 2003, according to the first page. They used the most recent version of RedHat available to them.

      This report may be two years out of date, but I can't see any signs of bias in its production.
      • by MemoryDragon (544441) on Saturday May 07, 2005 @04:28AM (#12460915)
        No... for that date, definitely not, but things have gotten much faster on the Linux side since kernel 2.6.
      • by jrumney (197329) on Saturday May 07, 2005 @05:21AM (#12461090) Homepage
        This report was written in April 2003, according to the first page

        Strange, they have a press release [lionbridge.com] on their website dated April 6, 2005 about the report being commissioned by Microsoft. Either Microsoft got ripped off by recycling an old report, or one of those dates is wrong.

      • by Haydn Fenton (752330) <no.spam.for.haydn@gmail.com> on Saturday May 07, 2005 @09:39AM (#12461741)
        Slashdot, 7th of May 2005.

        • Linux: Red Hat/Apache Slower Than Windows Server 2003?
          Posted by Zonk on Saturday May 07, @06:20
          from the who-doesn't-love-some-delicious-fud dept.
          phantomfive writes "In a recent test by a company called Veritest, the Windows 2003 web server delivers up to 300% higher throughput than Red Hat Linux running Apache. Veritest used WebBench to do their testing. Since the test was commissioned by Microsoft, is this just more FUD from a company with a long history? Or are the results valid this time? The study can be found here."


        Slashdot, 11th of May 2005.

        • Microsoft: 2k3 Server vs RedHat\Apache
          Posted by Michael on Wednesday May 11, @09:01
          from the oops-they-did-it-again department.
          fooslashbardot writes "Well, it looks like the suits at Redmond have done it again with last week's test stating that Windows 2003 Server outperforms RedHat\Apache by 300%. We knew the test had been commissioned by Microsoft, and now a recent Wired article has arisen which claims that Mr. Gates himself was seen slipping the people at Veritest wads of up to 10,000 hundred-dollar bills shortly before the announcements were made. Gates has denied all such claims, and says that Balmer smells of Cheese."


        I've never used either, and I don't know anything about Veritest, so I haven't a clue whether the results are likely to be correct or not. But we all know Microsoft :P
    • by eric76 (679787) on Saturday May 07, 2005 @02:38AM (#12460553)
      Using the same logic, my old '64 International Harvester pickup could be shown to be faster than a Formula 1 race car.

      I have the ideal road for the test in mind.

      Now all I need is for someone to loan me a Formula 1 race car for the test.
    • by rokzy (687636) on Saturday May 07, 2005 @02:38AM (#12460556)
      Wrong, it does tell us which is faster: Linux. If Windows was faster, why would they need to benchmark against a crippled system?

      Sure, there's a chance I'm wrong, but for me, weighing the CHANCE of better performance from Windows against the CERTAINTY that they have lied about their product (or been completely incompetent) is a no-brainer.

      And that's not considering costs (remember guys, using Linux always requires an old, slow mainframe to be factored into the TCO!)
    • by Coryoth (254751) on Saturday May 07, 2005 @02:38AM (#12460558) Homepage Journal
      Under fairer circumstances, who knows, IIS might have still won, but this rigged benchmark has nothing to offer us in deciding which server is faster.

      I've reached the point where I completely ignore all the studies and benchmarks like this, from both sides. It is, quite simply, far too easy to set the constraints and metrics up so as to make sure you come out ahead. What's worse, it has become absolutely standard practice to do so. Studies have become completely useless because you can guarantee that they've been cooked one way or another.

      Jedidiah.
      • by iamacat (583406) on Saturday May 07, 2005 @04:53AM (#12460992)
        Well, I wouldn't use them for purchasing decisions, but these kinds of studies are certainly useful for pointing out weak spots in your favorite product. Time for the Apache hackers to get busy and fix such embarrassing performance scenarios.

        I don't think Apache is the right server for static pages and simple CGIs though. It has so many modules and settings that the code path from filesystem to socket has to be much longer than necessary and longer than the feature-limited competition. They should try a simple server like Boa.
      • by zobier (585066) <zobier@z[ ]er.net ['obi' in gap]> on Saturday May 07, 2005 @05:26AM (#12461102)
        What someone should do in these kinds of tests is get an expert Windows team and an expert GNU/Linux team, give them identical servers, and let them configure them as best they can. That seems fair.
        • I like it. (Score:3, Funny)

          by hey! (33014) on Saturday May 07, 2005 @09:56AM (#12461829) Homepage Journal
          Kind of like the Ultimate Fighting Championship -- a no-holds-barred competition between alpha geeks to see who can webbench more.

          Of course this is completely irrelevant to real-world usage scenarios. What we need is another data point from the other end of the spectrum. It can be like one of those reality shows. You rope in four teams of ordinary folk right off the street, hand each team identical bare (no OS) servers, only each team gets a different operating system: Windows, RHEL, FreeBSD, and MacOS (of course, the Mac group will have to work with the closest approximation we can manage to the x86 box, going by paper specs). Their task will be to build an ecommerce site and successfully run it without getting hacked for four successive weeks, armed only with the documentation provided with the operating system.

          We can call it SURV1V0R.
          • Re:I like it. (Score:5, Interesting)

            by barneyfoo (80862) on Saturday May 07, 2005 @11:09AM (#12462162)
            Actually, the ultimate test would be for an independent party to sponsor a challenge.

            Each team (Windows and Linux) would get:

            $5,000 in cash with which to buy hardware and software. All purchases must carry a receipt and all parts must run to spec. No overclocking.

            Guaranteed five-nines power.

            Each team's computer will be housed in the same independent facility, maintained by the sponsor.

            The contest can last no longer than a year. Each team will be able to maintain their own server throughout the competition.

            The scoring will be simple. You won't lose points for having downtime. Your score is simply the number of server pages (the kind to be determined) you've properly served before your first moment of downtime. So if your server crashes before the year is over, the number of pages served up to that point is your score.

            Maybe someone has an idea of what good server software to run.
            • Re:I like it. (Score:3, Interesting)

              by fireboy1919 (257783) <.rustyp. .at. .freeshell.org.> on Saturday May 07, 2005 @12:32PM (#12462555) Homepage Journal
              No, that's still no good.

              Then you could be dealing with luck. You happen to get a bad batch of RAM and your server crashes? Sucks for you; the other guy wins. Somebody decides to help the other team win via DDoS? Sucks; other team wins. Random lightning strike? You see the problem?

              Plus it makes stability the ultimate concern rather than (possibly) throughput, which is clearly a benchmark in favor of Linux, since the OS itself is simply better designed (if for no other reason than because they replace the worn-out parts more often). If you go down for a minute every day, but only for a minute, will anyone care?

              Most likely not. Incidentally, that's about the length of time it takes for me to restart my Apache install. Heck, I could run Apache from xinetd without too much problem, which to me is kind of cheating.

              A better idea would be to separate these into two separate scores: one for uptime characteristics (including recovery time), and one for throughput.
        • by ChrisCampbell47 (181542) on Saturday May 07, 2005 @11:51AM (#12462343)
          A real competition would give 2 (or more) teams identical budgets, not identical hardware. They would need to PAY all consultants at market rates. No free consultation from Microsoft (or Red Hat) R&D allowed, unless they are paid, and paid at market rates.

          The budget has to buy software, hardware and setup labor.

          This eliminates the problem of "that hardware favors Microsoft" or "that team had better engineers". It all comes down to money and value.

          Of course the competition would need to state up front exactly how performance would be measured and how the various different tasks (static pages, cgi, etc.) would be weighted to come up with any overall scores. That would dictate the design choices made by each team.

    • by dtfinch (661405) * on Saturday May 07, 2005 @02:42AM (#12460576) Journal
      "we applied no additional patches and made no additional modifications to the Red Hat Linux Advanced Server 2.1 distribution used for these tests"

      I remember installing CentOS 3, based on RHEL 3, on a server and having terribly slow disk performance with my RAID adapter. Running "yum update" to get the current patches yielded about a 10x speedup. Yet the Windows server gets a dozen or so undocumented registry tweaks.

      In the SSL comparison, they're using the fastest (though slightly less secure) choice of encryption algorithms in IIS and the slowest in Apache. They're comparing RC4+MD5 to 3DES+SHA1.

      And they decided to include ISAPI in the benchmarks without including the Apache equivalent. All they test in Apache is CGI. So again, it's IIS's fastest option versus Apache's slowest option.
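      (A quick sanity check of the digest half of that gap, for anyone curious: a tiny micro-benchmark using Python's hashlib. This only covers MD5 vs. SHA1, not RC4 vs. 3DES; on most hardware of that era MD5 comes out well ahead, and absolute numbers are entirely machine-dependent.)

```python
# Unscientific micro-benchmark: MD5 vs. SHA-1 digest cost, i.e. the MAC
# half of the RC4+MD5 vs. 3DES+SHA1 comparison. Numbers vary by machine;
# only the ratio between the two is interesting.
import hashlib
import timeit

payload = b"x" * 8192  # one 8 KB block, roughly a small static page
results = {}

for name in ("md5", "sha1"):
    digest = getattr(hashlib, name)
    results[name] = timeit.timeit(lambda: digest(payload).digest(), number=20000)
    print(f"{name}: {results[name]:.3f}s for 20000 x 8 KB blocks")
```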

      • by spuzzzzzzz (807185) on Saturday May 07, 2005 @03:27AM (#12460758) Homepage
        In the past I have seen people post blatantly false things which get accepted as true just because the mods are too lazy to check. So I thought I'd chime in here with links to some evidence to back up parent.

        1) The algorithms used in SSL are listed on page 33 of the PDF linked to. Both Linux setups use 3DES+SHA1 and Windows uses RC4+MD5 (as parent said).

        2) This [hn.edu.cn] page (found via google) has a table comparing ciphers about 2/3 of the way down. RC4 appears to be about 2-3 times faster than 3DES.

        3) This [ottawa.on.ca] email contains a comparison between MD5 and SHA1. MD5 appears to be 2.5 - 5 times faster than SHA1.
        • by Anonymous Coward on Saturday May 07, 2005 @04:33AM (#12460930)

          Speaking as someone who has quite a bit of experience with cryptographic algorithms, I back up parent and grandparent. The benchmark is completely biased in that Veritest really ends up comparing 3DES+SHA1 with RC4+MD5. This is unacceptable. I invite Slashdotters to complain to Veritest:

          Veritest
          1001 Aviation Parkway, Suite 400
          Morrisville, NC 27560
          Tel 919-380-2800
          Fax 919-380-2899
          E-Mail: info@veritest.com
      • by Jacco de Leeuw (4646) on Saturday May 07, 2005 @01:40PM (#12462909) Homepage
        In the SSL comparison, they're using the fastest (though slightly less secure) choice of encryption algorithms in IIS and the slowest in Apache. They're comparing RC4+MD5 to 3DES+SHA1.

        I found another flaw on that same page.

        VeriTest also writes that Windows 2003 was using RSA key exchange and Red Hat was using Diffie-Hellman (DH).

        But DH [wikipedia.org] is vulnerable to a man-in-the-middle attack, so SSL uses RSA to perform the authentication.

        So Red Hat is doing RSA and DH, whereas Windows is doing only RSA!

        Using OpenSSL's ssltest program I noticed that DH+RSA was 50% slower than RSA:

        $ time ./ssltest -num 1000 -tls1 -cert server.pem -key server.key -c_cert client.pem -c_key client.key -cipher "RC4-MD5:@STRENGTH" -client_auth -server_auth -CAfile cacert.pem
        $ time ./ssltest -num 1000 -tls1 -cert server.pem -key server.key -c_cert client.pem -c_key client.key -cipher "EDH-RSA-DES-CBC3-SHA:@STRENGTH" -client_auth -server_auth -CAfile cacert.pem

        And I would not be surprised if Windows 2003 was using SSLv2 (faster and insecure) while Linux was using TLS1! Because that is another parameter that VeriTest is not disclosing.

    • by darkain (749283) on Saturday May 07, 2005 @03:30AM (#12460766) Homepage
      I personally run a Windows-based server (yes, hate me if you will, but I need some Windows-only tools at the moment). I used IIS for about 3 to 4 years, until I started to get heavily into PHP development, running a source control system, and game hosting. I switched from IIS to Apache because it had better support for virtualizing directories based on conditions in easy-to-set-up script files, which made it easy for me to run the UT2004 server plus the mod download server on the same box. This turned out to be a big hit at LAN parties, since the server had all of the packages and would share directly from the server folders (but restricted the server's config files from anonymous access). I later switched to SVN for storing my programming projects, and its integration with Apache is great.

      I am a Microsoft OS user by nature. I switched to using Apache on my Windows server because of features IIS lacked, and now I'm never turning back.

      "I am Darkain... and I'm a coder"
    • by julesh (229690) on Saturday May 07, 2005 @04:57AM (#12461005)
      I've actually seen this ridiculously unfair test before. The main thing that is wrong with it is that last-access-time recording is switched off on the Windows setup but not the Linux one. For web serving, which typically involves large numbers of accesses to small files, last-access-time recording is a _severe_ performance drag.
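      (For reference, the Linux-side equivalent of that Windows tweak is the noatime mount option; a sketch of an /etc/fstab entry, with a purely illustrative device and mount point:)

```
# /etc/fstab - serve web content without per-read access-time updates
# (device, mount point, and fs type below are illustrative)
/dev/sda3  /var/www  ext3  defaults,noatime  1 2
```

      (The same option can be applied to a live filesystem, as root, with "mount -o remount,noatime /var/www".)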
    • by cofaboy (718205) on Saturday May 07, 2005 @06:17AM (#12461220)
      Don't forget that MS changed the EULA so that you are no longer allowed to benchmark Windows and MS products without written permission. The only commercial outfits that can benchmark are those who will use a framework defined by MS.

      Any other options will mean no study and no money.

      He who pays the piper calls the tune.

  • by PsychicX (866028) on Saturday May 07, 2005 @02:22AM (#12460472)
    10%? 15%? Those are numbers I'd believe. But THREE HUNDRED PERCENT? I like Microsoft, and I like when somebody defends them. But this is just bull.
    • by superpulpsicle (533373) on Saturday May 07, 2005 @02:31AM (#12460519)
      I am not even sure you can get a 300 percent difference racing a 486 PC against a 1 GHz PC in any test.

    • by Anonymous Coward on Saturday May 07, 2005 @03:06AM (#12460681)
      I like Microsoft, and I like when somebody defends them.

      I've been in IT for about 17 years. I've seen MS destroy "the little guy" time and time again with their power, and yet, with all that power, money, and developer base, deliver garbage year after year, to this day.

      Then I compare them with offerings like Mac OS X, the BSD's and Linux and wonder, how on Earth someone can say, "I like Microsoft".

      Seriously now, what is there to like about them?
      • by menkhaura (103150) <espinafre@gmail.com> on Saturday May 07, 2005 @04:01AM (#12460855) Homepage
        Four words:

        Well paid Microsoft employee.
  • Easy (Score:5, Informative)

    by green pizza (159161) on Saturday May 07, 2005 @02:22AM (#12460475) Homepage
    Out of the box, Apache doesn't do too well. But take some time tuning it, and your OS's TCP/IP stack, and you can easily outperform even Zeus. Read some of the tuning guides.
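    (A sketch of what "take some time tuning it" typically means in httpd.conf terms; directive values and the directory path here are illustrative starting points for the Apache 2.0 prefork MPM, not recommendations:)

```apache
# Process-pool sizing for the prefork MPM (values are starting points)
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          256
    MaxRequestsPerChild 4000
</IfModule>

# Skip per-request .htaccess lookups under the document root
<Directory "/var/www/html">
    AllowOverride None
</Directory>

# Don't do a reverse DNS query for every request
HostnameLookups Off
```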
  • Let's see. A test commissioned by Microsoft says IIS is faster than Apache. The link for more information goes to microsoft.com. Is this really "news"? Seems more like a thinly-disguised press release...
    • Seems more like a thinly-disguised press release...

      s/press release/troll
    • by august sun (799030) on Saturday May 07, 2005 @02:38AM (#12460554)
      Can we please, for once, be mature about it and look at their methodology objectively? I'll even grant that because it was commissioned by MS a little extra scrutiny is certainly due; but summarily discarding the study for this reason alone is the intellectual equivalent of sticking our fingers in our ears and screaming "lalalalalala" at the top of our lungs.
      • by cranos (592602) on Saturday May 07, 2005 @03:06AM (#12460674) Homepage Journal
        Um, okay, I did the "mature" thing and checked out the report. The report is two years old and compares an RC version of W2K3/IIS6 against an old version of Red Hat AS/Apache, thus rendering it completely useless for doing an evaluation today. Not only that, but it neglects to compare against other Linux distributions such as SUSE or Mandrake, thus rendering the "Windows better than Linux" claims deceptive at best.

      • by jmv (93421) on Saturday May 07, 2005 @03:18AM (#12460734) Homepage
        The thing with benchmarks is that when they're made by an organisation you can trust, you don't really have to dig into the details (and there are always some details you won't see). If I have to dig through everything, I might as well do the benchmark myself! Now, looking at a benchmark sponsored by Microsoft is like reading a study on climate written by an oil company, or a study on health by a tobacco company... or even a Linux-Windows benchmark done by Red Hat (although I trust RH a bit more than MS).

        The only benchmark by MS which I might trust is one saying Windows is slower and/or worse than Linux. Somehow, I never saw any of those.
      • objectively? (Score:3, Interesting)

        by Infonaut (96956) <infonaut@gmail.com> on Saturday May 07, 2005 @03:40AM (#12460791) Homepage Journal
        I'll even grant that because it was commissioned by MS a little extra scrutiny is certainly due; but summarily discarding the study for this reason alone is the intellectual equivalent of sticking our fingers in our ears and screaming "lalalalalala" at the top of our lungs.

        Actually, it's learned behavior. We've seen so many fact-warping MS-sponsored studies, astroturfing campaigns, dissembling regarding the nature of their monopoly, and other aggressive PR that it's no wonder people are more than a little skeptical.

        This reminds me of something someone told me about graphics card benchmarks. He is a 3D graphics professional, and he was called in by a rather large chip company to help them in a test against another large chip maker's video card. The arrangement was that he would work with the representative from the other company to come up with a "fair" set of tests to which both sides could agree.

        As the more experienced guy, he was able to get his counterpart to agree to tests that worked squarely in favor of his company's card. This is in a scenario where it is supposed to be evenhanded, since both companies agreed to the test methodology.

        So it's bad enough already. Compare a situation like that to one in which Microsoft is commissioning a study, and you can imagine why people react with such profound skepticism.

      • They are looking at the methodology objectively, and they have in the past. The deal is that MS keeps rolling out this same study, using the same methodology, and it doesn't hold up.

        a) they use a slower kind of encryption on the apache side, which makes apache seem slower.

        b) they use a 2003 version of Red Hat with a 2.6 kernel, whereas Linux is now up to a newer version.

        c) they make other tuning decisions for the RH they do use in order to slow it down, and to speed Microsoft up.

        In short, the test is rigged so that MS wins and Linux loses. It is that simple.
  • *ahem* (Score:2, Informative)

    by SynapseLapse (644398) on Saturday May 07, 2005 @02:24AM (#12460485)
    Veritest used webbench to do their testing.
  • by Evro (18923) * <evandhoffman@g[ ]l.com ['mai' in gap]> on Saturday May 07, 2005 @02:26AM (#12460495) Homepage Journal
    Microsoft Windows Server 2003 vs. Linux
    Competitive File Server Performance
    Comparison


    Test report prepared under contract from Microsoft

    Executive summary
    Microsoft commissioned VeriTest, a
    division of Lionbridge Technologies,
    Inc., to conduct a series of tests
    comparing the File serving
    performance of the following server
    operating system configurations
    running on a variety of server
    hardware and processor
    configurations:


    At least they're up-front about it these days.

    Other Veritest-Microsoft fun:

    http://www.veritest.com/clients/reports/microsoft/ [veritest.com]
    http://www.microsoft.com/windowsserversystem/facts /analyses/default.mspx [microsoft.com]
    http://www.gotdotnet.com/team/compare/veritest.asp x [gotdotnet.com] - .NET versus Java

    In short, this is a company paid by Microsoft to make reports/whitepapers that make Microsoft look good. Nothing wrong with that, as long as everyone's aware.

  • by bloodbob (584601) on Saturday May 07, 2005 @02:27AM (#12460498)
    Notice the total lack of the CGI script?
  • I run both at work (Score:2, Interesting)

    by Anonymous Coward on Saturday May 07, 2005 @02:27AM (#12460500)
    And the results are interesting. The Gentoo server doesn't perform nearly as fast as the Windows Server for most basic serving tasks. But software like Exchange Server is so badly written that it's much slower than postfix.

    It's sad. If the same people writing 2k3 were writing products like Exchange, we wouldn't have a need for the Linux server.
  • by Bug-Y2K (126658) on Saturday May 07, 2005 @02:28AM (#12460501) Homepage
    Faster to get infected.
    Faster to get rooted.
    Faster to get used as a warez server.

    Nothing new here.
    • by vcv (526771) on Saturday May 07, 2005 @02:33AM (#12460531)
      I assume you've never used IIS 6.0, which has been out for 2 years. Very, very secure; arguably more so than Apache.

      But why would you believe that? I mean, it's not like it's easy to find out...
      • by team99parody (880782) on Saturday May 07, 2005 @03:04AM (#12460668) Homepage
        "I assume you've never used IIS 6.0 .... Very very secure, easily arguable moreso than apache."

        You're shooting for a Funny mod, right? The biggest "advancement" in IIS 6 is that instead of IIS 5.x, which ran 100% in user mode, IIS 6.x runs partly as a kernel module [certcities.com]:

        With IIS 6, everything changes. To start with, there's a new piece of kernel mode software: Http.sys. This driver, written by Microsoft, is responsible for receiving all IIS-bound TCP/IP traffic from the TCP/IP stack. Running in kernel mode gives the new driver a huge speed advantage
        Which is a cute trick for gaining performance at the expense of security (kinda like the various Linux kernel-web-servers like khttpd).

        "But why would you believe that? I mean it's not like it's easy to find out.."

        Indeed, you are correct that it's not easy to find out. Leading security sites all report that it is NOT more secure, as you allege. For example, here is the current rating of IIS 6 from Secunia (one of the top security companies [slashdot.org]), as opposed to merely your anecdotal rumor:

        "Microsoft Internet Information Services (IIS) 6 with all vendor patches installed and all vendor workarounds applied, is currently affected by one or more Secunia advisories rated Moderately critical"

        In contrast, Apache 2.0.x has the much better rating: "Apache 2.0.x with all vendor patches installed and all vendor workarounds applied, is currently affected by one or more Secunia advisories rated Less critical"
    • by team99parody (880782) on Saturday May 07, 2005 @02:41AM (#12460572) Homepage
      And remember, that the TC0 (0 for 0wnersh1p) [immunitysec.com] is lower for Windows as well (""Immunity's findings clearly show that the best platform for your targets to be running is Microsoft Windows, allowing YOU unparalleled value for THEIR dollar."). For anyone who missed it, /. [slashdot.org] had a lot of great discussion on that one from people who couldn't detect a troll.
  • Fair testing... (Score:2, Informative)

    by Anonymous Coward on Saturday May 07, 2005 @02:28AM (#12460505)
    Reminds me of this editorial on the G5's testing by Veritest. http://spl.haxial.net/apple-powermac-G5/ [haxial.net]
  • by ricky-road-flats (770129) on Saturday May 07, 2005 @02:30AM (#12460512)
    So does that make SMS on Windows faster than morse code on Linux?
  • by guaigean (867316) on Saturday May 07, 2005 @02:30AM (#12460513)
    I wonder if Bill Gates actually believes his own bullshit...
    • Re:One question... (Score:2, Insightful)

      by Nos. (179609) <andrew@t[ ]errs.ca ['hek' in gap]> on Saturday May 07, 2005 @02:37AM (#12460549) Homepage
      Of course not, but that's not the point. Typically, the guys who make the money decisions will read the headlines and glaze over the rest, probably missing details like the fact that this study was paid for by Microsoft. Ever have to hand in a project proposal or such? I've done many; rarely does anything other than the Executive Summary get read. The rest is just there to make the document look good. Microsoft has a very big marketing department. They know this kind of stuff. Do you really think Microsoft would pay for these "studies" if they didn't show a positive return on investment?
  • Not surprising (Score:2, Interesting)

    by hoka (880785) on Saturday May 07, 2005 @02:30AM (#12460514)
    If they were running heavily restricted SELinux on Red Hat, it wouldn't be surprising to see a massive slowdown in certain applications, though the box would likely be infinitely more secure than a Windows box could ever be. Beyond that, Apache can be very slow out of the box: on my hardened Gentoo test system (please withhold funroll-loops jokes) running Apache2 with hardened PHP + MySQL, I'd be lucky to handle 2 requests a second; it was amazingly slow. I've yet to fully tune it, but even some basic tuning improved speeds dramatically. It wouldn't surprise me if similar techniques were used in this "benchmark".
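    The kind of basic tuning the parent describes usually starts in httpd.conf. As a rough sketch (the values below are illustrative guesses, not recommendations; the right numbers depend entirely on your RAM and workload), a prefork-MPM Apache 2.0 box of that era might be tuned along these lines:

    ```apache
    # Keep enough idle children around to absorb bursts without fork storms
    StartServers          8
    MinSpareServers       8
    MaxSpareServers      20
    MaxClients          150     # bounded by RAM: each child holds a PHP interpreter
    MaxRequestsPerChild 4000    # recycle children to cap memory leaks

    # KeepAlive trades a little memory for far fewer TCP handshakes
    KeepAlive            On
    MaxKeepAliveRequests 100
    KeepAliveTimeout     5

    # Avoid per-request filesystem walks and DNS lookups
    AllowOverride        None   # skip .htaccess checks on every path component
    HostnameLookups      Off
    ```

    Leaving AllowOverride and HostnameLookups at unfriendly settings is exactly the sort of thing a sponsored benchmark can quietly "forget" to fix on one side.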
  • by Umbral Blot (737704) on Saturday May 07, 2005 @02:31AM (#12460517) Homepage
    What possibly possessed them to publish these results? No one in their right mind is going to believe 300% is an accurate figure under fair testing conditions.
  • by PaulQuinn (171592) on Saturday May 07, 2005 @02:33AM (#12460530)
    Why couldn't IIS be faster than Apache?
    Is Apache/Linux the "end-all-be-all, there is nothing that can be better so let's stop trying" type of quality?
    Are the guys who work at Microsoft a bunch of idiots that anyone can out-program?

    I'm sure IIS is better at some things, maybe more things, maybe less.

    Who cares! I don't think stats like these are why anyone chooses Apache/Linux over IIS/Windows.
    • by HairyCanary (688865) on Saturday May 07, 2005 @02:41AM (#12460571)
      It very well could be. However, let's try 1) an independent test, paid for by neither competitor; 2) the most recent version of IIS against the most recent version of Apache; and 3) the most recent version of Windows against the most recent version of Linux. I can guarantee a win in any test so long as I am allowed to dictate all of the conditions. I wonder how many combinations they tried before they found one that IIS6 could beat?
    • by ArbitraryConstant (763964) on Saturday May 07, 2005 @03:10AM (#12460699) Homepage
      300% is pretty hard to believe.

      Apache was never optimized for serving lots of small, static files so I can easily believe it falling behind in some benchmarks, but not 300%.

      It doesn't take much computer to saturate a lot of bandwidth, which is why most people don't care, but big sites will often have a Zeus (or similar) server set up for serving images precisely because Apache isn't as good for that. But you've got to be huge before you get to that point.

      Dynamic content put Apache where it is. It has the support, the tools, the libraries, and the widespread expertise to do dynamic content pretty damn well. It's not better than everyone at everything there either, but it's a very good solution for most cases.
    • by julesh (229690) on Saturday May 07, 2005 @05:08AM (#12461054)
      Why couldn't IIS be faster than Apache?

      It could be. However, this test is severely flawed: they performed registry-level optimisations on the Windows setup, yet equivalent, well-documented optimisations for Linux were not performed. Therefore, we don't know.

    • by Arimus (198136) on Saturday May 07, 2005 @05:10AM (#12461061)
      We'll be reasonable when companies carrying out these kinds of performance studies compare apples with apples (and ideally with no hint of GM involved):

      1. Use identical hardware...
      2. Use the default un-optimized settings...
      3. Hand tune using experts on the software under test...
      4. Rerun the identical tests...
      5. Ensure that clients used to test server software are identically configured.

      That would be being reasonable...

  • by big_groo (237634) <groovis&gmail,com> on Saturday May 07, 2005 @02:35AM (#12460538) Homepage
    Hey slashtwats... when the 'study' comes from MIT, or Berkeley, or Waterloo, or somewhere else unbiased (or with some semblance of credibility, for that matter)... wake me up, m'kay?
  • I'll test the amazing Linux versus the ultra-slow windows NT.

    Config:
    Linux: Latest Redhat running on Opteron 4GHz
    Windows: Windows 3.1 running on a Pentium 100.

    And the winner is...?[/sarcasm]
  • by FidelCatsro (861135) <fidelcatsroNO@SPAMgmail.com> on Saturday May 07, 2005 @02:40AM (#12460564) Journal
    Windows 2003 Server running on Skynet is 300% faster than ye olde Red Hat -12 edition from 1723 running on an abacus.
    This reliable, expensive test was paid for by Microsoft to show how much better Windows 2003 Server is (the payment came with a clause stating as much).
  • by tempest303 (259600) <jensknutson@ y a h o o .com> on Saturday May 07, 2005 @02:40AM (#12460568) Homepage
    Right at the top of the PDF it says "April 2003". How is this benchmark "news"? (And never mind the fact that, as always with an MS-sponsored benchmark, the MS machine was probably hand-tuned while RH + Apache was probably run in a stock configuration.)

    While sheer performance isn't really what sells RHEL boxes, I'd be very interested to see a proper test of Win2k3 vs RHEL 4 on identical hardware...
  • This is new? (Score:5, Informative)

    by louarnkoz (805588) on Saturday May 07, 2005 @02:43AM (#12460581)
    The web page says it was published May 5, 2004, i.e. a year ago. The report itself is dated from April 2003. The test was done using RH advanced server 2 and Windows 2003 RC2, i.e. a pre-release version. Since then, both RH and Microsoft have published new releases, for example the service pack 1 of Windows 2003. Why is this posted now?
  • by Percy_Blakeney (542178) on Saturday May 07, 2005 @02:45AM (#12460589) Homepage
    Not only does the linked page say it was published in mid-2004, but the study itself is from early 2003. How does this qualify as a 'recent' study? Just because someone read it for the first time today doesn't mean it was created today...

    Sheesh -- with such outdated news, I almost felt like I was reading the newspaper or something.

  • What about Norton? (Score:3, Insightful)

    by qualico (731143) <worldcouchsurfer.gmail@com> on Saturday May 07, 2005 @02:51AM (#12460615) Journal
    Wonder what that benchmark would be if you installed the FULL Norton package on it?

    This bull reminds me of those advertisements for weight loss.

    BEFORE................AFTER
    Stick stomach out....Suck stomach in
    White......................Tanned
    No cosmetics..........New facial
    Front shot...............Side shot
    Grubby clothes........New fashions

  • by pg110404 (836120) on Saturday May 07, 2005 @03:15AM (#12460717)
    I've been around on the net for a while now, and if there's one thing I can say is universal, it's that servers running ASP are generally flakier than other kinds of servers.

    I use tvlistings2.zap2it.com which has ASP, and while I think they've gotten far better in the recent past, even 4 or 5 months ago, it would routinely lose my channel line up and if I'd try to log in to reset the cookie it would claim my login account doesn't exist. I'd follow their suggestion and try recreating the account and it said it was already in use. But I can't log in because it doesn't exist, but I can't recreate it because it already exists, but I can't log in because it doesn't exist.......

    Anyway, I notice time and time again how sites that churn out ASP pages have typically slower response times compared to ones that have PHP or straight static HTML. For anyone who wonders how I determine that, I go to load a web page, and I wait for it to load. If it starts taking a while and I mean a really long while, I look at the URL and more often than not, I'll see it has a reference to an ASP. Maybe the "oh it's another one of those stupid IIS servers" makes it stick out in my mind more than "wait, this one is slow. I don't really know what's running it but it's crap", but if I had to put money on it, I'd say the IIS servers are generally slower.

    I don't run a web server. I could, but I don't; managing web servers is not a job I'd want. Almost all of my web server experience is from the visitor side, and without any overtly blatant bias from any source (the "Windows crashes, therefore Windows is evil and anything dealing with Windows is also evil" kind) affecting my opinion, I'd have to say that I personally experience noticeably worse performance and reliability visiting web sites that run IIS than sites that don't appear to run it. So to me, a report like this is Microsoft's ever so polite way of sticking an uncomfortably large tube up my ass and then proceeding to blow smoke through the opening.
  • by grcumb (781340) on Saturday May 07, 2005 @03:17AM (#12460730) Homepage Journal

    People keep saying, "When are we going to get a real benchmark?" Well, why don't we roll our own? Seriously.

    Here's my idea:

    Slashdot has strong zealot^H^H^H^H^H^Hsupporters for both Microsoft and Linux. Let's have a contest to select the best qualified from each side, have them work in teams on identical hardware. Let them make any changes, tweaks or optimisations they can dream up. Then, let 'em rip.

    I'm dead serious about this, by the way. Let's get off this endless roundabout and for once make a clear comparison.

    For bonus points, once the first contest is finished, we should take the two servers, leave them exposed to the Internet and see which one gets 0wned first. 8^)

  • by Trejkaz (615352) on Saturday May 07, 2005 @03:31AM (#12460769) Homepage

    Microsoft argues that Apache is slower because CGI is slower: it needs to spawn a new process for each request, which is correct.

    But how many years have mod_perl and mod_php been around now? Does anyone actually use CGI on Apache this decade?

    Perhaps a more fair comparison would have compared CGI on IIS with CGI on Apache. And I'm pretty sure that for various reasons (spawning processes is slower on Win32 than on Linux) IIS would lose horribly.

  • by Fefe (6964) on Saturday May 07, 2005 @03:55AM (#12460841) Homepage
    It's ridiculous how the Slashdot crowd is falling victim to Pavlov again.

    If someone publishes a benchmark of your software and finds that your software does not perform well, don't whine, don't behave like a child, don't start kicking and screaming, don't tear your hair out. Behave professionally.

    Good starting points:

    • Does their test setup matter?
    • Can their number possibly be true?
    • What weak spots about the competition does their test reveal?
    • What can we do to improve the results?


    Let me summarize what I think about their test. First of all, I believe their numbers. Apache sucks performance-wise, in particular if you run a busy site with dynamic content. That's why people run squid in local accelerator mode in front of Apache. This is a good indication that some performance tuning is in order. But no, people would rather wait for Microsoft to find out, and only then start thinking about fixing it.

    If this test were meant to be unfair FUD, they would not have tested TUX, just Apache.

    But now to my questions above:

    Question 1: is their setup relevant?

    No. Sites that answer more than 5000 requests per second are not using a single web server; they are using a load balancer and a cluster.

    Question 2: Can their numbers possibly be true?

    The point I find least believable is that IIS had better CGI performance than Apache. Creating a process is really slow on Windows. Their result should be independently verified.

    Question 3: What weak spots about the competition does their test reveal?

    They did not test a single-CPU webserver (which is what almost everyone is using).

    They did not test FastCGI or APAPI dynamic web pages.

    So if we wanted to do a more balanced review, we would look at these.

    Question 4: What can we do to improve the results?

    Document APAPI better, I'd say. Almost nobody is writing their dynamic web page modules with APAPI.
    Everyone is using PHP or mod_perl. Benchmark Apache in real-world scenarios. Document best practices.
    • by NickFortune (613926) on Saturday May 07, 2005 @10:58AM (#12462111) Homepage Journal
      Does the test setup matter? Apparently Veritest thought so: they spent some time tweaking both machines. It seems the tweaks sped up Windows and slowed down Linux.

      I have to applaud the way you take a positive stance and look at how Apache can be improved. I expect efforts in that direction form an ongoing part of Apache development, but the positive attitude is appreciated. It's just a bit sad that your post reads as an endorsement of a blatant piece of paid-for propaganda.

  • by j1m+5n0w (749199) on Saturday May 07, 2005 @04:46AM (#12460975) Homepage Journal
    Googling around, I found these benchmarks [litespeedtech.com] published by litespeed (who apparently put out their own web server, which (big surprise) they found to beat most of the competitors in most of the tests). Interesting numbers. According to their results, apache really is slow. IIS did a bit better. TUX was extremely fast serving small static files. In one test, they have apache 2.0.52 serving 4673 files per second, compared to 33025 for IIS 6.0 and 53304 per second for TUX.

    I don't know if these numbers are trustworthy, but at least it's another data point.

  • The red flag (Score:3, Insightful)

    by jav1231 (539129) on Saturday May 07, 2005 @08:03AM (#12461466)
    The red flag here is "300%." I don't think anyone can take it seriously with such a large disparity. That's like two hybrid car makers competing, and one salesman telling a customer, "Yeah, theirs gets 50mpg, but ours gets a bagillion-zillion!"
  • by wowbagger (69688) on Saturday May 07, 2005 @10:41AM (#12462035) Homepage Journal
    There is only one way you can get a "fair" test in a situation like this:

    1. Let Microsoft come up with a set of tests to be applied.
    2. Let RedHat come up with a set of tests to be applied.
    3. Compute the union of the two sets of tests.
    4. Let Microsoft specify the target cost of the hardware they want to benchmark on (C1).
    5. Let Redhat specify the target cost of the hardware they want to benchmark on (C2).
    6. Take the geometric mean of the two hardware costs (C=sqrt(C1*C2))
    7. Given C, let Microsoft determine the hardware to be benchmarked on, given the assumption that the purchaser of the hardware will be a third party buying from standard sources (e.g. NewEgg, Dell, IBM, whatever - but not eBay or the like - new hardware only).
    8. Again, given C, let RedHat determine the hardware to be purchased - new hardware only from recognized sources.
    9. Third party buys both sets of hardware and delivers it to RedHat and Microsoft.
    10. Microsoft provides detailed setup and configuration instructions for the test. Microsoft may have access to the hardware for the purposes of determining these settings. Setups are NOT allowed to use non-publicly available code (i.e. applying released service packs is OK, applying custom service packs is NOT allowed).
    11. RedHat provides detailed setup and configuration instructions. RedHat may have access to the hardware for the purposes of determining these settings. Setups are NOT allowed to use non-publicly available code (i.e. released updates via up2date are allowed, but custom code is NOT allowed.)
    12. Both sides return the hardware to the third party, who then verifies the hardware had not been modified (alternatively, the third party purchases 2 sets of hardware for each side, keeping one set.)
    13. Third party performs the installs as per the instructions for both sides.
    14. Third party performs the tests.
    15. Test results, hardware spec, and setup instructions are posted.
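
    Step 6 is simple to illustrate. With hypothetical budgets (the dollar figures below are made up for the example), the geometric mean keeps either side from skewing the shared budget by quoting an extreme number:

    ```python
    from math import sqrt

    # Hypothetical target hardware budgets from each side (steps 4-5)
    c1 = 10_000.0  # e.g. Microsoft's proposed cost, USD
    c2 = 6_400.0   # e.g. Red Hat's proposed cost, USD

    # Step 6: geometric mean of the two costs
    c = sqrt(c1 * c2)
    print(c)  # 8000.0

    # The arithmetic mean here would be 8200.0. The geometric mean responds
    # multiplicatively, so doubling your own bid pulls the shared budget up
    # by only sqrt(2), not by half your increase.
    ```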


    This way, each side may tweak their setup to the max, using all specialized knowledge, to get maximum performance. Since each side may run the optimal hardware configuration (given price restrictions), the practice of hobbling the other side by picking ill-supported hardware is prevented.

    This test best conforms to the sort of thing an end user would do - pick the best bang for the buck for the budget and task at hand.

    Now, this might result in a dual Itanium server (Windows) being benchmarked against a dual Power server (Linux) (or some other comparison), but that is "fair" in that both sides are running on the same COST hardware.

    True, each side might "release" a new (service pack|set of RPMs) for the purposes of the test, but as long as those releases are publicly available, who cares? We all benefit from the improvement of the code.
