
Red Hat/Apache Slower Than Windows Server 2003?

phantomfive writes "In a recent test by a company called Veritest, the Windows Server 2003 web server delivered up to 300% higher throughput than Red Hat Linux running Apache. Veritest used WebBench to do their testing. Since the test was commissioned by Microsoft, is this just more FUD from a company with a long history of it? Or are the results valid this time? The study can be found here."
This discussion has been archived. No new comments can be posted.

  • I run both at work (Score:2, Interesting)

    by Anonymous Coward on Saturday May 07, 2005 @02:27AM (#12460500)
    And the results are interesting. The Gentoo server doesn't perform nearly as well as the Windows server for most basic serving tasks. But software like Exchange Server is so badly written that it's much slower than Postfix.

    It's sad. If the same people who wrote 2k3 were writing products like Exchange, we wouldn't need the Linux server.
  • Re:Easy (Score:1, Interesting)

    by Anonymous Coward on Saturday May 07, 2005 @02:28AM (#12460507)
    Does IIS have to be tweaked like this?
  • Not surprising (Score:2, Interesting)

    by hoka ( 880785 ) on Saturday May 07, 2005 @02:30AM (#12460514)
    If they were running heavily restricted SELinux on Red Hat, it wouldn't be surprising to see a massive slowdown in certain applications, though the box would likely be infinitely more secure than a Windows box could ever be. Beyond that, Apache can be very slow out of the box: on my hardened Gentoo test system (please withhold the funroll-loops jokes) running Apache2 with hardened PHP + MySQL, I would be lucky to handle 2 requests a second; it was amazingly slow. I have yet to tune it fully, but even some basic tuning improved speeds dramatically. It wouldn't surprise me if similar techniques were used in this "benchmark".
  • by team99parody ( 880782 ) on Saturday May 07, 2005 @02:41AM (#12460572) Homepage
    And remember that the TC0 (0 for 0wnersh1p) [immunitysec.com] is lower for Windows as well ("Immunity's findings clearly show that the best platform for your targets to be running is Microsoft Windows, allowing YOU unparalleled value for THEIR dollar."). For anyone who missed it, /. [slashdot.org] had a lot of great discussion on that one from people who couldn't detect a troll.
  • by Anonymous Coward on Saturday May 07, 2005 @02:48AM (#12460606)
    The 300% faster figure was from a static file test. Since IIS 6 can serve static content from kernel mode, it can go much faster than Apache. TUX can also serve content from kernel mode; against TUX, IIS was only 160% faster with 8 CPUs and only 12% faster with 1 CPU, though TUX didn't scale (4 CPUs was faster than 8).

    Keep in mind this report is from 2 years ago.

    dom
  • by NekoXP ( 67564 ) on Saturday May 07, 2005 @02:56AM (#12460636) Homepage

    2003 has kernel-level webserver acceleration and offloads a lot of the processing there, the same way the Tux webserver (also Red Hat's?) beat the shit out of Apache. It's essentially zero-copy networking with zero-copy webserving too.

    http://www1.us.dell.com/content/topics/global.aspx/power/en/ps1q01_redhat?c=us&cs=555&l=en&s=biz [dell.com]

    There may be some truth in it, therefore. Aren't there some patches these days to hook Apache directly into the Linux kernel too, since Tux is obsolete? I doubt they ship with Red Hat's stock system, though, even if they exist.
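    The zero-copy serving described above can be sketched in userspace. This is a minimal illustration, not the actual TUX or http.sys code: Python's socket.sendfile() uses the sendfile(2) syscall where available, so the file body is copied kernel-to-kernel and never enters the application's buffers. The port number and file path are arbitrary choices for the example.

        import os
        import socket

        def serve_file_once(path, port=8080):
            """Answer one HTTP request with a file, using zero-copy sendfile."""
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            listener.bind(("", port))
            listener.listen(1)
            conn, _ = listener.accept()
            conn.recv(4096)  # read (and ignore) the request for this sketch
            size = os.path.getsize(path)
            conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % size)
            with open(path, "rb") as f:
                # sendfile(2): pages go straight from the page cache to the
                # socket; the payload never passes through userspace buffers.
                conn.sendfile(f)
            conn.close()
            listener.close()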

  • by thaig ( 415462 ) on Saturday May 07, 2005 @03:13AM (#12460711) Homepage
    Every time you mention a company's name you are giving it free advertising. It seems that for commercial entities it is better to be well known than well liked.

    So why shouldn't people deny that freebie by refusing to use the exact name?

    Regards,

    Tim
  • by pg110404 ( 836120 ) on Saturday May 07, 2005 @03:15AM (#12460717)
    I've been around on the net for a while now, and if there's one thing I can say that's universal, it's that servers running ASP are generally flakier than other kinds.

    I use tvlistings2.zap2it.com, which uses ASP, and while I think they've gotten far better recently, even 4 or 5 months ago it would routinely lose my channel lineup, and if I tried to log in to reset the cookie, it would claim my login account didn't exist. I'd follow their suggestion and try recreating the account, and it would say the name was already in use. So I can't log in because the account doesn't exist, but I can't recreate it because it already exists, but I can't log in because it doesn't exist...

    Anyway, I notice time and time again that sites churning out ASP pages typically have slower response times than ones running PHP or straight static HTML. For anyone who wonders how I determine that: I go to load a web page and wait for it to load. If it starts taking a while, and I mean a really long while, I look at the URL, and more often than not I'll see it references an ASP page. Maybe the "oh, it's another one of those stupid IIS servers" reaction makes it stick out in my mind more than "wait, this one is slow; I don't really know what's running it, but it's crap", but if I had to put money on it, I'd say the IIS servers are generally slower.

    I don't run a web server. I could, but I don't; managing web servers would not be a job I'd want. Almost all of my web server experience is on the visitor side, and without any overtly blatant bias from any source (like the "Windows crashes, therefore Windows is evil and anything dealing with Windows is also evil" kind) to affect my opinion, I'd have to say I personally experience significantly worse performance and reliability visiting web sites that run IIS than sites that don't appear to. So to me, a report like this is Microsoft's ever so polite way of trying to stick an uncomfortably large tube up my ass and then proceeding to blow smoke through the opening.
  • by team99parody ( 880782 ) on Saturday May 07, 2005 @03:15AM (#12460722) Homepage
    Parent wrote: Google should switch then.

    Anyone do the math to see what that would cost?

    It's conventional wisdom that Google has about 100,000 servers. If Google went with Windows Server 2003 Enterprise Edition (which costs $3999 [microsoft.com]), that would cost Google about half a billion dollars.

    Extending the logic to use SQL Server Enterprise Edition as their search database, at $25000/server, the price would go up by about $2.5 billion.
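    A quick sketch of the arithmetic above, using the licence prices as quoted in this comment (the 100,000-server figure is the "conventional wisdom" estimate, not a confirmed number):

        servers = 100_000
        win2k3_enterprise = 3_999  # USD per server, as quoted above
        sql_enterprise = 25_000    # USD per server, as quoted above

        print(f"OS licences:  ${servers * win2k3_enterprise / 1e9:.2f} billion")  # ~$0.40 billion
        print(f"SQL licences: ${servers * sql_enterprise / 1e9:.2f} billion")     # ~$2.50 billion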

    Every CEO likes to be like Google and likes talking about numbers like billions of dollars, so this is a fun set of numbers to throw around when you're discussing Microsoft partnerships with the CEO.

    (Note, however, that in the true spirit of Team99, I must say that Longhorn will make it well worth the price, and I wouldn't be surprised to see Google switch.)

  • by Mad Merlin ( 837387 ) on Saturday May 07, 2005 @03:30AM (#12460765) Homepage
    Um, let's say you have a 200MHz 486... 300% better is 600MHz. Shock! I found one!

    And you let me know when you find a 200MHz 486, OK?

  • by darkain ( 749283 ) on Saturday May 07, 2005 @03:30AM (#12460766) Homepage
    I personally run a Windows-based server (yes, hate me if you will, but I need some Windows-only tools at the moment). I used IIS for about 3 to 4 years, until I got heavily into PHP development, running a source control system, and game hosting. I switched from IIS to Apache because it had better support for virtualizing directories based on conditions in easy-to-set-up script files, which made it easy for me to run the UT2004 server plus the mod download server on the same box. This turned out to be a big hit at LAN parties, since the server had all of the packages and would share directly from the server folders (while restricting the server's config files from anonymous access). I later switched to SVN for storing my programming projects, and its integration with Apache is great.

    I am a Microsoft OS user by nature. I switched to using Apache on my Windows server because of features IIS lacked, and now I'm never turning back.

    "I am Darkain... and I'm a coder"
  • by Trejkaz ( 615352 ) on Saturday May 07, 2005 @03:31AM (#12460769) Homepage

    Microsoft argue that Apache is slower because CGI is slower. They say that it needs to spawn a new process for each request, which is correct.

    But how many years have mod_perl and mod_php been around now? Does anyone actually use CGI on Apache this decade?

    Perhaps a fairer comparison would have been CGI on IIS against CGI on Apache. And I'm pretty sure that for various reasons (spawning processes is slower on Win32 than on Linux) IIS would lose horribly.
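    The gap this comment points at is easy to see in miniature. A rough sketch, not a real benchmark: a CGI-style handler pays a process launch per request, while a resident mod_perl/mod_php-style handler does not. The request count and the trivial "handler" are made up for illustration.

        import subprocess
        import sys
        import time

        N = 100

        # CGI-style: one interpreter launch per "request"
        start = time.perf_counter()
        for _ in range(N):
            subprocess.run([sys.executable, "-c", "print('hello')"],
                           capture_output=True, check=True)
        cgi_elapsed = time.perf_counter() - start

        # Module-style: the handler stays resident in the server process
        start = time.perf_counter()
        for _ in range(N):
            _ = "hello"
        module_elapsed = time.perf_counter() - start

        print(f"CGI-style:    {cgi_elapsed:.2f}s for {N} requests")
        print(f"module-style: {module_elapsed:.6f}s for {N} requests")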

  • objectively? (Score:3, Interesting)

    by Infonaut ( 96956 ) <infonaut@gmail.com> on Saturday May 07, 2005 @03:40AM (#12460791) Homepage Journal
    I'll even grant that because it was commissioned by MS, a little extra scrutiny is certainly due; but summarily discarding the study simply for this reason is the intellectual equivalent of sticking our fingers in our ears and screaming "lalalalalala" at the top of our lungs.

    Actually, it's learned behavior. We've seen so many fact-warping MS-sponsored studies, astroturfing campaigns, dissembling regarding the nature of their monopoly, and other aggressive PR that it's no wonder people are more than a little skeptical.

    This reminds me of something someone told me about graphics card benchmarks. He is a 3D graphics professional, and he was called in by a rather large chip company to help them in a test against another large chip maker's video card. The arrangement was that he would work with the representative from the other company to come up with a "fair" set of tests to which both sides could agree.

    As the more experienced guy, he was able to get his counterpart to agree to tests that worked squarely in favor of his company's card. This was in a scenario that was supposed to be evenhanded, since both companies agreed to the test methodology.

    So it's bad enough already. Compare a situation like that to one in which Microsoft is commissioning a study, and you can imagine why people react with such profound skepticism.

  • Missing Link (Score:2, Interesting)

    by hritcu ( 871613 ) on Saturday May 07, 2005 @03:59AM (#12460854) Homepage
    http://news.netcraft.com/archives/2005/05/01/may_2005_web_server_survey.html [netcraft.com]

    Allowed HTML: ... <a> ...
    Can anyone tell me how to use that?
    <a href="...">...</a> does not work.

  • by j1m+5n0w ( 749199 ) on Saturday May 07, 2005 @04:46AM (#12460975) Homepage Journal
    Googling around, I found these benchmarks [litespeedtech.com] published by LiteSpeed (who apparently put out their own web server, which, big surprise, they found to beat most of the competitors in most of the tests). Interesting numbers. According to their results, Apache really is slow. IIS did a bit better. TUX was extremely fast at serving small static files. In one test, they have Apache 2.0.52 serving 4,673 files per second, compared to 33,025 for IIS 6.0 and 53,304 for TUX.

    I don't know if these numbers are trustworthy, but at least it's another data point.

  • by julesh ( 229690 ) on Saturday May 07, 2005 @04:57AM (#12461005)
    I've actually seen this ridiculously unfair test before. The main thing wrong with it is that last-access-time recording is switched off on the Windows setup but not the Linux one. For web serving, which typically involves large numbers of accesses to small files, last-access-time recording is a _severe_ performance drag.
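    A small sketch of why that matters: with access-time recording on, every read of a file can trigger a metadata write to update the timestamp, so a server reading thousands of small files is also doing thousands of writes. On Linux, the noatime mount option turns this off; whether the snippet below actually shows a change depends on mount options such as relatime. The file path is just an example.

        import os

        path = "/etc/hostname"  # any readable file on an atime-enabled mount

        before = os.stat(path).st_atime
        with open(path, "rb") as f:
            f.read()  # this read may force the kernel to write a new atime
        after = os.stat(path).st_atime

        print("atime before:", before)
        print("atime after: ", after)  # later than 'before' if atime is on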
  • by incabulos ( 55835 ) on Saturday May 07, 2005 @05:11AM (#12461064)
    Why couldn't IIS be faster than Apache?

    No reason. However, going by historical benchmark precedents, and with the assumption that open-source applications improve at a faster rate than their proprietary competition, I find the claim rather improbable.

    Is Apache/Linux the "end-all-be-all, there is nothing that can be better so let's stop trying" type of quality?

    Nope. It's merely the best we have right now; there is always room for improvement.

    Are the guys who work at Microsoft a bunch of idiots that anyone can out-program?

    No, but they are coders forced to work with antiquated interfaces and inbred development tools, in secrecy, using a clunky, weak OS with a decade of accumulated garbage under the hood. An OS that has evolved due to marketing and legal imperatives (gosh, we had better make IE an essential part of the OS, just like we claimed in court!) rather than technical, performance, or security goals.

    I'm sure IIS is better at some things, maybe more things, maybe less.

    Yeah, it runs .ASP and ActiveX better than Linux/Apache, or any other OS/webserver combination! *Golf clap for IIS*
  • by mattyrobinson69 ( 751521 ) on Saturday May 07, 2005 @05:40AM (#12461136)
    That'd be cool: have a load-balancing firewall pass every other connection to the Apache box, and see if the IIS box catches fire.
  • They are looking at the methodology objectively, and have in the past. The deal is that MS keeps rolling out this same study, using the same methodology, and the methodology isn't honest:

    a) they use a slower kind of encryption on the Apache side, which makes Apache seem slower.

    b) they use a 2003 version of Red Hat with a 2.6 kernel, whereas Linux has since moved on to newer versions.

    c) they make other tuning decisions for the RH they do use in order to slow it down, and to speed Microsoft up.

    In short, the test is rigged so that MS wins and Linux loses. It is that simple.
  • by Anonymous Coward on Saturday May 07, 2005 @07:18AM (#12461352)
    Even so, imagine a meeting at a company about to buy some web servers:

    Right person: We should use Apache because it's faster, easier to manage and install, more secure, and less costly. Furthermore, our technical department has long experience with this server.

    Clown (pulls the benchmark out of his pants): Hey! See this benchmark here! It says IIS is 300% faster. What do you have to say?

    Other persons (seeing the pretty pictures and nodding).

    Right person (stunned and mumbling): But they are comparing ISAPI with CGI, and it's 3 years old; I don't think it's valid.

    Other persons (looking confused and restless): I want to go home. Let's buy Microsoft and get this nonsense over with already.

    So this totally biased benchmark has served a purpose, by steering yet another clueless customer to IIS.
  • by Haydn Fenton ( 752330 ) <no.spam.for.haydn@gmail.com> on Saturday May 07, 2005 @09:39AM (#12461741)
    Slashdot, 7th of May 2005.

    • Linux: Red Hat/Apache Slower Than Windows Server 2003?
      Posted by Zonk on Saturday May 07, @06:20
      from the who-doesn't-love-some-delicious-fud dept.
      phantomfive writes "In a recent test by a company called Veritest, Windows 2003 web server performs up to 300% higher throughput than Red Hat Linux running with Apache. Veritest used webbench to do there testing. Since the test was commisioned by Microsoft, is this just more FUD from a company with a long history? Or are the results valid this time? The study can be found here."


    Slashdot, 11th of May 2005.

    • Microsoft: 2k3 Server vs RedHat\Apache
      Posted by Michael on Wednesday May 11, @09:01
      from the oops-they-did-it-again department.
      fooslashbardot writes "Well, it looks like the suits at Redmond have done it again with last week's test stating that Windows 2003 Server outperforms RedHat\Apache by 300%. We knew the test had been commissioned by Microsoft, and now a recent Wired article has arisen which claims that Mr. Gates himself was seen slipping the people at Veritest wads of up to 10,000 hundred-dollar bills shortly before the announcements were made. Gates has denied all such claims, and says that Ballmer smells of cheese."


    I've never used either, and I don't know anything about Veritest, so I haven't a clue whether the results are likely to be correct. But we all know Microsoft :P
  • by wowbagger ( 69688 ) on Saturday May 07, 2005 @10:41AM (#12462035) Homepage Journal
    There is only one way you can get a "fair" test in a situation like this:

    1. Let Microsoft come up with a set of tests to be applied.
    2. Let RedHat come up with a set of tests to be applied.
    3. Compute the union of the two sets of tests.
    4. Let Microsoft specify the target cost of the hardware they want to benchmark on (C1).
    5. Let Redhat specify the target cost of the hardware they want to benchmark on (C2).
    6. Take the geometric mean of the two hardware costs (C=sqrt(C1*C2))
    7. Given C, let Microsoft determine the hardware to be benchmarked on, given the assumption that the purchaser of the hardware will be a third party buying from standard sources (e.g. NewEgg, Dell, IBM, whatever - but not eBay or the like - new hardware only).
    8. Again, given C, let RedHat determine the hardware to be purchased - new hardware only from recognized sources.
    9. Third party buys both sets of hardware and delivers it to RedHat and Microsoft.
    10. Microsoft provides detailed setup and configuration instructions for the test. Microsoft may have access to the hardware for the purposes of determining these settings. Setups are NOT allowed to use non-publicly available code (i.e. applying released service packs is OK, applying custom service packs is NOT allowed).
    11. RedHat provides detailed setup and configuration instructions. RedHat may have access to the hardware for the purposes of determining these settings. Setups are NOT allowed to use non-publicly available code (i.e. released updates via up2date are allowed, but custom code is NOT allowed.)
    12. Both sides return the hardware to the third party, who then verifies the hardware had not been modified (alternatively, the third party purchases 2 sets of hardware for each side, keeping one set.)
    13. Third party performs the installs as per the instructions for both sides.
    14. Third party performs the tests.
    15. Test results, hardware spec, and setup instructions are posted.


    This way, each side may tweak their setup to the max, using all specialized knowledge, to get maximum performance. Since each side may run the optimal hardware configuration (given price restrictions), the practice of hobbling the other side by picking ill-supported hardware is prevented.

    This test best conforms to the sort of thing an end user would do - pick the best bang for the buck for the budget and task at hand.

    Now, this might result in a dual Itanium server (Windows) being benchmarked against a dual Power server (Linux) (or some other comparison), but that is "fair" in that both sides are running on the same COST hardware.

    True, each side might "release" a new (service pack|set of RPMs) for the purposes of the test, but as long as those releases are publicly available, who cares? We all benefit from the improvement of the code.
  • Re:I like it. (Score:5, Interesting)

    by barneyfoo ( 80862 ) on Saturday May 07, 2005 @11:09AM (#12462162)
    Actually the ultimate test would be for an independent party to sponsor a challenge.

    Each team (Windows and Linux) would get:

    $5,000 in cash with which to buy hardware and software. All purchases must carry a receipt and all parts must run to spec. No overclocking.

    Guaranteed five-nines power.

    Each team's computer will be housed in the same independent facility, maintained by the sponsor.

    The contest can last no longer than a year. Each team will be able to maintain their own server throughout the competition.

    The scoring will be simple. You won't lose points for having downtime. Your score is simply the number of server pages (the kind to be determined) you've properly served before your first moment of downtime. So if your server crashes before the year is over, the number of pages served up to that point is your score.

    Maybe someone has an idea for a good server test to run.
  • Re:I like it. (Score:3, Interesting)

    by fireboy1919 ( 257783 ) <rustyp AT freeshell DOT org> on Saturday May 07, 2005 @12:32PM (#12462555) Homepage Journal
    No, that's still no good.

    Then you could be dealing with luck. You happen to get a bad batch of RAM and your server crashes? Sucks for you; the other guy wins. Somebody decides to help the other team win by DDoSing you? Sucks; the other team wins. Random lightning strike? You see the problem?

    Plus it makes stability the ultimate concern rather than (possibly) throughput, which is clearly a benchmark in favor of Linux, since the OS itself is simply better designed (if for no other reason than that they replace the worn-out parts more often). If you go down for a minute every day, but only for a minute, will anyone care?

    Most likely not. Incidentally, that's about the length of time it takes me to restart my Apache install. Heck, I could run Apache from xinetd without too much trouble, which to me is kind of cheating.

    A better idea would be to separate these into two separate scores: one for uptime characteristics (including recovery time), and one for throughput.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Saturday May 07, 2005 @02:12PM (#12463070)
    The "big players" can contribute cash, but not hardware. It is too easy for them to contribute hardware specifically enhanced for their product.

    #1. Each team gets X dollars and no restrictions on what it can buy. After all, that should be how businesses run their shops. We aren't comparing hardware, but total systems.

    #2. Each team must purchase the software off the shelf.

    #3. No team is allowed to recompile anything or to use any drivers, etc. that haven't been available from a public server for the past 12 months. This might sound like a bad deal for Linux, but it will also stop Microsoft from rewriting the drivers. Again, most companies do not have access to that level of expertise, so that won't be allowed.

    #4. Each tweak or configuration setting must be documented and a reference for it shown on a public website or manual. Again, businesses only know what they can read.

    #5. At the end of the competition, the other teams will critique each team's configuration. We've all seen the "tests" where Windows is running on a RAID 0 array, which is beyond stupid for real production work.

    That way, each team can deploy the best system they can think of for the test. I'm sure you all remember Mindcraft and their massive single-server "test" for webservers, when anyone else would have run multiple cheaper servers and gotten higher throughput.

    So, a test is run: the Windows team buys the biggest single system they can afford for the money, while the Linux team fields a dozen boxes booting from CD and one storage box.

    Which system would be "better"?

    Which system would be faster? Would that be the same answer under different loads?

    Which system would be easier to maintain?

    Which system would have higher uptime?

    Which system would be easier to scale up?
