c't NT vs Linux benchmarks: Linux wins

Anonymous Coward writes "Go check out this benchmark of Linux vs NT in a real-life situation. c't makes a pretty good point here, showing Linux/Apache to be ahead of NT in performance in daily life! It also compliments the Linux community for its responsiveness: "Emails to the respective [Linux] mailing lists even resulted in special kernel patches which significantly increased performance." This is the c't benchmark that's been bouncing around lately, translated into English for all of the German-impaired out there.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Posted by viperx2:

    A German magazine was outraged by these false claims, got the newest, fastest version of German Linux, which I consider to be the best, fastest and most reliable Linux, and did real-world tests to see what happened. I have to hand it to the Germans! Kudos!

    But what happens when NT5 comes! AHHH!!!! Get to work open source and freeware guys!

    Viper-X
  • Was that before or after National Lampoon's "Cohen the Boy-barian"?

  • I would like to see a benchmark between systems that people might actually buy for $40k. I know Intel is happy when people buy their $3000+ CPUs, but I think if a company is going to spend $40k on webservers they are more likely to get five $8000 computers. $8k still buys a lot of computer. You can get machines with 512MB and dual PIIIs. It seems like a waste to throw 4 NICs on a single machine when four smaller, much cheaper machines can still do better.

    I am no expert, never having run a huge (slashdot or linux.com sized) web site, but I was a sysadmin at a small ISP and we never considered dropping that kind of cash on one machine. If anyone has economic justifications for buying one big machine instead of several smaller ones, I would love to hear it.
  • I'm not really sure either one is a "real" word. :)
  • Posted by viperx2:

    This is a dangerous obsession. But the Linux guys aren't the half of it. Windows wants nothing less than total global computer domination. Anyone who disagrees is a fool. Linux was started with the notion of putting Linux on a PC. Whoopee. Who cares. Then there was the amazing free policy with Linux.

    FREE? Since WHEN is software free?!

    Suddenly Microsoft aggressively tries to force Netscape out of business, crushes any competition with IBM OS/2 Warp, and makes Macs look like jokes, when both are very capable operating systems. Do you think that this has changed?

    I recently saw the "Pirates of Silicon Valley" movie, and am I sure that all of Microsoft's NT team is working 90 hours a week, around the damn clock, with nothing on their minds but making this OS faster and faster? Wrong. They are there to beat Linux. To destroy it. As they have the legacy of Netscape and countless other software companies that we never heard of, because they were bought for code, supplies, workers, whatever.

    I feel that the Linux people are more committed to the goal of showing Microsoft that we won't put up with aggressive tactics, false advertising (case in point) and other things. I'm sure the list goes on and on.

    Microsoft is now "The Machine" that Bill fought so hard against back in the day. He stole. Linux will CREATE. That is only my opinion.

    Oh, and since I got into college, more and more I find that coding is art, not science. There are eclectic coding artists all over the planet, and I am sure that they are very passionate about their cause. I hope to be one of them soon.

    So next time everyone is yelling at each other about which is better, think of it more like art critics. They can ramble on forever.

    Viper-X
  • This comment really applies to several of the posts, but I didn't feel like posting until I got this far down.

    If you follow lkml (the linux-kernel mailing list), you will see that several of the points mentioned have been addressed in 2.3.x. The one that I remember specifically is that throughput now scales linearly with the number of network cards. IOW, Linux should perform nearly as well as NT with 4 NICs. I believe that the TCP/IP stack is/will be multithreaded as well, but I'm not certain.

    Avi
  • To give you a specific answer, I've found that MS Exchange and Visio, when run at the same time on NT 4.0 with SP3 or SP4, lock up about half the time. No idea why, MS has no idea why. But that's the kind of annoying thing that's pushed me to Linux.
  • How about a test where half the load is web serving, and half the load is file sharing?

    Better yet, how about a test which runs for 1, 2, 4, 8 and 24 hours, with these test bed loads. And say 1000 random static web pages.

    Let's see how long it takes for each web server's performance to degrade due to memory leaks and other problems.
  • I personally only use RAID 0 if the server is in a cluster; in other words, there is another box there to pick up if a hard drive should crash. For instance, how about an MSCS Exchange setup? Two servers, two sets of data, why WOULDN'T you use RAID 0 at LEAST on the primary? You don't need all the internal fault tolerance when you have another box. Of course, the cost......

    ALG
  • We have a small office using NT SBServer SP4 and 20 NT clients with SP4. We have no professional system operator.

    Status after a few months:
    BSOD on NT clients while using Word97 with large documents.
    BSOD on NT clients using Paint Shop Pro.
    100% chance of a BSOD on NT clients sending a print task to the v1.0 HP4500 Color printer driver (don't try this at home).
    Reinstalled several NT clients after unrecoverable system hangs while booting up in the morning.
    Rebooting the NT SBServer at least every 3 weeks (sometimes in the lunch break) when it starts to have problems with file serving and Exchange (cleaning up the mess).
    Once the server just forgot all dial-up networking settings (used by the Proxy and Exchange servers).

    I suspect the only reason the server doesn't have more problems is that it runs no user apps.

    We use a Compaq PII 350 server with brand-name PII 350 clients in stock configuration.
  • Actually, it said 'Freeware' in the original article too. Also, Open Source is called "Open Source" in German too. Furthermore, c't is certainly aware of the term, so I'm a little confused myself why they used 'freeware'.

    chris
  • That's funny (tm). My NT web app server running w2k beta 1 hasn't BSOD'd or needed a reboot in 666 days. These false claims of NT instability are a favorite tactic of Linux/Apache advocates :(

    How did you get 666 days of uptime and still install w2k beta 1?

    Does Microsoft allow you to upgrade your OS without rebooting now?

    The wheel is turning but the hamster is dead.

  • Not true. I've seen much more port aggregation than I've seen Gigabit. It's cheap and more easily supported.

    ALG

  • Reminds me of that joke where this patient goes to a doctor and says something like:

    "It hurts when I do this"

    "Well, then don't do that!", the doctor replies.

    I think NT's reliability has been rehashed over and over, and I hear more complaints about NT crashes than about Linux crashing. I hear more glowing reports about replacing NT with Linux than the other way around.

    Then again, YMMV.

  • That's because you're running netscape. www.microsoft.com serves all IE page requests and then, if it's not doing *anything else*, serves non-IE page requests.
  • I don't think so... The problem with CGI on NT is with NT's hideous process model.

    Why would I want to use a proprietary port of a proprietary standard on my Linux box? Why would I limit my choices by using a "standard" that runs on relatively few OSs?
  • And I also suggest you read this article [zdnet.com], especially the part about "Unfortunately, perceptions of the Linux community are shaped by Web sites such as www.slashdot.org, where self-styled experts who have the collective IQ of an AOL CD post inflammatory propaganda." Linux will never beat Windows if you keep degrading it down like you do.
  • From personal experience, I *know* NT is that unstable. I worked at a place with 2 linux boxes and 2 NT boxes (they have more now but that's not the point). Linux ran just fine. Other than a hardware malfunction (which didn't cause a crash...it just slowed it down), linux would keep chugging for months at a time.

    NT on the other hand was hell. Lockups happened almost every other day, depending on how busy the servers were. It got to the point where we decided to just schedule a nightly process to reboot the servers.

    NT was serving web pages and files. Nothing else. Linux was handling mail, DNS, samba, FTP and whatever we wanted to play with at the time.

    To give them a *little* credit, I do think development time on NT can be less than on Linux. However, the development time was far overshadowed by the hours spent troubleshooting problems caused by Windows' instability.

    BTW, I think this post was a troll.
  • Are all the man pages and everything in German? I've used German and Japanese NT. I can't read those languages, but I'm impressed that software could be customized so much. I've read that NT is localized to 120+ languages and that about 70% of Microsoft's revenue comes from outside the United States.
  • The best thing about this is showing that the only thing slowing Linux down in the other benchmarks is the *four* network cards they added to serve *static* web pages. With one network card, Linux wins. Linux also does better with dynamic pages (and open standards).

    Therefore, we can laugh at all NT advocates that claim superiority in benchmarks due to (a) moving stuff into the kernel (b) superior design. (unless they want to use multiple network cards... hmm.)

    I guess the only thing we need to improve is simultaneously using more than one network card, but the static serving of web pages should not be the task that we need to improve it for... (ooo, benchmarking enhancements...)
  • In recognition of the Micros$ft campaign to end the so-called millennium with a benchmarked bang, I would like to name my latest theorem the Millennium Theorem. Here it is, along with a two-part theorem which will be a joy to all you mathematically inclined /.ers.

    Th. (Millennium Theorem, the): Proving NT is better than Linux is equivalent to factoring a prime.
    Proof: By construction. We will construct a method to factor a prime, but it will work only on NT, and not on Linux, thereby showing that Linux is inferior. So once the hard work of factoring is done, the superiority of NT follows automatically, which is the trivial second part.

    Write a program to iterate through sets of integers of increasing cardinality, i.e. start with all possible combinations of 1 (then 2, then 3...) integers, multiply them and compare to a known prime (see note 1). When this program is written in Visual C++ and run on a P-III box with at least 128 MB RAM and 4 100 Mbit Ethernet cards, it will terminate (see note 2) in less than 2 seconds. For any configuration of Linux (libc5, glibc2; xterm, rxvt; egcs, gcc, etc.), the inefficiency of the OS will prevent termination.

    The result follows.

    Notes
    1. A list of all known primes appears in the book "The Road Ahead of $$$" by William Gates Soph.
    2. Mathematically minded stupid Linux people will protest that termination is not guaranteed (the really stupid ones will insist it is impossible). These people should read the conclusive assertion about termination in the book "Discontinuities on a Mobius Strip" by William Gates Soph.

  • Okay. Say some company comes out with a web server that can put out 1,000,000 web pages a second. Yeah. One _million_ web pages a second.

    But, there's one downside. The server is very flaky. It needs rebooting. You pretty much have to have a staff of 20 fulltime 24/7 people to keep it running.

    Meanwhile. Someone else has a web server that only puts out 750,000 wp/s. However, you can have a single person run the server from 8-5.

    Anyhoo. Granted. These are benchmarks and only benchmarks. I'd be more interested in seeing more 'real world' benchmarks. I don't know why, exactly, you'd spend $100k on a quad Xeon when you could have 10 dual Xeons for that much and have redundancy upon redundancy.

    And, I doubt that it's very realistic to serve all dynamic pages from a single box. As everyone knows, Slashdot runs everything from a single box, or did up until a few weeks ago. At around 500k hits a DAY it poops out. If you were doing a larger-volume site than that, you'd need multiple DB servers and stuff.


    Anyhoo.

    It just reminded me of the Hulking Giants and the Priesthood of the IBM computers from the early 60's. A company called Digital came along and didn't require all of the people to maintain the computer like the Hulking Giants required.
  • I started with Linux with Volkerding's "Slackware 96" book and CD set... (kernel 2.0.0). Mind you, I had some experience with Solaris, IRIX, etc... Most of the stuff was irrelevant to me, but on scanning through it recently, it covered all of the really useful starter material, and didn't really deal heavily with development. I believe Volkerding is still releasing a new book with each new release of Slackware. With the exception of RPMs (you should be using source tgzs anyway) and the RH net setup thingies, it covered just about everything you need to know to get going. I know Chapters up here (Calgary, Canada) stocks a large number of beginner-to-intermediate Linux books - just go into a big bookstore, like Chapters or Indigo - most of the books are reasonable.
  • by redelm ( 54142 ) on Wednesday June 30, 1999 @03:43AM (#1825296) Homepage

    c't (IMHO the only independent mag left) has done much more realistic testing (page sizes, static vs CGI, load, SMP) and reported their full results. At less than 1000 hits/second, Linux soundly trounces NT.

    But look toward the end of the article: with dual 100 Mbit NICs and 1000+ hits/sec loads, NT pulls ahead. Clearly, something could be optimized further in the Linux TCP/IP stack or Ethernet drivers. Perhaps finer-grained kernel locking? Maybe we should thank Mindcraft for helping debug Linux! I'm sure it was by accident.

  • The integers 1 and the known prime should be left out by definition, of course.
  • FreeBSD's SMP support is still new, and from my simple experimentation appears to be similar to Linux 2.0.x -- one big lock. I dunno about Net/Open. BSDers feel free to correct me if I'm wrong.
  • Perhaps we should just admit that NT is, for the moment, a better high-performance web/file server. There is no shame in giving Microsoft their due. Whether they can continue to be the front runner is the question. Apache/Linux is a fine combination for webservers on a T1, which describes most of the webservers out there.

    The real question is, do you want to come in on the weekend to reboot NT? ;)

    Coding is more productive than benchmarking anyway.
  • Item: Walked in one morning to a new client I was doing some work for. After the initial "here's the computer room" talk, we walked in and I hit the Windows key. System hard-locked. I expected the client to run out, I even got my car keys ready. "it does that all the time". Not a moment of concern from him.
    Item: We have an NT box at work. When getting files off the Samba server, it crashes. It was a factory install, from a very popular, reputable vendor. It is designed to be as untweaked as humanly possible (it's a testbed for a client). Therefore going in and optimizing everything would be against policy; but it's crashing and 2 NT guys can't figure out why.
    Item: Another NT box here crashes randomly about once every week. All it ever runs is Netscape and a few things like Office and Pagemill. It is likewise from a reputable vendor, untouched internally.
    Item: An NT machine at a client site went down. The box didn't have a power button - NT is so reliable, after all. I drive in to reboot it. I push the reset button. Nothing. I have to crawl behind the rack (I'm 6' tall) and unplug it. Customer response: "That happens sometimes." Apparently the short guy wasn't there to fix it.
    Item: My Linux machines have rebooted only due to 1)kernel upgrades or 2)extended power outages that strain the UPS. And the occasional HD upgrade or other hardware change. The NT boxen in question were running simple stuff; they were just sitting there, running IIS or Proxy Server or whatever. They weren't running a zillion apps. They weren't on boxes thrown together from spares. They were machines built to conform to the HCL, with the idea of servicing big clients, who do big things. You, sir, apparently have a Magic Dog.
  • I think your point number 3 is exactly what this benchmark is supposed to say to the NT zombies who, citing the Mindcraft tests, had recently been droning on about how NT is faster than Linux on all hardware configurations.
  • NT5 Beta 1 was released in September 1997.
  • not for me, mind you, but for the guy I'm installing for.

    he doesn't program, and isn't exactly fluent on his Windows partition either. I figure setting up RH6.0 is a good option because a) I have it b) you don't ever /have/ to see a command line c) I don't have Caldera :)

    he's really into this, however (which is great), but I don't know a good reference to steer him to for everything from /very/ basic command line stuff on up... any suggestions? he's been reading the websites mentioned (plus slashdot :) but I think he'd do better with some form of paper...

    Lea

  • Well, this may be one of the things that slowed Linux down a lot, but it is not the only one.

    Hopefully, with the new scheduler, a lot of patches, a threaded IP stack (this seemed to be one of the things that slowed Linux down) and a way to bind a card to a CPU, Linux 2.4 can severely improve its performance.

    I don't say that all these things are planned for 2.4, but since these seemed to be the major bottlenecks in the kernel, there will probably be some people working on them.

    BTW: it really seems that MS searched for a flaw in Linux and did a biased benchmark to show Linux in a very poor light. They biased the benchmark so that the Linux community would respond by asking for a re-run, which they would lose again (because of the Linux flaw MS had found).
    This may be a little bit paranoid, but it was a good try by Microsoft.

    And also I want to thank Microsoft for founding a research lab to find Linux weaknesses so we can fix them ;) (we can see that they needed to go to some high-end hardware (for the x86 architecture) in order to beat Linux. Isn't that a compliment???)
  • Being a complete newbie, can someone point me to a site or tell me exactly what makes Linux et al. different from other OSs and how it is used as opposed to, say (I hate to say this word), Windows?

    Ta from a hopeless girl.
  • So they could also use the English version of NT and include English SP5.

  • It's great that some nice folks are trying to show how little the Mindcraft study means when viewed in the cold light of reality. We can also expect some other nice folks to show how Linux and NT uptimes and general reliability compare in real world settings.

    Unfortunately, it won't matter much in the PHB world.

    I imagine that for the next 5 years or so, whenever I am in a meeting where server issues are being discussed, the pro-MS types will repeatedly and consistently drag out the Mindcraft study to back their claims and nobody will want to hear about any other study talking about real world conditions. If I do bring them up, somebody will say "Oh Mike, we know you are an anti-MS bigot, your studies are nothing but sour grapes pressed out after the Mindcraft benchmark clearly showed Linux to be inferior."

    Am I being too cynical?

    Think about it, Mindcraft benchmarked the specific scenarios where NT performance is better and left out all other scenarios. Clearly even before the "study" MS had run a whole suite of tests and chosen the specific scenarios to be publicly benchmarked by their independent vassal. These results are then broadcast loud and clear to all the check signing PHBs, and the Linux folks have to acknowledge their validity because the Linux camp willingly participated in a skewed study. We got duped and from now on, this will be THE valid study of NT vs. Linux. All other tests will be too late and too bad. Vexed to nightmare by MS-Marketing.

    *sigh*

  • by EisPick ( 29965 ) on Wednesday June 30, 1999 @03:57AM (#1825312)
    An important point gets lost in all the discussion about these benchmarks: Both NT/IIS and Linux/Apache perform astoundingly well, and both perform much better than they would if the other didn't exist. Developers for both sets of products borrow good ideas from the other, and both race to make improvements to keep up.

    MICROS~1 flacks like to blather on about needing their monopoly position in the market to protect their "freedom to innovate," but where they have no competition, they don't innovate. Why can't they acknowledge that the only reason IIS doesn't suck is because Apache exists?

    I wish they had similar competition on the desktop. If they did, maybe I wouldn't need to reboot my Win98 4x/day.

  • Sorry, apparently I failed to be clear: I love the Gates quote because it makes him sound stupid. Right: either 1 and p don't "count", and there are no factors, or they do, and they're the only ones. Either way, it's not the intractable problem that it is often mistaken for, and on which RSA encryption is based.

    I wasn't referring to the Gates quote, however, even though I mentioned it in passing. What I am amazed, amused, and a bit depressed at is how often people here make the same mistake in their /. posts. I remember a recent thread (too lazy to look it up, though) in which a gentle reminder didn't even work (I wasn't involved in this exchange; I just remember reading it.) It went something like:

    >>>>>[...] how to factor large prime numbers [...]
    >>>>I can factor large prime numbers in my head, instantly. Try me.
    >>>Oh yeah? Factor [some large number].
    >>One, and [the large number], assuming it's actually prime.
    >Doh!

    See what I mean?

    David Gould
  • I guess people replying didn't really read what I wrote. I said ``[RAID 0 is] quite common in many shops that need high reliability, because you don't use software RAID or internal RAID controller cards in such systems. You use external RAID boxes.'' [Emphasis added.]

    External hardware RAID is a heck of a lot more reliable than, and usually faster than, software RAID and in-box RAID controller cards. (A typical setup has two SCSI controllers in the host, each hooked to one of the two controllers in the RAID box, so you can lose a host controller, a cable or a RAID box controller without going down.)

    cjs

  • because even though I've been using computers (Amiga, then PC) for 18 years, I have no idea what makes OSs different from each other.

    I can use almost any Windows-based program and master it within an hour or two (this isn't bragging, just fact), but tell me to learn programming of any type and I get a little daunted. So OK, I taught myself BASIC when I had a little 64, then learned some DOS through necessity, and mastered HTML because I was damned if I couldn't design a website better than any I saw back in 1994 when I first got online.

    Then I decided DESIGN was where I was at, not development, but then I found out you need to combine the two, and I had to learn all kinds of new applications.

    Then the gods who are Macromedia invented these great little things called Shockwave and Dreamweaver and I didn't have to learn all the new stuff anymore... whether this is a good thing or not, well... that's for each individual to decide...

    The bottom line is, thanks to my incredible laziness mixed with a determination to do everything for myself and better than everyone else, I find that I NEED to be able to use Unix/Linux effectively and not have to run to the guy behind the box :) Love Ms Jute aka Natalie. Domestic sites: http://www.geocities.com/Paris/5380/cyberhussy/
    http://www.geocities.com/Paris/5380/
    Commercial site: in progress; launch date about end of July
    ICQ: 200390
  • The 666 days (number of the Beast) should be enough of a giveaway...
  • Our NT sysadmins say that MS currently recommends you stay with SP4 unless there is a particular problem with it that's fixed with SP5.

    I agree with the earlier comment that the tests should really run for many weeks at a time - I've just spent 2 weekends debugging an NT box that keeps crashing. But more to the point, if SP5 has performance features, they are no use if they cause crashes during a test like this.

  • The heavy load is mostly due to heavy rendering on dual PII 300s.

    I'm no longer worried about it because the rendering software has been ported to Linux. The crashes would happen after about 3 days of rendering 24hrs. This happens to NT in a lot of animation shops. They are not meant for high-end computing... service packs are in place and all related hardware is top notch.

    Glad that you have fun with NT, but for some reason all of my UNIX workstations (under the same 24/7 load as the renderfarm of NTs) rarely hiccup.
  • There are versions of Linux in a vast array of languages, from French and German to Korean, Chinese, Japanese and Icelandic - the latter I think illustrates how open source is crucial to smaller language populations, since Microsoft very publicly refused to do an Icelandic version of Windows.
  • Our company manages several very large database-driven web sites. There are applications where you need a single, very large system, supported by multiple web servers. For example, a banner network may have multiple low-end web servers cranking out banners, while there is a big database server at the core, on a private switch with the banner servers and a bunch of NICs in it.

    These situations frequently go far beyond what even the most expensive Intel-based architecture can do. You can't get something like a Sun E6000 with Intel architecture for any amount of money.

    So yes, there are applications where you have to plop down $80k on a machine, but it usually isn't straight web-serving.

    In our experience, it is better to load-balance between cheaper and more numerous webservers than fewer, more expensive webservers.
  • Why, pray tell, are you "hopeless?"
  • It's amazing, amusing, and kind of depressing how many people get this wrong, especially on /., and especially on a page where I've already seen a sig of Bill Gates' immortal quote on the matter.

    Once again: factoring primes is meaningless as a problem, since the factors of a prime p are 1 and p. Factoring numbers in general is more interesting. You could even say it's easy, since factoring a number n is O(n*log(n)). It's only exponential in the number of bits, and I've always thought it was a bit weird to be impressed at the fact that the value of n is O(2^(log(n))). However, since log(n), i.e., the number of bits needed to represent n, can be set arbitrarily, it is hard to factor large numbers, but only because they are really large numbers.

    It's easy to write down a number that's so big you can't count to it -- try it.

    I know this is simple stuff, but people keep saying it wrong. They probably know it and are just speaking carelessly, but it really is a dumb mistake to make. Just say "factor large numbers".

    David Gould
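    The point above can be made concrete with a few lines of Python (a hypothetical illustration, not from the original thread): trial division factors n in O(sqrt(n)) divisions, which is cheap in the *value* of n but exponential in its bit length, and for a prime p it can only ever return p itself.

    ```python
    def trial_factor(n):
        """Return the prime factorization of n by trial division.

        Takes O(sqrt(n)) division steps: polynomial in the value of n,
        but exponential in the number of bits needed to write n down,
        which is the only reason factoring huge numbers is hard.
        """
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)  # whatever remains is itself prime
        return factors

    # "Factoring a prime" is trivial: the answer is just the prime.
    print(trial_factor(101))        # -> [101]
    # Factoring a product of two primes is the interesting case.
    print(trial_factor(101 * 103))  # -> [101, 103]
    ```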
  • Factoring the product of two prime numbers, however, is not so easy..

    :-p
  • Enough people have problems with NT that it would be silly to claim that NT doesn't share some of the blame.

    What are you running on it when it crashes?

    They are running NT when NT crashes. If you've managed to run NT without a problem for months, consider yourself fortunate and move on. Other people are obviously having less luck with NT.

  • It crashed because of something you did.

    That's what he said. He right-clicked on "My Computer" and it crashed. Ergo, it crashed because of something he did.

    Talk about obvious...


    ...phil
  • I remember reading that the performance of Beowulf clusters is made possible by chaining together multiple 100Mbit NICs, the exact configuration that the Linux benchmarks performed so poorly with.

    Looking at www.beowulf.org [beowulf.org], I see that they do use special software to achieve high performance with multiple NICs. In particular, Beowulf Ethernet Channel Bonding [beowulf.org].

    Has anyone tried this in a web server environment? Maybe this isn't the best long-term solution, but for now, it should kick ass! Perhaps c't could be convinced to try a retest?
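    For the curious, a channel-bonding setup of that era looked roughly like the sketch below. This is illustrative only: the interface names and IP address are made up, and the exact commands depend on the kernel build and the version of the ifenslave utility.

    ```shell
    # Hypothetical sketch: bond four 100Mbit NICs into one logical interface
    modprobe bonding                      # load the kernel bonding driver
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up   # bring up the virtual interface
    ifenslave bond0 eth0 eth1 eth2 eth3   # enslave the four physical NICs
    ```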

  • They replaced the outdated 2.2.5 with a newer 2.2.9 (which has since been superseded by 2.2.10).
    Nothing special at all, more like a service upgrade. They did indeed also upgrade NT with Service Pack 4.

  • That's funny, Win2k beta 1 hasn't been around for 666 days.


  • Using SP 4 instead of 5 makes sense, because Service Pack 5 was intended to be more of a bugfix release than anything. It's more stable (IMO) than SP4 or SP3, although there are known problems with RAS and a couple other things.

    However, the tests used Service Pack *3*, which not only is seriously old, but also misses several enhancements and security fixes added along the way.

    They did manage to upgrade the Linux kernel to 2.2.9, noting that the stock 2.2.5 kernel in their distribution was slower.
    --
  • While I believe that a CGI vs. ASP comparison might be interesting, from a maintenance perspective, ASP is a piss-poor answer. I wouldn't suggest that _anybody_ should lock themselves into a proprietary closed standard that limits them to _1_ operating system. The reason this can become a big problem comes alongside with your second paragraph. The fact is, that in any sufficiently complex real-world application, your web server is really only a small part of the equation. Your back end needs to be there. Why would you limit what you can put on the back end by tying the (relatively unimportant) front end to a specific operating system?

    Any developers who use "MS-designed dynamic content method[s]" are making life more difficult for themselves and their successors, who will eventually need to look at other, non-MS solutions. Essentially this person has left future developers with two options: a) stay with MS for all eternity, or b) redesign the site from the ground up. If you were an IS manager, how would you respond to that?

  • Do you have a URL that indicates that Microsoft used SP4 instead of SP5 for the ZD tests?

    --

  • I don't think the processor-NIC affinity is so ridiculous. It actually does make a lot of sense for those bizarre moments when you have >2 NICs.

    Apparently it will be enabled by default in Windows 2000, so I wouldn't call it cheating either.
    --
  • "Personal Home Page". The origin of PHP was in constructing the author's web page.

  • Unless you are serving to an Intranet, in which case you might actually have a switched 100Mbit network.

    Multiple NICs might not be that common for web serving, where speed can be gained through HTML+application design more easily, but it's used all the time for other things.
    --

  • Gee, seems like it takes more skill to run an NT box efficiently than a Linux box.

  • Stability and robustness are not just about how a system behaves when there's nothing wrong. Stability and robustness include how a system behaves when presented with problems. As for "running some strange random program from a noname dev shop," there are few (user space) things that I've seen choke a *NIX system that hard, regardless of who developed them. I vastly dislike the Microsoft apologist method of blaming any and all problems on third-party hardware or software. An operating system should behave properly, even where an application does not.

  • I found it kind of strange that Linux performed better with SMP turned off than with it turned on, unless you applied "special patches".

    http://www.heise.de/ct/english/99/13/186-1/pic10.jpg

    Consider that there's been a bunch of SMP improvements done since the Mindcraft test, and that it took an unreleased SMP kernel to beat a released uniprocessor kernel. (Are the "special patches" in 2.2.10?)

    No intent to spread FUD, but perhaps Linux's SMP support isn't quite ready for prime time. The numbers seem to look that way.

    (Perhaps this could explain the cognitive dissonance between the Mindcraft/ZD results and the average Slashdotter testimonial? How much better would Linux have done if they just turned off the SMP support?)
    --
  • > I'm expecting this to show up in the Microsoft Zealot camp (where do MS Zealots hang out, anyway?).

    comp.os.linux.advocacy

    They opened several "hahaha" threads within hours of the announcement of the results.

    Oh, yeah. And some of them habitually use the kind of insults and scatology that Mindcraft published as a "portrait" of the Linux userbase.

  • However, the tests used Service Pack *3*, which is not only seriously old, but also misses several enhancements and security fixes along the way

    Nope. Read it again. The box that was shipped to them had sp3 installed, and they installed sp4 themselves.

  • Slashdot isn't all that reliable. I have problems with it all the time. (Connection refused, partial pages, broken HTML, extra blank stories, etc...)

    I know people who have been 'bugged' hate to hear this kind of reply, but...I'd bet a mountain of money that I've spent as much time on slashdot as you have this year (I'm too ashamed to admit how much) and I don't remember having any problems with the site at all.

    Isn't it possible that the problem lies in some corrupting influence somewhere between the internet and your screen?
    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • It is apparent that NT whips Linux in SMP mode anyway... but I wondered about that myself. I still don't see it making THAT much of a difference anyway
  • crash an OS?
  • Mindcraft do a test that shows NT beats Linux. The Linux community cries foul. ZDNet offer to do the test again, this time with both MS and Redhat/Linux guys there. The Linux community cries foul.
    Some guys go and do the test themselves to show Linux is better, without Microsoft present. Linux community cries WHAT A WONDERFUL benchmark. Perfectly done, and oh, look, Linux won.
  • I suspect that PHP/Apache/Linux would blow the doors off of VB/ASP/IIS/NT

    Actually, PHP's performance isn't that great at the moment for high loads. However, it *is* good enough, and the flexibility it gives is sufficient that I for one use it on my sites. Either way, Zend [zend.com] should send PHP performance through the roof in the near future. They have some simple ASP v Zend benchmarks on the site, too...

  • Personally speaking, I came out against the initial Mindcraft benchmark because it was shoddily done, with an eye to producing the numbers Microsoft wanted. There were demonstrable errors in the running of the benchmark that heavily favored NT. I also have serious problems with Mindcraft the company (or Mindcraft the guy); the benchmarks prove that he has absolutely no integrity.

    I don't have as much of a problem with the second set of numbers, although I think the c't tests showed that the platform and setup were carefully chosen to stack the deck against Linux (besides being completely impractical - you're going to serve enough data to keep four 100Mbit cards busy, and you're installing a RAID, but you're not going to protect any of the data you're serving with RAID 1 or 5? Right...)

    I also didn't have much of a problem with the earlier PC Week benchmarks that positioned NT in a good light - although I still think that, if they're going to evaluate ASP, they need to evaluate mod_perl or mod_php as an equivalent.

  • Well - it's not really ugly when you like green. And for Siemens it is really good design anyway. On the other hand - who ever sees a server?

    Bernd
  • Several reasons for people not believing the Mindcraft benchmarks have been given.

    One rather important one seems to have been left out:

    Our own personal experience suggested completely different results.

    If you're standing in a room with a green chair, and somebody walks in and says, "Hey, what a cool zebra you got there", odds are you're going to disagree with them.

    That's what happened here.

    scottwimer
  • Um, RAID 0 actually reduces reliability.

    RAID 1 (mirroring) provides high reliability. RAID 5 (striping with parity) provides high reliability and more space with a speed penalty. RAID 0 provides more space and lower reliability, since any one hard disk failure causes the whole array to fail.
  • I did this several years ago when I was new to UNIX... There are several things one must know before it's easy to find more help...

    I suggest just going to any old bookstore and looking at various intro to UNIX books. Before picking one at least read the table of contents and the first couple of pages to get an idea of the quality...

    I believe I got an Osborne book at the time and I thought that it was fairly helpful, but from what I remember, it was geared more towards things like awk/sed/sendmail/etc than ls/cd/cat/man/more although it almost certainly mentioned those (I think I skipped those sections). From what I remember, it also gave a decent intro to the UNIX design philosophy (lots of little tools) which is a must.

    The best thing to do might just be to write basic commands and their meanings on a sheet of paper and be sure to tell him about man... As long as he has patience and isn't afraid to try things out (read: doesn't run as root... possibly not as himself if he has important files), he'll learn that way...


  • Well, this probably would be interesting from a technical viewpoint, but Linux is in the limelight, FreeBSD is not. The managers that are afraid of using Linux now would probably be even more afraid to use FreeBSD because they haven't heard of it. This is not because of technical merits but because of press coverage.

    It is good to have Linux opening the door to other free software. Maybe in one or two years people will be more used to free software and may begin to try *BSD or other free software (not only OSes).
  • Please check out Netcraft's Web server survey [netcraft.com] where it is quite visible what's been happening since Mindcraft-1.

    IIS goes up, Apache goes down.

    This is the exact opposite of the previous trend over many years, which just goes to show that Microsoft's Marketing Department haven't lost their mind control quite as much as many people believe. FUD still works. Meaningless benchmarks showing that my Mini accelerates faster than your Ferrari (carefully ignoring the fact that the Mini was being driven off a cliff and the Ferrari was towing a double-b up a steep hill) still work.

    I think it's time that people just give up advocacy and accept the fact that there will always be stupid people, and those stupid people will easily be duped by marketing departments of large corporations. Stupid people deserve to run Microsoft Windows. Let them run it. Let them put up with its incompatibilities, its pathetic security, its poor performance, its total instability, its lack of standards conformance. Their smart competitors will soon crush their business. Their systems will run into the ground. It happens on a daily basis all around the world already. And what do the smart consultants say when they're called in to fix the problem?

    "I told you in the first place you should have run Unix. By the way, my fee has tripled."

  • by Anonymous Coward
    I think denying benchmarks until you find ones that match what you want is a Bad Thing(tm). While these may be 'more fair' Linux still has a number of problems. Scheduling, and obviously some IP stack probs.
  • You didn't look carefully enough at the survey. The June numbers show IIS's percentage of the market declining for the third month in a row. And while the graph shows a downtick in Apache's numbers after nearly two years of uninterrupted gains, this is entirely due to Europe's largest Web hosting service reconfiguring their Apache servers to report as "CnG Webspace Server - based on Apache (Linux)" instead of just Apache. Put those machines back into the Apache fold and Apache would have displayed another .42% market share gain.

  • Right, SMP support was also lacking, but since that didn't slow down the dynamic pages any in comparison to NT, it isn't really a benchmarking problem here, just something to improve. :)

    Although performance didn't scale linearly, and the curve goes down some on both Linux tests, most notably the 4-CPU test, 4 CPUs under Linux still did at least 4 times better than 4 CPUs under NT, and that says a lot.
  • What exactly is it that you're amazed, amused, and depressed at? We're making fun of Bill G. precisely because his quote doesn't make sense, either if you leave 1 and p in, or leave them out. So your remaining discussion is nice, but irrelevant.
  • "Coding is more productive than benchmark anyway"

    A good bug report is very useful, and it can be harder to find people able to file a good bug report than to find people able to code.

    A benchmark is not a bug report, but the aim is to test software/hardware under different kinds of pressure.
    When there are enough details on the config, a benchmark can be useful for detecting flaws, so it can be useful too.
  • It seems that the battle between Micro$oft and the Linux community will never end. I just want to know: who owns the truth?
  • by FutileRedemption ( 30482 ) on Wednesday June 30, 1999 @04:04AM (#1825387)
    Read harder.
    And think harder.

    NT was significantly faster than Linux in one "pretty much" unrealistic benchmark.

    Linux was massively faster in two "somewhat" unrealistic benchmarks.

    Linux was slightly faster in the other benchmarks.

    So please tell me why you think that NT/IIS is "a better high performance web/file server"???

    The point is that NT is optimized for one single case, possibly only needed by something like a mega high volume porn site (static pages, as Alan Cox pointed out), and linux does better in all other cases.

    And if you want to serve MANY files, you need to buy TEN NT boxes instead of ONE Linux box.

    For the T1: Linux is a fine solution for a 100MBit site. A T1 is 1.5 MBit.

    Please do not confuse the facts.
  • by tgd ( 2822 ) on Wednesday June 30, 1999 @09:32AM (#1825388)
    Well a few points:

    1) NT tends to not handle high loads as gracefully as Linux. Linux/Apache tend to slow to a crawl under high loads, but I've never managed to crash the server. I've done that a bunch of times under NT (usually when running SQL server and IIS on the same machine... but running various combinations of Oracle, Sybase or MySQL with Apache under Linux doesn't seem to cause a problem...)

    2) I've found the most unreliable NT servers are the ones that people have been hacking around on, tweaking, etc. Vanilla NT with everything else carefully installed seems fairly stable. Mind you, I'd never run a serious application on one, but you CAN get them working reliably. It's harder to keep non-administrators (ie, clueless management) from messing around on NT servers than Linux servers. (I once built a sandbox system that ran a clone of the system in a chroot'ed sandbox with the logins on the first six vc's pointing at it, with the one on nine pointing to the real system -- I guessed that the owner of the company I was working for at the time was monkeying around in the system and that's why Linux kept crashing. After doing that, the sandboxed system kept flaking out, but the production one stayed up!)

    3) Buggy COM objects and ISAPI objects are prone to crashing various parts of NT, like IIS and, for whatever reason, causing bluescreens. Lots of sites use not-so-stable third party COM objects and ISAPIs, and from experience, it can be a real bitch to figure out what's causing the server to crap out under high loads in that case. The last major website I built using NT, I ended up rewriting all the COM objects we'd bought in Java so I had source and could fix the bugs that were my fault. :)

    4) Slashdot isn't all that reliable. I have problems with it all the time. (Connection refused, partial pages, broken HTML, extra blank stories, etc...)

    #4 isn't a flame at slashdot -- god knows I spend enough time on here commenting on things and reading the site. Slashdot is amazingly stable for the way it's architected (running on a single server, no redundancy at the server level or the hardware level, etc...) I wouldn't run a high traffic site I was paid to build like that, but they're doing great for bootstrapping the site themselves.
  • NT running Apache? Seems that would eliminate whatever advantages Apache might provide.
  • I have never had a problem with slashdot.
    not once.

    I used to be on a modem, now I'm on a 10base network connected to a T3, and I haven't had problems with slashdot....

    good job.
  • Comment removed based on user account deletion
  • Question: How long has your system been running without a reboot? Yes, I know you haven't BSODed in months, but if you power down or reboot every day, it proves nothing. Even Win9x can manage (most days) to run for a day without rebooting.

    NT Workstation is much better than 9x as a workstation OS. But as a server, it still doesn't cut it.
  • No, I know both C and perl and like both quite a bit. I use them for different things, but when speed isn't an issue and text processing is, I absolutely love perl.
  • Was w2k beta really released 1 year and 10 months ago? And they still haven't released the actual w2k? And this is a real server which gets used significantly all the time?
  • by kwalker ( 1383 ) on Wednesday June 30, 1999 @04:29AM (#1825437) Journal
    Something that bothered me about the Mindcraft studies that was partially explained in the earlier article posted here about saturating a T1/T3 on a single-processor Linux box, and still further explained in this article...

    If NT is such hot stuff running a webserver, how come so many NT servers die horribly when they're slashdotted, yet slashdot (P2x2 256MB ram if I remember correctly) has enough processor time and bandwidth left over to customize the interface and most of the pages that it spits out? I have seen so many high-traffic NT sites bog down and sometimes just not respond when they get busy, yet most Linux/FreeBSD servers keep chugging right along.

    I wonder if there's a way to benchmark that...
  • I love the web - it is the great equalizer. Bad benchmarks like Mindcraft can be shot down in quick order. However there is one test that would have crushed NT in BOTH tests. It is simply this: Conduct the test over a 6 week period.

    Having worked in an ALL NT house and now in an ALL UNIX house I can tell you that the NT/IIS server will crash NO LESS than 8 times in 6 weeks and require hours to fix/restart. That has been my experience at a company that had 80+ NT servers doing real life web application work.

    I used to complain that LINUX/APACHE was no match for NT/IIS because the application platform from Microsoft is simply amazing. I've since seen something called PHP3 and that looks as good if not better than IIS. Does anyone have any experience with PHP3? Is it very powerful?

    --Pete
  • by Anonymous Coward
    How come they didn't use Service Pack 5????

    They use the latest and greatest Linux version.
    Yet they failed to use SP5 for NT?

    SP5 has many performance enhancing features for multi-CPU configurations.
  • Yes PHP is an amazing programming language for the web. I use it every day at work, and every day I find a new functionality I did not know about, and I'm happy. Development is still going very fast on PHP3, with PHP4 on its way to beta by the end of the month.

    For example with PHP you can generate on-the-fly GIFs, but also on-the-fly PDF files! With database integration, think about all the possibilities you have. I'm working (among other things) on a FULL template system which not only allows you to change the HTML of a page, but also all the images/buttons, without having to recreate them all. You can use TrueType and PostScript Type 1 fonts in your GIFs. The only problem with the generated GIFs is that they are RLE compressed, so I pipe them to gifsicle and I get a maximally compressed file.

    Btw do not compare IIS with PHP. IIS is a web server, PHP a scripting language for Apache (and CGI). If you want to compare, go with ASP. There are many pages comparing ASP/Perl/PHP. I hate ASP so I won't speak about it. I'm a C developer, so for me PHP with its C syntax is a dream (no mallocs!). Perl is not structured enough for my liking.

    I think every web developer should give PHP a try. It changed my life 8)

    J-F Mammet
    webmaster@softgallery.com
  • Hmmm. Better than all the rest? I don't think it's necessarily better than all the rest.


    Compared to most other operating systems it has pros and cons. Compared to windows however it's significantly better in a number of key respects. The much decried text interface is the key here.


    First, Windows integrates the windowing system into the basic operating system. If your windowing system is FUBAR, you can't do anything except reboot. Text-based operating systems, OTOH, allow you to log in and use the text-based utilities to find and fix the problem. It's possible to stop and restart the windowing system without rebooting the whole machine.


    Secondly, the use of plain text gives you great flexibility. The utilities supplied with most Unix-like systems, including Linux, generate and process plain text. If you want to find all files containing a particular string and change that string, you have a utility that finds and lists files according to certain criteria, e.g. filename ending in .txt. The output from this can be fed to another utility that checks whether those files contain the string you want to change, and a third utility that actually makes the change. In Windows, on the other hand, everything just creates another window. There is no way to take three programs that each do part of the job and chain them together to do the whole thing.


    Generally Windows makes the easy jobs easier (provided you want to do them the Microsoft Way (TM)). Unix and Linux make the hard jobs easier.
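    The find-then-change chain described above can be sketched as a shell pipeline (the filenames and strings are made-up examples, and GNU sed's in-place flag is assumed):

    ```shell
    # Find candidate files, keep only those that actually contain the old
    # string, then rewrite the string in place (sed -i is the GNU form).
    find . -name '*.txt' \
      | xargs grep -l 'oldstring' \
      | xargs sed -i 's/oldstring/newstring/g'
    ```

    Each program does one part of the job: find lists the files, grep -l filters them down to the ones that match, and sed makes the edit; the pipe is what chains them together.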

  • by Ristoril ( 60165 ) on Wednesday June 30, 1999 @05:22AM (#1825493)
    I hate Microsoft. I think their software is bunk.

    With that out of the way, I do have an observation that I believe is worth consideration.

    When Mindcraft came out with their benchmarking tests, this place (as well as their mail server) was flooded with 'what a bogus test!' 'you MS whores!' and the venerable 'go f*ck yourselves!'

    However, when these benchmarks come out, and say that Linux beat NT, they are automatically heralded as The Truth. Now, I really do like the fact that Linux has been 'vindicated', but what guarantees do we have that these tests were any less biased than the ones that said NT won?

    I know a lot of you will think I'm a heretic, but we need to present an image of being clear-headed observers. The way not to do this is to automatically discount every benchmark that says NT is better while automatically accepting benchmarks that say Linux is better as God's Own Truth.

    Just so I can be sure you guys understand, I'll reiterate:

    1. Linux rules
    2. Microsoft sucks
    3. A benchmark is not trustworthy merely because it agrees with your beliefs
    Ristoril
  • ASPs are slow. See the benchmarks done by the mod_perl people on perl.apache.org. NT is notorious for slow dynamic content. Unless you write everything as ultra-optimised ISAPI dll's you'll suffer the same fate - as I've experienced to my great embarrassment - gladly I've vowed to never take *that* route again... :)

    Matt.

    perl -e 'print scalar reverse q(\)-: ,hacker Perl another Just)'
  • IIRC, the "486 and higher" PCs were client machines making the http requests to the test server. An http request from a 486 taxes a server just as much as a PIII's request.
  • by FutileRedemption ( 30482 ) on Wednesday June 30, 1999 @06:12AM (#1825541)
    - pretty much everybody uses one of the configurations c't tested

    - pretty much nobody uses or will use mindcraft's setup (raid 0, 400 mbit net connection, 4 way xeon server to exclusively serve static pages)

    - like probably no other magazine in Europe, c't is renowned for independence, objectivity, and competence

    - what is known about mindcraft is that they did another test some time ago, seemingly with a setup advantageous for NT (against Novell Netware)

    and:

    - the mindcraft test was paid for by Microsoft, mindcraft conducted the test in a Microsoft lab, mindcraft used Microsoft email accounts

    - the c't test was paid for by Heise Verlag. And by the way: Heise runs Solaris.

    If you are really objective now, what will your conclusion look like?
