Are Linux Transactions Slower Than Win2k's?

FullClip asks: "In the July issue of PC Magazine, Red Hat Professional is compared to Windows NT/2000 on the basis of ServerBench, which tests the maximum Transactions Per Second (TPS) for a given number of clients. Red Hat 6.1 (when tweaked) matched the performance of Windows, but showed a terrible decrease in performance at about 24 clients, dropping to a weeping 20% of the level that Windows was able to maintain. Somehow this disturbs me. Doesn't Linux perform better than that in client-server environments? If someone can point me to a non-FUD benchmark site, it would be appreciated..." Is this yet another case where benchmarks have been skewed severely to show a deficiency that doesn't exist? Or is this another area where Linux needs improvement? [Updated 6 July 2000 2:15 GMT by timothy] You may want to compare this with the far different results reported by SpecWeb.
  • by Anonymous Coward
    You people should never pull your Cox out at the last minute.
  • by Anonymous Coward
    After a quick look through the benchmark (their executables aren't stripped), fascinatingly, it appears to make (extensive??) use of AT&T shared memory and message-queue constructs (i.e. shmget, msgsnd, etc.).

    I've never looked, but (given the age of the manpages) I wouldn't be surprised if this code isn't heavily optimized. I suspect this because, in my experience, very few free software programmers actually use these constructs (portability??). As a result, there wouldn't be a huge push for fast SysV IPC.

    Furthermore, I wonder about scalability of IPCs. A long time ago I ported a program (making heavy use of IPC) from AIX to Pyramid's flavor of Unix (does anyone remember what it was called?). The kernel default message queue size only allowed 8 messages while AIX allowed 256. As you might expect, this created a minor scalability issue.
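
    A minimal, self-contained sketch of the System V message-queue calls mentioned above. It's illustrative only: the IPC_PRIVATE key, the message layout, and the 0600 mode are arbitrary choices, and the EAGAIN path is simply how a small kernel queue limit (like the 8-message default described above) shows up to the sender.

    /* Sketch: System V message-queue usage (msgget/msgsnd/msgrcv). */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct req { long mtype; char body[128]; };

    int main(void)
    {
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        if (qid < 0) { perror("msgget"); return 1; }

        struct req r = { 1, "hello" };
        /* With IPC_NOWAIT, a full queue (small kernel depth limit) returns
         * EAGAIN instead of blocking the sender. */
        if (msgsnd(qid, &r, sizeof r.body, IPC_NOWAIT) < 0 && errno == EAGAIN)
            fprintf(stderr, "queue full - kernel limit reached\n");

        if (msgrcv(qid, &r, sizeof r.body, 0, IPC_NOWAIT) >= 0)
            printf("got: %s\n", r.body);

        msgctl(qid, IPC_RMID, NULL);   /* remove the queue */
        return 0;
    }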

  • by Anonymous Coward
    You know, seeing benchmark results on this site really makes me laugh. No matter what, it seems like people on this site think that Linux is DESTINED to win every benchmark in the known universe, and when it doesn't, the test must be flawed, skewed, or paid off in MS's favor.

    My personal favorite was the Mindcraft tests, simply because in the end Mindcraft PROVED that it was a Linux problem. And they are still blasted for that. Thankfully Red Hat did something about it and is finally getting the benchmarks Linux should have got in the first place, but that would never have happened if they had done what 99% of the Linux community did and blamed Mindcraft.

    Personally, going into a lull and believing that a particular piece of software is untouchable is a disaster waiting to happen, because eventually it's going to be overtaken by a competitor unless something is done about it.

    -Slashdot: News for Linux. Nothing Else Matters.
  • The server was a dual-proc machine

    No it wasn't.

    It was a dual-CPU capable machine (as all E60s are) but only contained one CPU, hence it wasn't a multi-threaded stack issue.

  • Did you notice they have instructions on how to install the program under Linux that are available in Microsoft Word format?

    Check this out.

    It's a SELF EXTRACTING WINDOWS BINARY ARCHIVE.

    Fortunately, it's a self extracting windows binary ZIP archive, so I used "unzip" to uncompress it.

    But then I tried to open it...

    Makes AbiWord crash.

    mswordview says "this is an unsupported word 7 doc, sorry
    this converter is solely for word8 at the moment
    "

    which I found weirdly funny.

    I finally got it opened with StarOffice 5.2, but gee.. what a pain.

    How about plain text or HTML, guys!

    Also, I couldn't get to their license agreement link.
  • This is the saddest case of completely missing the joke I've ever seen on /.
  • It's just interesting that the SpecWEB tests showed Linux to be way ahead, yet ZDNet's tests (I still think ZDNet is in M$'s pocket) show Linux to be behind. Who do you trust? Hmm?
  • >Basically, Linux people want Linux to be able to >do everything that Windows can.

    so far, so good. But then:

    >They want it to be a robust server operating
    >system. They want it to be an easy-to-use client >operating system. They want it to run everything.

    So which one of these is Windows supposed to be able to do? :()
  • Guys, this *ISN'T* a conspiracy. As configured, the 6.1 box reached some sort of bottleneck. THIS ISN'T FUD, for crying out loud. Sure, tests don't mean much, but it failed this one.

    Do you guys/gals know how *BAD* we as a community look when every time we're beaten on a test, we cry FOUL??

    There will end up being a reason as to why this happened, but for crying out loud guys..
  • Security: Again, tests have proved that Windows is far more secure. I even remember one being posted here on Slashdot some time ago (btw congrats to the Slashdot editors for posting this one - until then I had thought that Slashdot was a very biased site)

    NT and security are two things that don't match. http://www.securityfocus.com/vdb/stats.html [securityfocus.com]

  • by backtick ( 2376 )
    WTF is the idea of using RH 6.1? 6.2's been out for how long? And the newer kernels from the 'updates' tree, like 2.2.16?

    Sheesh.
  • Or could this possibly be a downside of Redhat... What? You mean Linux can sometimes have downsides too? Hmm, who would have thought...

    I've never argued that Linux was perfect, but the nice thing about it is that when there's a deficiency in Linux, it can get fixed rather quickly. I guess a problem could get fixed rather quickly in Windows, too, but one would have to wait for Microsoft to A) admit the problem was there and B) put out a free fix for it. With Linux, odds are that someone will come out with a fix very quickly.

    If memory serves, Linux had a patch for teardrop in 48 hours or something like that, and it took Microsoft a couple of months. That seems awfully long on Microsoft's end, but that's what I heard.
  • WindowsNT, that has been a multitasking _multithreaded_ operating system for years and years

    Yes, but the basic OS is still only single user, unless you're prepared to spend bucketloads extra on Terminal Server or Citrix stuff.

  • PC Mag's guilty of using an OLD version of an OS (hey, they've shown themselves to be that clueless in the past- why not continue the trend? :-)

    Anyhow, the SPECweb figures are due to a machine that appears to have been running a pre-2.4 kernel, the Red Hat Rawhide tree, and a nifty little high-performance web server called TUX that they GPLed, which seems to outperform most of the stuff out there. Tidbit about TUX: it's a kernel extension, not unlike the kernel NFS server (knfsd).
  • Just to clear things up, you're probably referring to Windows 3.1 when thinking about the 'single user, non-multitasking OS'. ENTIRELY DIFFERENT code base here; NT was built from the ground up (at least the important parts) separately from the Windows 3.1/95/98/ME family.

    --
  • Ahhhhh!!!!!!!!

    Everyone is looking at this the wrong way. Why is everyone so concerned about speed? Are you aware that speed is not the only issue on computers? Gosh, the fact that Microsoft may have done something right somewhere is not that hard to believe. Why does everyone look at speed so much? Things like this make people forget why they chose Linux in the first place. That reason is freedom. That's the advantage over other systems.

    Having this freedom _usually_ produces better code, but it doesn't always. So what? With our freedom, we don't have to worry if RedHat puts out a better/worse system than Microsoft. If RedHat/Linux sucks, we can hire someone to modify it so it doesn't suck so much. And, if those modifications are worthwhile enough, we can sell them ourselves.

    The reasons most people choose Linux are freedom, openness, freedom from being vendor-bound, and customizability. Because of these things, we usually have scalability, performance, and security as well. But those are not the important issues. The fact is, you have the freedom to do things with Linux that you could _never_ dream of doing with Windows, simply for legal reasons. This benchmark may be skewed. It may not. It may even be skewed in favor of Linux. But the fact is, I have my freedom, they do not. And my freedom is not something I take lightly.
  • Actually, people in the commercial world DO choose Linux for freedom. Where I work we use it because we have very weird requirements (which are constantly changing), so we have to be able to fully customize anything. The benefits of freedom are not purely philosophical. It gives lower costs because of vendor independence, and gives an organization flexibility as far as deployment is concerned. For example, you don't have to keep track of licensing requirements (just keeping track - not including paying the costs - is a major overhead for an organization). Our company uses Linux for freedom, because freedom is very practical.
  • > Why would you link to that? According to the very first table, Linux (aggr.) has 147 vulnerabilities over the four years listed, while NT has 146. Granted, that doesn't say much for Microsoft, but Linux is not a secure OS (most distros are as bad as Microsoft in enabling useless, potentially vulnerable services in the default install). Linux can be MADE to be secure, but then so can NT, so there's really no point to be made there.

    Scan down a ways, you'll notice that for 2000, the top two are Windows 2000 and Windows NT. for 1999, the top 12 are all Windows (NT, 98, 95, IE, etc). Of particular note is that Windows doesn't show up at all in the 97 & 98 lists. Somehow I don't think it suddenly broke. I suspect it wasn't being reported on bugtraq. Just because it isn't reported doesn't mean it's not vulnerable.

    On your second paragraph, I agree.
  • I wonder if it has anything to do with it running DB2; all of the other systems that were benchmarked were running Oracle 8.x. And the performance is so vastly different for a little quad-processor Intel machine that I find it hard to believe it spec'd higher than any UN*X, even an Alpha cluster, Sun's Starfire, HP's V class, and IBM's own RS/6000. If nothing else, the sheer redundancy of the other setups would have me choose them over the Intel setup on W2K.

    And I'd also trust BEA Systems' Tuxedo as a TP monitor more than I would M$'s COM+. Let's just say I'd rather use something battle-tested when my job is on the line. (Yes, I do use Linux at work, just not in my production-critical back-end database.)
  • businesses aren't into the upgrade every week (or even every 4 months) paradigm

    So what do they make of better-than-quarterly "Service Packs", some of which break their systems, and the lack of which leaves "rape me" signs up on each network interface?

    Betcha the results are miles apart if done with, say, Mandrake 7.1 - and another quantum leap if you plug a 2.4.0pre kernel in. In short, by the end of the year, nothing Microsoft does will bring those tail-lights any closer.
  • Here is another result posted on zdnet

    The gentle swishing noise of reality vanishing out the door.

    throwing up [...] is cake on IIS

    Amazing what a bit of editing can do. (-: Sorry... back to the plot...

    throwing up a simple vb or c++ COM object for scalability is cake on IIS

    Whereas on Zope or PHP, the better design makes such kludges pointless?
  • >PS Did I tell you that Microsoft will release an
    >operating system in the next 6 months that has no
    >bugs at all, is as fast as hell, and has a 500kB
    >footprint! So much for your Linux!

    Nonsense! That Microsoft Linux version was an April Fools' joke!

  • You obviously don't use or haven't seen a Caldera eDesktop system boot. It boots like the HP workstations boot, with a line of text describing the process and a status:
    "[ ok ]", "[ wait ]", or "[ fail ]"
    all wrapped up in a pretty GUI. HP did it in text but Caldera's would NOT scare any MS Windows user.
    The login screen is pretty straightforward, though many Luddites.... I mean Window-ites wouldn't understand what they were logging in for.
    IMHO

    Funny though that at my July 4th party, two friends who recently bought computers said they paid $1500 for email and web browsing. They told me to shut up about the $99 i-opener I had showing photos of past events.....
  • As I am sure you are aware, Linux refers to just the Linux kernel, which by itself isn't of much use to the majority of users.

    Yeah, it's supposed to be a POSIX compliant system. Maybe we should call it X/Open Linux or something ? (-;

    Remember, without a great visionary such as RMS as our leader, there wouldn't be any Free (as in speech) Software and the world would be a much worse place.

    We'd have BSD and its associated license and variants (like the Artistic License) with or without the "free" software foundation.

  • I would be very surprised if either OS had the lead in all aspects of web serving. If they're running on the same hardware, they're likely going to have similar performance - both systems have been developed by a lot of intelligent people, and are sufficiently advanced now that it's unlikely that they're just throwing performance away left, right and centre on stupid stuff. The areas where they perform well will probably be different due to design decisions they've made, but overall I'd be surprised to see a huge discrepancy between the two. Which one you want will always depend on what you want to do with them.
  • It's behind because you are way too used to online media. PC Magazine and their kin have a deadline for an issue several *months* before it is published. Paper is a much slower, more drawn-out process.
  • Which is why Apache pre-spawns processes. If I recall the defaults correctly, Apache tries to keep a minimum of 5 and a maximum of 10 idle worker processes at all times. (If it falls below the minimum, it spawns 1 process in the first second, 2 in the next second, 4 in the next, etc. to avoid swamping the machine with process creation overhead.) Also, on Linux, forking new processes uses a copy-on-write scheme which significantly reduces the overhead involved.

    Context switches between processes take longer than switching between threads, of course, but the difference is far less under *nixes than under Windows. Interestingly enough, Win2K has much better process-switching optimization than previous versions and MS is now talking about (may have even released - I don't follow them that closely) a version of IIS that runs multiprocess instead of multithreaded to improve stability.

    But yes, you're right - none of this does anything about the TCP/IP stack.
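
    A minimal sketch of the pre-forking model described in the parent comment (not Apache's actual code; the port, worker count, and canned response are made up for the example): the parent opens the listening socket once, forks a small pool of workers, and each worker blocks in accept() on the shared socket, so no process is created per request and fork()'s copy-on-write keeps pool startup cheap.

    /* Sketch: pre-forking accept loop (illustrative, error handling trimmed). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WORKERS 5

    static void worker(int lfd)
    {
        for (;;) {
            int cfd = accept(lfd, NULL, NULL);   /* block on the shared socket */
            if (cfd < 0) continue;
            const char *msg = "HTTP/1.0 200 OK\r\n\r\nhello\n";
            write(cfd, msg, strlen(msg));
            close(cfd);
        }
    }

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { 0 };
        a.sin_family = AF_INET;
        a.sin_port = htons(8080);                /* arbitrary example port */
        bind(lfd, (struct sockaddr *)&a, sizeof a);
        listen(lfd, 128);

        for (int i = 0; i < WORKERS; i++)        /* build the worker pool once */
            if (fork() == 0) { worker(lfd); _exit(0); }

        for (;;) wait(NULL);                     /* parent just reaps children */
    }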

  • It's also worth noting that until June 30th, Microsoft Windows 2000 running SQL Server 2000 on a Compaq machine had taken the price/performance lead AND the sheer performance lead. Alas, I haven't read TPC's rules yet, but their disqualified numbers are here [tpc.org].

    Yes, I know... I found this out on the Register.

    But it looks like IBM's toasted those disqualified numbers anyhow... cool!
  • Well, that explains the relative levels of stability and security of the two very nicely. 20 years (actually more) of proven technology == good, reinventing the wheel based on DOS == bad. Thanks for summing up :)

  • ...Linux that barely supported threads at all until not too long ago.

    In general the Linux model is to make forking a new process extremely cheap, so that you can just use new processes rather than threads. Threading hasn't been there since the beginning because there hasn't been as much of a need for it.

  • The point is not "stability above all else", the point is that in OS design as in other things, you can either learn from the mistakes of the past or repeat them yourself. There's no reason to sacrifice stability just for a snazzy user interface, and there's no reason to sacrifice security just to get singing and dancing attachments in the mail. That's why 20+ years of design and industrial usage of an OS (if you consider all of Unix as one OS, which is a vast simplification) is a good thing, not a bad thing.

  • Here is another result posted on zdnet, the test was by doculabs.

    http://www.zdnet.com/eweek/stories/general/0,11011,2290989,00.html [zdnet.com]

    It basically shows that a C++-built COM+ object running on Win2K smoked every other platform (hard to believe, I know..). Doesn't make a whole lot of sense, but this seems to be coming from a good source.

    I think MS has been doing a good job putting up some competition in the web server market. Apache is nice, but it's so simple to get ASP up and running on IIS.. (granted, PHP on Apache isn't bad either but throwing up a simple vb or c++ COM object for scalability is cake on IIS).

  • Many questions pop up for this test...

    How much of the test relied on static pages, and how much on dynamic pages?

    There is a Linux kernel module to speed up serving static pages. A static page is just a remote file copy, and there ends up being additional overhead when a user-space application (the web server) does the work. The same module passes dynamic pages to the server.

    Linux, however, does a decent job on dynamic pages.

    If your website is graphics-heavy then your performance under Linux should be less than NT, but if you have a lot of CGI, PHP, etc. type content you should perform better than NT.

    The number of users (at once)... Can Linux or NT be trusted in such a high-load environment? I'd think if you get 20 or more hits at a given moment you'd want to consider Solaris or another high-end system.

    To make it clear... It should take about 2 seconds (at worst) to get a page out. 20 people at once. So say half that for a normal load.

    10 for every 2 seconds. 5 per second.
    5 * 60 = 300
    300 pages a minute.
    18,000 pages an hour
    18,000 for 8 hours a day = 144,000
    [8 hours instead of 24 because people sleep]

    That's a lot of traffic...

    My math may be a bit off, but even with a few mistakes and bad assumptions you are dealing with a heavy load when you expect a system to serve 20 pages at any given moment.
    If you're dealing with an odd burst, the system should be able to handle it given no new traffic.
  • If you're going to base your decision on server software on how many millions of transactions it can make per second then perhaps you should try management.
  • Every E-series NetServer I've used (45s and 50s) drops to a crawl once more than 500MB of RAM is added. Up to 500MB it's snappy, but the BIOS, and presumably the caching (or lack thereof), trigger the infamous large-memory problem under Linux. I have not yet tried hurling a gig at my LH4 beastie...
  • I believe that the root of this problem lies in the I/O models supported by Linux. Basically, we've got the following:

    1) Blocking I/O (used with threads)
    2) Nonblocking with select/poll/something else
    3) Crappy POSIX aio_* functions (does Linux even support these?)

    Windows NT has blocking and nonblocking of course, and has what they call asynchronous sockets that work based on a message queue, but it also has what are called I/O completion ports, which use overlapped I/O. I'm not an expert on these models (yet) but they are the de facto way on NT to support thousands of concurrent connections. It uses a mixture of threads and asynchronous operations (not message-queue based this time) so that, say, 64 clients are handled by one thread. As opposed to Apache fork()ing whenever a new connection comes in. Yuck.

    I've searched for info on making a similar I/O model on linux and have come up with a few references to IOCP on the linux kernel mailing list, but it doesn't seem to have gone anywhere.

    If someone could share more information... please do so.

    khaladan
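
    For contrast with the I/O completion port model described above, here is a minimal sketch of option 2 from the list (nonblocking sockets multiplexed with select()), which was the usual single-process way to juggle many connections on Linux 2.2. The port and the trivial echo behaviour are made up for the example, and error handling is trimmed.

    /* Sketch: select()-based multiplexing of one listener plus its clients. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { 0 };
        a.sin_family = AF_INET;
        a.sin_port = htons(8080);                 /* arbitrary example port */
        bind(lfd, (struct sockaddr *)&a, sizeof a);
        listen(lfd, 64);

        fd_set all;
        FD_ZERO(&all);
        FD_SET(lfd, &all);
        int maxfd = lfd;

        for (;;) {
            fd_set rd = all;                      /* select() modifies its set */
            if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0) continue;
            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &rd)) continue;
                if (fd == lfd) {                  /* new connection */
                    int cfd = accept(lfd, NULL, NULL);
                    if (cfd < 0) continue;
                    fcntl(cfd, F_SETFL, O_NONBLOCK);
                    FD_SET(cfd, &all);
                    if (cfd > maxfd) maxfd = cfd;
                } else {                          /* client data or close */
                    char buf[512];
                    ssize_t n = read(fd, buf, sizeof buf);
                    if (n <= 0) { close(fd); FD_CLR(fd, &all); }
                    else write(fd, buf, n);       /* trivial echo back */
                }
            }
        }
    }
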
  • Don't know, but the utilities still have the user interface of the Berkeley Unix command line code.


    ...phil
  • There could also be issues with VM management here as well. It's well known that the Linux virtual memory manager ain't the best. The BSD ones seem to perform the best, but Solaris 8's new algorithm is pretty impressive too.

  • Is this yet another case where benchmarks have been skewed severely to show a deficiency that doesn't exist? Or is this another area where Linux needs improvement?


    We'll never really know, but let's have a puerile 400-message discussion* while we don't find out!



    * In a very loose sense of the word

  • It seems hard to me to believe that Windows, that wasn't even really a multiuser multitasking OS until not too long ago, is now smoking the hell out of Linux at what the Unix community has been doing for years...

    That'll be because your mind is shut. I find it easy to believe that WindowsNT, that has been a multitasking _multithreaded_ operating system for years and years is now smoking the hell out of Linux that barely supported threads at all until not too long ago.

  • I trust the little itty tests that I conduct on my own, in load situations for my particular situation.
  • I've had a read of the article, and I've read the serverbench page... Now, I may be being a little dense here; might have missed a hyperlink which would have explained it, but:

    What sort of client/server are they testing? HTTP? SMB? FTP? SMTP? POP3? I can't see anything which specifies this... and until that's known, nobody can comment on the results.
    --
  • The NT kernel itself may be wonderful, but nobody's seen it since the Microsoft Backwards Compatibility Dumptruck unloaded Win32 all over it.

    This isn't meant to be a troll. Windows NT would be a far better platform if they'd just drop the "Windows is part of the core OS" part of it. Put it in user space completely.

    (In other words, I agree with you, I'm just expounding.)

    --Joe
    --
  • The answer is simple. Many of these tests are just marketing efforts by one company or the other (mostly MSFT), and so the reason for the 'tests' is to attack a competitor or THE current competitor. Today that is Linux. If the Linux kernel v2.4 is really THAT good and I were MSFT (I'm not), I'd get as many 'benchmarks' out now while the 2.2 kernel is all over the place. They are going to have to keep quiet or lie once the v2.4 kernel is shipping. I've seen this happen with LanServer 5 years ago, and only once was someone dumb enough to compare it with MSFT and NOVELL. If I remember correctly, WarpServer outperformed the other two with 1 CPU while the others were running 2 CPUs. It was only 'benchmarked' that one time, and IBM's marketing department is run by Beavis and Butthead types.

    Anyway, it comes down to marketing and Linux IS the competition for the Microsoft Marketing Company.... ;/

    IMHO
  • Just out of curiosity: Did anyone else get the impression that the RedHat system was configured to handle the SCSI adapter using a loadable module? Is there a performance disadvantage to running your SCSI drives using the loadable module as opposed to having the driver resident in the kernel? I would have thought that they'd rebuild the kernel and include the driver in the kernel.

    Also, why not use a Pentium optimized distribution, like Mandrake, instead of the generic 386 oriented RedHat? All these magazine testers seem to do this; apparently when they think of Linux the gears in their heads turn only once and they come up with RedHat (no offense, RH, I use your stuff and am happy with it).
    --

  • (Note: I poked around a bit on the ServerBench site, but was not able to verify the server software used under Linux. I assume it was Apache, but it may have been something else.)

    Although Apache is single-threaded (prior to 2.0, when multithreading becomes an option), each request is run in a separate process. If one request stalls, it will still give up its CPU time to other tasks just as nicely as it would under a multithreaded Windows server.

  • Sorry.. but NT has been multitasking and multiuser from day 1. I mean this from the kernel point of view, of course. The overall implementation, and the functionality available to users, hardly qualifies it as multiuser.. but the kernel is just as aware of multiple users and multiple tasks, if not *MORE* aware, than a Unix kernel. The NT kernel was a good thing from the start. It is MS's crappy use of it to build an OS that is, well, crappy.
  • Heh, reminds me of a story I heard about a sys admin of a VAX cluster (iirc). He was lamenting to the DEC tech. support guys that the performance was great until there were about 33 users logged in, then everything would go to hell. He suggested to the DEC guys (in jest) that they go tell the software writers to grep through the code and when they see the number 33, change it to 50. :)
  • Tweaking is.
    If for example Matrox (or anyone else) cheats in their video-drivers, to get better frame rates in Quake,
    this could lead to instability, graphical glitches etc.
  • Why are you making the assumption that ZDNet is a more serious publication than Linux Journal? Why do you even make the assumption that ZDNet is less biased? ZDNet is a huge publication company that makes about 95% of its computer-magazine income from selling Windows mags. PC Magazine (is it the largest?) is around 99% Windows and 1% other. It is obviously much less biased than Linux Today. Some of their articles are created with the sole purpose of angering Slashdot readers, thus getting plenty of readers (because you just have to read the garbage), and generating income.
  • If post 34 [slashdot.org] is genuine then this whole thing is worthless. There are IO limitations with Linux but does this highlight them? Who knows.
  • This is what I want to know too. There is no mention of what is being served or by what software. If ServerBench is both the client and the server then they need to show why Linux is the problem and not the Linux port of ServerBench.

    All looks a bit dodgy to me.

  • >Of course, manufacturers with no morals (ATI, Megabyte, Intel) can optimize for these types of benchmarks, and thus seem faster than they are

    No matter what benchmarks exist, _most_ vendors will to some degree optimize to those benchmarks. This is as true of Quake3 or your COM+ example or SPEC or AIM as it is of WinBench3D. Interestingly, the easier it is for the vendors to look at and understand how a benchmark works, the more specific their optimizations will be, and the least subvertible benchmark would be one where the vendors have _no idea_ what it'll do until it's run. I know that open-sourcers aren't going to like that idea, but there it is.
  • >everyone was really quite shocked at the figures coming out, and went to some trouble -- including talking to Red Hat -- to attempt to eliminate configuration issues

    Interesting. Everyone was on Mindcraft's case for _not_ involving Linux vendors in tuning efforts. Now they go out of their way to do so, and everyone's saying it looks suspicious. While everyone's talking about rigged benchmarks, how about analysis that ends up with one side accused of cheating no matter what they do?
  • From the article
    "All 48 clients were connected to a Hewlett-Packard ProCurve 8000M switch using 100Mbit/s full-duplex connections."
    This certainly is not a real-world situation.
    "Our test server was a dual-CPU capable Hewlett-Packard NetServer E60 fitted with a 550MHz Pentium III processor and 768MB of RAM"
    Not too bad. This is the sort of box that I'd buy if I were stacking a rack of these things. However, the fact that this was an HP system which was not pre-loaded with Linux (and thus not pre-configured for performance) is typical but annoying. I'd love to see benchmarks of the sort that really matter. Also, why two NICs? (Dual NICs are useless in most production environments unless you have a back-end data network, which is a very different load picture than these folks were seeing.)

    Some food for thought: what is a transaction? Their FAQ doesn't seem to cover exactly what it is. If all they're testing is static page serving, it's about as useful a test as seeing how fast it can delete files....

  • I am a bit confused about your WDM ideas. All of Win2k's drivers are WDM. Miniports use a WDM-compliant port driver with some new miniport calls. Using "native" NT drivers is what Microsoft calls legacy drivers. The presence of one of these drivers breaks power management on the machine, because the legacy driver cannot answer the request to change system power state.
    The HAL (much smaller in Win2k), pci, disk, class, scsiport, ndis, etc. are all WDM compliant. WDM is the native model for Win2k. The comparison to Win32 is not correct. While a Win32 call is really just a wrapper for the native NT call, WDM is usually not a wrapper for legacy calls. It is often the opposite; most legacy functions are just macros that call the new WDM functions.
    The main points of WDM on Win2k are cross-platform design (and binary compatibility in many cases) and bringing power management and plug-and-play to NT. PM and PnP are considered very important at MS; they are not just political marketing ideas. Although this new model helps the new developer making products for 9x and Win2k, it hurts current NT developers by forcing them to rewrite their drivers. In many ways, WDM actually slowed the development of drivers for Win2k. Finding classes on a moving spec was a difficult task.
    My main issue with Win2k not being tuned for performance involves the use of so many general drivers to handle whole classes of devices, but none of them well. IMHO MS would have been better off supporting I2O like almost every other OS does. In order to get the best performance, any storage developer will tell you that you have to write a full port driver to replace scsiport/miniport and disk/class.
    NDIS is similar. In fact, it is worse, because MS will not certify a non-NDIS network driver. Here, the developer is forced into a slow model. NDIS 5 addressed some issues (removal of some locks, some offload support) but left many others.
    It is this push towards a miniport model that I find hurts performance. Also, the messaging scheme for drivers requires system calls between each layer instead of using a direct function-pointer interface. Almost all messages must go all the way through the stack, even when most drivers just blindly pass the call down (see the sketch below). While this allows any driver or filter on the stack to change things, it slows performance. Many messages must be handled "on the way up" the driver stack. Whole stacks have to wait on events to be triggered and callbacks called. Microsoft gets extendability through the support of upper and lower filter drivers at the expense of performance.
    A faster model would reduce the layers in the device stack, use direct function calls rather than system calls for message passing, and reduce the need for callbacks and event waiting.
    As for reading NT 3.0 documentation... well, that's great for discussing NT 3.0. Win2k is their current model, and it has changed in ways that are more than just wrapper functions and cosmetic changes.
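
    To illustrate the "most drivers just blindly pass the call down" point, here is a sketch of a WDM pass-through dispatch routine that forwards an IRP to the next-lower driver. It is an example, not production code: the DEVICE_EXTENSION layout is invented for the sketch, and a real driver would register this routine and record the lower device object in its AddDevice routine.

    /* Sketch: WDM pass-through dispatch (illustrative only). */
    #include <wdm.h>

    typedef struct _DEVICE_EXTENSION {          /* example-only layout */
        PDEVICE_OBJECT LowerDeviceObject;       /* saved when the device was attached */
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    NTSTATUS DispatchPassThrough(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)DeviceObject->DeviceExtension;

        /* Don't touch the request: reuse our I/O stack location and hand
         * the IRP straight to the next driver in the stack.  Every layer
         * like this one still costs a traversal, which is the overhead the
         * comment above complains about. */
        IoSkipCurrentIrpStackLocation(Irp);
        return IoCallDriver(ext->LowerDeviceObject, Irp);
    }
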
  • NT "runs all its servers in kernel space"? Do you mean drivers? Services? Services are not run in kernel space, althougth they can be set with a high priority. All drivers are run in kernel space, though.
    "The kernel has a lot of design concessions that faccilitate a really high I/O rate." Really? Have you looked at the code for network and storage? I have written network and storage drivers for NT4/Win2k and is not designed to be fast. Check out the DDK. Both storage and network use a miniport model (SCSI Miniport and NDIS Miniport) with a port driver doing much of the work. To make matters worse, Win2k use WDM for its drivers. WDM tends to add an additional driver object to the layered model. Both miniport and WDM are designed to be very general and take control away from the driver developer. A call to read a few bytes from the disk goes through so many layers. First, the file system drivers, then class.sys, then disk.sys, then scsiport.sys, then vendorscsiminiport.sys, then hardware. There can also be any number of filter drivers in the mix. WDM allows upper and lower filters for each FDO (Functional Device Object). We got a nice performance boost by not using the SCSIMiniport/Class driver interface. Win2k is not designed to be fast as much as extendable and general.
    Just my $.02...
  • TPC benchmarks require the system to be available to be purchased within a certain time frame (6 months?).

    The vendors usually use the latest possible software (unless it has performance issues!).
  • Though the machine was single-proc, the threading could still have something to do with it. If one of the threads was stalling (a transaction was taking too long, it was blocking on some kernel call, etc.), the other threads would be scheduled in its place, and performance wouldn't be so bad. However, in a single-threaded design, when one transaction stalls, the whole thing stalls, and thus performance suffers.
  • For those who are complaining about them using an "old" version of Linux, get over it. The 2.4 kernel is not only still experimental, it is not in any distro yet. As for the system software being at 6.1, again it doesn't really matter, because RedHat 6.2 (the latest from the "Linux company," at least as far as the mainstream is concerned) only contains a few tweaks, and is really not much of an upgrade.
  • Okay, now you're into debatable territory here. From my point of view, NT is a microkernel. Yes, it runs most services in user space, and the servers in NT are conglomerated into the executive. However, they are still separate entities, and communicate by message passing. In my opinion, that constitutes a microkernel.
    Having high-performance I/O is a good thing. I think you misunderstand me; I really do like NT, in fact I use it about as often as I use BeOS. I was responding to your comment that NT was designed to be extendible over being high-performance. As for my backing up that NT is tuned more for performance than flexibility, I kind of explained that in the next paragraph (running services in kernel mode, DirectX in the HAL, etc.). Even if WDM is the "official" driver API for NT, it is still not the native API. Win32 is the official API for NT, however it is not the native API either. If you really want to see your applications perform as well as they can, you'd use standard NT drivers and the NT Native API. Neither is sanctioned by MS, but this is a technical discussion, not a political one. The truth is that MS had to heavily endorse WDM because it needed drivers for NT. Even if it isn't the fastest way to do it on NT, that's the one they had to support. And I think the whole point of WDM is that drivers should be cross-compatible, except for video drivers (which aren't written to WDM). I think integrating DirectX into the HAL is a good idea. However, it is quite a complex system for something in the HAL, and it does introduce bugs and make the system less "clean" from an academic point of view. I was using this point to support my assertion that NT is more tuned for performance than generality. Instead of adding a more general HAL bypass system, they chose to simply allow DirectX to pass through. NT does run a lot of services in kernel mode. In other microkernel OSs, there is no executive. Stuff that is in the NT executive (like the I/O manager, etc.) traditionally runs in user mode. However, by running these in kernel mode, NT gains a performance increase. Also, that is one reason why NT blurs the lines between a monolithic and micro kernel. And I'm pretty sure NT is a microkernel. NT was designed as a microkernel, and its subsystems communicate by passing messages. (If it walks and quacks like a microkernel...) However, the design concessions (for performance) that MS made have made it much more of the "macrokernel" that MS holds it is today. However, if you read the documentation from NT 3.0 or 3.1, you'll see that it is billed as a microkernel.
    NT was designed with a driver architecture. WDM is not it. Even if all NT driver development has shifted over to WDM, it is still not the optimal driver architecture for NT. The architecture that was originally designed to support the original driver API is not easily changed, thus WDM is sort of a bolt-on API for both Win98 and NT. Same thing with DirectX. DirectX is also a bolt-on API to NT, and thus will never be as good as an API around which the NT architecture was designed. It's probably less of an issue for DirectX, because it really just shoves the OS out of the way, but a non-native driver API can cause some performance problems. Maybe the new WDM model is MS's move to make NT a little more general. However, the NT 3.x architecture is still mostly present in Windows 2000. That architecture was designed mostly for performance, thus any discussion of the Windows 2K architecture should still make sense if you talk about the NT 3.x docs. Also, Win32 calls aren't wrappers. Win32 applications make Win32 calls to a user-space Win32 server. The server then communicates with the kernel and kernel-level subsystems (via the NT native API and message passing) to implement these calls.
  • I was kind of kidding about that ;) I was talking about stuff like what Megabyte did with their Voodoo2 card. They wrote drivers that accelerated part of the Direct3D geometry pipeline. However, no games at the time actually used the geometry pipeline, except (curiously!) WinBench3D! The cheats don't really cause stability faults. More often, they are sections in the driver that detect certain types of benchmark code, and then optimize for it. Or they do what NEC did with PowerVR. At some point, the PowerVR drivers simply detected WinBench and returned artificial numbers! I was kind of pointing out that it is hard to cheat on benchmarks like Quake.
  • It's supposed to read

    throwing up [h]is cake on IIS
  • In that case, check out the last set of Win2K results [tpc.org] which Compaq/MS have withdrawn, presumably because either a) they've been overshadowed or b) they want to put something even faster up. Effectively, at this moment in time, Win2K has the top three performance slots and the top ten price/performance slots.
    --
    Cheers
  • I find this rather odd given this [slashdot.org] story from a couple of days ago. Maybe the SPEC group knows a thing or two more about setting up Linux than the PC Magazine guys do. Or maybe the PC Magazine guys know a thing or two more about setting up Windows. Or maybe a bit of both -- PC Magazine's traditionally been a Windows shop so I'd expect their Windows know-how to be much more advanced. I don't know anything about the SPEC group, though.
  • You're making a couple of leaps in your reasoning here.

    The Unix community has been doing the "multiuser, multitasking thing" for many years. And for some of those years, some developers have actively been seeking the best performance possible. (At other times, especially earlier in Unix's history, people were working toward the "small is beautiful" goal more than toward developing high-performance environments.) Linux has been around for a few of those years, but since it is a reimplementation it might not be on par with every performance tweak of every Unix ever developed.

    You can't say that because Unix is a mature platform and lots of work has been done to make various versions of Unix very efficient, Linux, a reimplementation of Unix, must therefore be very efficient as well.
  • I take it you did not notice this on the bottom:

    Microsoft also used different Web server and database server software [and hardware] than other vendors, so results for Microsoft are not comparable to the other results.

    Hrmm... that casts a different light on things, IMHO.
  • This just went up on the TPC website Monday, there is a monster leader in transaction processing price/performance and that is:

    IBM Netfinity with Intel Xeon processors
    IBM DB2
    and Windows 2000.

    You will not believe this unless you see it!

    [tpc.org]
    Read'em and weep
  • Oh man... this is so damn funny. Two days ago every single zealot was falling over the others crying that Win2k is three times slower than Linux; today the same 'advocates' can't stop talking and whining about how another particular benchmark is CRAP because it doesn't state what they WANT it to state.

    Am I allowed to laugh about this? :)

    If we just give the advocates/zealots/other craptalking people a separate forum, we can then go on with talking about Stuff That Matters(tm) over HERE. Thank you.
    --

  • Anyone else notice that NT 4 performed better than Win2k? Hmmm, maybe Microsoft will start learning the reason things like vi have been around since the seventies in every *NIX distro: they work!
    Newer does not always mean better, but it does mean that you lack all the special features
  • View Here [slashdot.org]
  • Where the hell does this drop at the number 20 come from? It's like, good...good...good, DOH! 20, BOMB OUT! It's not even a power of 2! *VBG*

    This really makes little sense. Also, what about other Linux distros: is RedHat optimized in some way that causes this?

    I can hardly see why Linux would just go apeshit once 20 users is reached. I mean, you'd think you'd see more of a gradual downward progression than the floor falling out.
  • They were serving applications...
  • They said they used Linux 6.1 in a couple of places....... Since it's an odd-numbered version, I guess that means it's the experimental branch. Still, their time-travelling skills are quite impressive.
  • I think a couple of things need to be specifically pointed out here. First off, with this particular benchmark, it's indicated that a single 'client' actually represents several dozen users in real life. So RH 6.1 Pro takes up to 20 'clients' before its score dips; I'd certainly like to know what that equates to in real life myself. Perhaps there's just a small bug in Red Hat that reduces its performance in what appears to be a very synthetic benchmark. Secondly, the testers who performed the benchmarks didn't appear to know what they were doing too well. Can we be sure that they truly had their Linux box customized and tricked out for these tests? No, of course not. Third, don't discount the possibility that these scores may be entirely accurate. Even a broken clock is right twice a day. -Algernon
  • by tzanger ( 1575 ) on Thursday July 06, 2000 @03:48AM (#954666) Homepage

    That is the biggest problem with the fast pace of Linux upgrades, vendors don't have the luxury of 20 billion bug breakers for their code, they have to spend lots of time verifying that their code works against any upgrades.

    And this is why they used Win2000 instead of WinNT4?

    If you're gonna pit the top of one against the middle of the other, I ain't even gonna look at your benchmark. They used Win2k, so (in my mind) they should have used Linux kernel 2.4.1-prewhateveritistoday.

  • by cxreg ( 44671 ) on Thursday July 06, 2000 @07:28AM (#954667) Homepage Journal
    If you find this hard to believe, check out these mind blowing statistics [sexcowairlines.com]. Who would have ever thought?!
  • by be-fan ( 61476 ) on Thursday July 06, 2000 @04:49AM (#954668)
    Blah blah blah, this is Mindcraft all over again. People are used to benchmarking. They're used to not having to hyper-tweak the systems they get in. In fact, it's an industry practice to benchmark systems "out of the box," meaning that even stuff like the Diamond Viper II, which would be a great card with updated drivers, receives a poor review due to the fact that what shipped wasn't up to snuff. If RedHat is attempting to compete in the commercial marketplace, they have to play the game. Don't ship products untweaked.

    Take a look at the default workstation install of RedHat 6.1. Why is sendmail running on my workstation? I don't even use sendmail. Samba? What? The at daemon? I've never used that. cron? Nope, not that either. inetd? I'm not serving anything. NFS? I've never even SEEN an NFS drive, much less used one.

    Now I'm sure there are tweaks that the ZD guys could have done to maximize the performance. Would it increase the TPS that much? Maybe, maybe not. Remember, Win2K is much more multi-threaded than Linux, and tends to stall less on these kinds of things. The point is, if those tweaks exist, they should be part of the default install. It was incredible that until Mandrake, there were no mainstream distros that shipped with hard-drive optimizations on. That's akin to not clicking the DMA button in Control Panel, something that manufacturers get lambasted for in reviews.

    If RedHat is going to make it in the "real world" they have to play the game. Believe it or not, polish counts for a lot. In the high-end business market, and especially in the desktop (home, business) market, lack of polish can be a deal-breaker for an otherwise great product.
  • by be-fan ( 61476 ) on Thursday July 06, 2000 @05:22AM (#954669)
    First, let me say that benchmarks like this are useless. I'm not against benchmarking, mind you, but what I AM against is artificial benchmarks. For example, in testing 3D cards, ZDN still uses an artificial benchmark called WinBench3D. Of course, manufacturers with no morals (ATI, Megabyte, Intel) can optimize for these types of benchmarks, and thus seem faster than they are. However, if you do a real-world benchmark, like say testing the FPS in Quake, your results are actually valid. If a manufacturer cheats so their card runs actual games faster, then that's actually a good thing. What I'd like to see is a real-world server benchmark. Maybe set up a COM+ simulation where actual COM+ applications (for example a database) make actual requests on the server, then measure how many clients the server can handle. Or do something like have the server serve up a database and have scripted clients access the database in a realistic manner. Those are the kinds of benchmarks that really work, but unfortunately they take actual work.

    As for NT beating Linux, remember that NT is designed to run stuff like this. Though NT is a microkernel, it runs all its servers in kernel space (though Linux does as well, I think). Also, the kernel has a lot of design concessions that facilitate a really high I/O rate. They're not so good for doing real-world tasks, because the OS tends to step on its own toes, but if you're testing raw performance, NT usually wins. But these performance enhancements take their toll on stability and the ability to handle high loads. For example, in Windows 2K, DirectX has some interface calls implemented in the hardware abstraction layer, which really speeds up performance, but at the cost of stability.

    That said, Windows NT really IS a decent OS, and some parts are simply better designed than their counterparts in Linux. For the desktop (if you have the RAM), W2K is perfectly stable (because most desktop users reboot at least once every few days) and nicely supports media. Also, Win2K has a good multi-threaded TCP/IP stack that was rewritten from NT4. Despite its faults (ahem, bloat) it does actually have some features that the Linux guys would be wise to look into (ahem, COM). W2K is nowhere near being the end-all be-all of OSs, but neither is Linux. They both have their flaws, and NT has actually improved enough that W2K actually has some uses in the server role! Anything doing small transactions that doesn't need to be particularly stable (for example DNS) would be served well by NT, which responds well to little transactions like this.
  • by be-fan ( 61476 ) on Thursday July 06, 2000 @09:03AM (#954670)
    NT "runs all its servers in kernel space"? Do you mean drivers? Services? Services are not run in
    kernel space, although they can be set with a high priority. All drivers are run in kernel space,
    though.
    >>>>>>>>
    NT is a microkernel operating system. In microkernel OSs, servers are processes that provide system services such as networking, I/O, graphics, RPC, etc. In some cases, servers even provide memory management. In most microkernel OSs, these servers are in user space. However, in NT, they run in kernel mode. It's true that drivers run in kernel space, but so do the subsystems that load the drivers. This is a significant difference from most microkernels, which have servers and large parts of drivers in user space. BeOS, for example, has all servers in user space, and most drivers are loaded by the kernel. IBM's experimental WorkPlace OS, on the other hand, put drivers mostly in user space and even put services such as paging in user space. This tended to have a performance hit, and NT avoids it by running servers in kernel mode, even though that is riskier.

    "The kernel has a lot of design concessions that faccilitate a really high I/O rate." Really? Have you
    looked at the code for network and storage?
    >>>>>>>>
    No, but I have looked at design documents that detail the NT architecture. NT was designed for VERY high performance I/O.

    I have written network and storage drivers for NT4/
    Win2k, and it is not designed to be fast. Check out the DDK. Both storage and network use a miniport
    model (SCSI Miniport and NDIS Miniport) with a port driver doing much of the work. To make
    matters worse, Win2k uses WDM for its drivers. WDM tends to add an additional driver object to
    the layered model. Both miniport and WDM are designed to be very general and take control away
    from the driver developer. A call to read a few bytes from the disk goes through so many layers.
    First, the file system drivers, then class.sys, then disk.sys, then scsiport.sys, then
    vendorscsiminiport.sys, then hardware. There can also be any number of filter drivers in the mix.
    WDM allows upper and lower filters for each FDO (Functional Device Object). We got a nice
    performance boost by not using the SCSIMiniport/Class driver interface. Win2k is not designed to
    be fast as much as extendable and general.
    >>>>>>>>>>
    Win2K is definitely not designed to be extendible and general. While WDM may add a lot of overhead to the driver interface, that is not NT's native driver model. Microsoft added WDM to allow drivers for Win98 to work on NT. Also, you cannot deny that the architecture is tuned more for high performance than generality. A lot of critics of NT complained that the architecture was "academically dirty," meaning that a lot of design decisions resulted in a faster but less clean system. For example, Windows 2K has DirectX calls integrated into the HAL. Very unclean. NT also runs all services in kernel mode. Again, unclean. The NT microkernel globs up a lot of services that should be in the servers, which furthers performance but makes the microkernel less general and less extendible. It runs the windowing system in kernel space! How general and extendible is THAT? NT does have a lot of management overhead, true. But it is also designed for raw performance. If you're not changing anything (i.e. simply streaming data off a disk while not doing anything else) it is really fast.
  • by be-fan ( 61476 ) on Thursday July 06, 2000 @09:13AM (#954671)
    Okay, if you optimize your card (aside from just outright reporting false numbers) so that it runs Quake and Unreal faster, then you just sped up 90% of the games out there that use similar algorithms. Even if you just speed up Quake II, you've sped up all the games based on that engine. Manufacturers can't do that because it requires optimizing the whole driver, which is actually legitimate! Also, real-world benchmarks are harder to fudge. If you do something to a driver that makes Quake run faster, then chances are that driver also makes all the other 3D games run faster. That's even more relevant for serving. I proposed a benchmark where the test emulates real-world conditions, i.e. the script reflects how a person would actually use the system. If vendors do something to optimize for it, then they'd be optimizing real-world usage. That's a good thing.
  • by be-fan ( 61476 ) on Thursday July 06, 2000 @06:10AM (#954672)
    That could also be a problem. Processes take more time to start than threads do. Also, it still doesn't mitigate the fact that the TCP/IP stack is single-threaded, so THAT can stall.
  • by Spasemunki ( 63473 ) on Thursday July 06, 2000 @03:20AM (#954673) Homepage
    I don't think that benchmarking is about deciding which is the 'better' operating system (you know, the one with the big 'S' on its chest). Benchmarking for something like this is about testing several products at the same task, and finding which one is better for that task. In that way, benchmarking is a good step toward accepting that different OS's are good at different things. Saying that Red Hat 6.1 does worse on an application-serving test doesn't mean that it's time for Bill the Gates to dance in his underwear shouting "Victory!"; if RH 6.1 had won, it wouldn't mean that it was time to throw the first (or last) shovelful of dirt on top of Windows. All it means is: Windows might be better at the particular task of serving applications. It's about finding those individual strengths, not culling the herd.
    Or at least, it ought to be.

    "Sweet creeping zombie Jesus!"
  • by Entrope ( 68843 ) on Thursday July 06, 2000 @03:16AM (#954674) Homepage
    Indeed. The ServerBench description page claims it's an "application server" benchmark, but it looks like it's something they implemented themselves. This means that it's almost worthless as a test bench -- it's not very representative of *anything* since server performance varies more with the server program than almost anything else.

    If their protocol is simple enough to make it easy to optimize for different platforms (for example, Win32 vs. Unix), it's almost certainly too simple to make an interesting test. If it's a complex protocol, I suspect they optimized the Win32 code a lot more than the Linux code.
  • by dingbat_hp ( 98241 ) on Thursday July 06, 2000 @04:09AM (#954675) Homepage

    This whole bench test is pretty useless. Ziff wanted an "application" benchmark that was cross-platform and didn't rely on applications. What they actually built was so content-free that it simply tests network and OS performance as far as the TCP/IP stack.

    Not surprisingly, they found that the Win TCP stack is quicker than the (known to be single-threaded) Linux stack. QEFD.

    I'd like to see better benchmarks, but I'd much rather see something for simple CORBA vs. CORBA, or CORBA vs. DCOM. SOAP (the Apache approach of deployable handlers) vs. SOAP (servlets) vs. SOAP (Microsoft's SOAP-on-a-ROPE) would be even more interesting. We're doing something along those lines ourselves - maybe it will be publicly publishable.

    To get the alternative "Useless benchmark shows Linux to be faster than Windows" story go here [linuxtoday.com].

  • by Hrunting ( 2191 ) on Thursday July 06, 2000 @03:53AM (#954676) Homepage
    Why is it always Linux vs. Windows?

    Because that's what Linux advocates trump up. Ever since Linux became 'popular', advocates have been pitting it against the big bad evil Microsoft. Nevermind that until recently, Solaris was just as closed-source and dealt in the same underhanded tricks as Microsoft. Nevermind that they're two completely different types of operating systems aimed at two entirely different classes of people.

    Basically, Linux people want Linux to be able to do everything that Windows can. They want it to be a robust server operating system. They want it to be an easy-to-use client operating system. They want it to run everything. They want to be the monopoly (but a monopoly of choice, not of force). Nevermind that Windows 2000 isn't trumpeted as the OS for everyone and Windows 98 isn't used in high-end server systems (and yet advocates want Linux to do all of these tasks, and rule the hand-held market as well). And so we get tests like this, Win2K vs. Linux, when really what we should be getting is Win2K vs. Solaris (which I'm quite confident would blow Win2K out of the water).

    Does Linux really want to compete at the levels of AIX and Solaris?

    No, they want to compete with Windows. Windows is the enemy. Sound the alarms, and when Windows does something better than Linux, something is seriously wrong with the world (or so they would have you believe). Perhaps what would be a better suite of tests for Linux is one which isn't a comparison test at all, but rather one which looks for deficiencies so that people can start fixing them and quit debating about whether or not a comparison is valid.
  • by ibbey ( 27873 ) on Thursday July 06, 2000 @04:50AM (#954677) Homepage
    Because that's what Linux advocates trump up. Ever since Linux became 'popular', advocates have been pitting it against the big bad evil Microsoft. Never mind that until recently, Solaris was just as closed-source and dealt in the same underhanded tricks as Microsoft. Never mind that they're two completely different types of operating systems aimed at two entirely different classes of people.

    Perhaps a less biased way of saying this is "Because Windows is, arguably, the main competition for Linux. While AIX & Solaris are also viewed as competitors, due to Linux' current weakness in scalability, they are not considered direct competitors."

    Now, that said, I'll respond by saying you're an idiot. Linux & Windows 2k ARE NOT designed for two different types of users. Both are designed for general use, high-end workstations, and low-to-mid-end servers. In particular, in the context of the question, they are designed for EXACTLY the same market.

    As far as AIX & Solaris, they are also the competition. But most people who have the budget to run a high-end Unix server have a reason to spend the money (support, a boss that's an idiot, or a need for specialized capabilities or scalability that Linux & Windows don't offer). Linux is rapidly advancing, & is beginning to address the last two issues (scalability & features), but at present it's hard to directly compare Linux to some of the commercial Unixes. And of course, you again need to consider the context. Since the question was specifically in response to a benchmark comparing Linux to Win2k, why would you even expect AIX or Solaris to be brought up?
  • by Dan Kegel ( 49814 ) on Thursday July 06, 2000 @05:38AM (#954678)
    I have a little writeup on the history of the wake-one fix (and others) at http://www.kegel.com/mindcraft_redux.html [kegel.com]. Looking at Andrea's patch, one important change was:

    diff -u linux/net/ipv4/tcp.c:1.1.1.6
    @@ -1575,7 +1575,7 @@
         add_wait_queue(sk->sleep, &wait);
         for (;;) {
    -        current->state = TASK_INTERRUPTIBLE;
    +        current->state = TASK_INTERRUPTIBLE | TASK_WAKE_ONE;

    Offhand, it looks like that particular change isn't in Red Hat 6.1 or 6.2. I don't know whether this would affect ServerBench performance, though. It's hard to tell without looking at the source.
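    For anyone who hasn't run into the wake-one issue: below is a minimal, hypothetical sketch (my own illustration, not ServerBench's code, which is closed) of the pre-forked accept() pattern this kind of load exercises. On a 2.2-era kernel without wake-one semantics, every child sleeping in accept() is woken for each incoming connection; only one actually gets the connection and the rest go straight back to sleep, which burns scheduler time under heavy load. The port and worker count here are arbitrary.

    /* Pre-forked accept() loop -- a sketch of the pattern behind the
     * "thundering herd". Without wake-one semantics, every child blocked
     * in accept() is woken per incoming connection; only one wins, the
     * others go back to sleep inside accept(). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);            /* arbitrary port */

        if (listener < 0 ||
            bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(listener, 128) < 0) {
            perror("socket/bind/listen");
            return 1;
        }

        for (int i = 0; i < 16; i++) {          /* 16 pre-forked workers */
            if (fork() == 0) {
                for (;;) {
                    int conn = accept(listener, NULL, NULL);
                    if (conn < 0)
                        continue;               /* interrupted or error */
                    write(conn, "hi\n", 3);     /* trivial "service" */
                    close(conn);
                }
            }
        }
        for (;;)
            pause();                            /* parent just waits */
    }

    The TASK_WAKE_ONE flag in the hunk above is intended to change the kernel-side wakeup so that only one such sleeper is woken per event, instead of all of them.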

  • by 1984 ( 56406 ) on Thursday July 06, 2000 @03:40AM (#954679)
    Without going too far into it, I remember discussing a lot of this stuff with the guys doing those tests at the time. Those (fairly) low-down tweaks were attempts to see if the Linux setup was tripping up on something obvious (e.g. trying to auto-negotiate on the NIC) and whether it could be sped up. That was because everyone was really quite shocked at the figures coming out, and went to some trouble -- including talking to Red Hat -- to attempt to eliminate configuration issues and the like, because everyone thought the numbers looked odd. But after a lot of effort, they still looked odd.

    And you don't (or shouldn't) 'root' for any of the platforms you're testing when you benchmark. You go to a reasonable amount of trouble to make sure that you are testing what you think you are (and not some config hiccup that's hamstringing the results). But having done that, you sometimes still get a surprise. That's what happened here.
  • by levendis ( 67993 ) on Thursday July 06, 2000 @03:17AM (#954680) Homepage
    The server was a dual-proc machine. Win2k has a multi-threaded TCP/IP stack; Linux 2.2.x doesn't. That probably accounts for most of the issue right there - at around 24 users, the single-processor limitation of the Linux TCP/IP stack was reached, and the Win2k machine just split the load up.

    Of course, IANALOLT (I am not Alan Cox or Linus Torvalds), but it seems the most likely explanation to me...
  • by tjwhaynes ( 114792 ) on Thursday July 06, 2000 @03:40AM (#954681)

    This just went up on the TPC website Monday: there is a monster leader in transaction processing price/performance, and that is:

    • IBM Netfinity with Intel Xeon processors
    • IBM DB2
    • and Windows 2000.

    You will not believe this unless you see it!

    Yes - but check out the hardware: 32 four-way Pentium Xeons, over a terabyte of disc space, and an obscene amount of RAM. That is not a standard setup, although it was built with standard parts (trust me - I know the team which built it). That is not to say that the DB2 team isn't extremely pleased with this result :-)

    Just because it's running on Windows 2000 does not automatically mean that there might not be better choices for an OS to support this benchmark. It's not even entirely clear to me that Windows NT might not have been faster here, given the benchmarks which MS put out on their own website showing that Windows 2000 does better in limited memory but is worse than NT above 128MB (and these machines had a lot more than that). Remember that DB2 UDB has a shared-nothing architecture, which means that it scales extremely well, and it is additionally capable of using raw devices, so the OS in question may not have a big impact on performance. And DB2 runs on most platforms out there: OS/2, AIX, HP-UX, Solaris, Linux, Windows 9x/NT/2000, SGI, SCO, Dynix, and various 64-bit platforms as well.

    Of course, it would be nice to have some side-by-side benchmarks of DB2 UDB on Windows NT/2000 and DB2 UDB on Linux. There will almost certainly be some benchmarks on Linux sooner or later - since IBM has made Linux available for all its machines, it makes sense to publicise the performance of its flagship DB product on Linux as well.

    Cheers,

    Toby Haynes

    P.S. I work on DB2 UDB development.

  • by Dungeon Dweller ( 134014 ) on Thursday July 06, 2000 @03:19AM (#954682)
    "Each was tested on it's own network, with 2 subnets of 24 servers, the windows network consisted of 48 PII's, whereas the linux network had the added advantage of having a Cray Supercomputer making requests at full charge on the 24th node of the first subnet..."

  • by tilly ( 7530 ) on Thursday July 06, 2000 @05:26AM (#954683)
    I hate people talking how 2.4 will fix everything, 2.2 surely didn't.

    Where did I say that 2.4 will fix everything?

    I said that there is a specific problem, known in 2.2 and one that has turned up before, which is a potential explanation for this bad result.

    There are other known (and fixed) scheduler problems.

    Encountering any combination of these in 2.2 benchmarks is to be expected. Don't make these out to be more or less than indications that 2.2 had some obvious room for improvement.

    I am sure that 2.4 will have more problems. However, many problems that turned up in benchmarking 2.2 have been fixed (precisely because they turned up in benchmarking 2.2), and preliminary benchmarks of 2.4 (e.g. the recent SpecWeb result, where it nearly tripled Windows 2000 on a similar 4-CPU box) indicate this.

    Now will 2.4 be ready for the enterprise, as they like to say? Not really. First of all, until it has been through a few point releases, I would expect some significant bugs. (To be expected in any software.) Aside from that issue, it lacks many manageability tools and a volume manager; more work needs to be done on failover; journaling filesystems are needed; and so on. I have been convinced by Larry McVoy's argument that further work on SMP is not needed; NUMA (done through clustering and virtual operating systems) is.

    These are known problems. Work is being done on them. However, there will be room for complaint about Linux versus more mature systems for some time to come. Still, problems are getting solved, and Linux is moving up the food chain, fast.

    Regards,
    Ben
  • by tilly ( 7530 ) on Thursday July 06, 2000 @03:12AM (#954684)
    An immediate thought.

    The "thundering herd" problem that was identified in Mindcraft and fixed in 2.4, isn't that still present in RedHat 6.1? (BTW calling it "Linux 6.1" really irritated me.) That could explain a sudden drop-off. It is not a problem, not a problem, then suddenly becomes a problem and as soon as you get a slow-down, you get a real traffic jam.

    Just guessing...

    Ben
  • by MosesJones ( 55544 ) on Thursday July 06, 2000 @03:17AM (#954685) Homepage

    I know this sounds strange, but when I'm looking at designing a high-transaction application or site I don't even LOOK at Windows or Linux. Does it surprise me that Linux doesn't scale to the enterprise market? No; it's written by individuals for the lowish-demand systems that they require, rather than by Company A, who is implementing for Company B something that will cost several million pounds of development.

    These sorts of tests are, IMO, unfair to Linux. Should you use NT/W2K or Linux for your high-transaction application/site? The choice is more normally "Should I use Tru64, AIX or Solaris?".

    Linux works great for me as a webserver: a box that takes a limited number of hits, at a cheap price. If you want to scale, you buy more boxes.

    On the back end, use a large server with lots of RAM and massive IO throughput.

    Does Linux really want to compete at the levels of AIX and Solaris? Why not go for the niche: cheap, reliable, and easy to scale horizontally?
  • by blakestah ( 91866 ) <blakestah@gmail.com> on Thursday July 06, 2000 @04:22AM (#954686) Homepage
    The "thundering herd" problem that was identified in Mindcraft and fixed in 2.4, isn't that still present in RedHat 6.1? (BTW calling it "Linux 6.1" really irritated me.) That could explain a sudden drop-off. It is not a problem, not a problem, then suddenly becomes a problem and as soon as you get a slow-down, you get a real traffic jam.

    Yeah, the box was dual-CPU and dual-ethernet-card, designed to show the weaknesses of Linux networking as of the 2.2 kernels.

    However, as more recent benchmarks show [slashdot.org], the soon-to-be-released TUX package (from Red Hat, GPLd) does extremely well in multi-CPU, multi-ethernet-card environments. These changes are likely to become embedded in Apache.

    I'd be really surprised if anyone has an x86 OS that could beat the one Ingo Molnar set up for the SpecWeb tests. It more than tripled the Windows machine under unrealistically high loads with flat-file service - 4 CPUs, 4 Gigabit ethernet cards.

    There are also issues with scheduling under high loads, such as the one in the ZDNet article, that have been addressed by a patch from IBM.
  • by rothwell ( 204975 ) on Thursday July 06, 2000 @04:26AM (#954687) Homepage
    Okay, they set the ethernet card to full duplex and increased the queue depth on the SCSI card. Fine -- makes sense. Stopped the "atime" updates. Makes sense.

    But they also did this:
    echo 100 5000 640 2560 150 30000 5000 1884 2 >/proc/sys/vm/bdflush

    ... interesting. We're developing a new filesystem, and ended up ignoring bdflush completely to get good performance. Here's what those values mean:

    From /usr/src/linux/Documentation/filesystems/proc.txt:

    Table 2-2: Parameters in /proc/sys/vm/bdflush
    Value (default/tweaked)

    nfract (40/100)
    Percentage of buffer cache dirty to activate bdflush

    ndirty (500/5000)
    Maximum number of dirty blocks to write out per wake-cycle

    nrefill (64/640)
    Number of clean buffers to try to obtain each time we call refill

    nref_dirt (256/2560)
    buffer threshold for activating bdflush when trying to refill buffers.

    dummy (500/150)
    Unused

    age_buffer (3000/30000)
    Time for normal buffer to age before we flush it

    age_super (500/5000)
    Time for superblock to age before we flush it

    dummy (1884/1884)
    Unused

    dummy (2/2)
    Unused

    ... they seem to have changed one of the "dummy" values... wonder why? Other than that, they appear to have increased the interval at which bdflush runs, meaning more stuff is hanging around in memory. It may be that at 24 clients, bdflush is banging on the filesystem too much. I would love to see a graph of disk activity included with the results. Sometimes Linux will go through a silent-storm-silent-storm cycle as bdflush runs on a busy system. It would be interesting to see how a journaled filesystem would perform. I think Reiser does his own buffer-flushing rather than relying on bdflush runs to do it, meaning he has finer control over it. It would also be interesting to see this test run on FreeBSD, which does a better job keeping the disks busy.

    Tweakers may also be interested in reading /usr/src/linux/Documentation/IRQ-affinity.txt ... it describes how to have specific CPUs handle specific IRQs -- like the Mindcraft tests did with NT.
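    Back to the bdflush numbers: as a quick sanity check of what a given box is actually running with, here is a small read-only sketch (mine, not part of ZD's procedure) that reads /proc/sys/vm/bdflush and prints each field next to the names quoted above from proc.txt. The nine-field layout is an assumption based on that 2.2-era documentation, so treat the labels as indicative rather than authoritative.

    /* Print the current bdflush parameters with the names from
     * Documentation/filesystems/proc.txt (2.2-era nine-field layout
     * assumed). Read-only: never writes to /proc. */
    #include <stdio.h>

    int main(void)
    {
        static const char *names[9] = {
            "nfract", "ndirty", "nrefill", "nref_dirt", "dummy",
            "age_buffer", "age_super", "dummy", "dummy"
        };
        long v[9];
        FILE *f = fopen("/proc/sys/vm/bdflush", "r");

        if (!f) {
            perror("/proc/sys/vm/bdflush");
            return 1;
        }
        for (int i = 0; i < 9; i++)
            if (fscanf(f, "%ld", &v[i]) != 1)
                v[i] = -1;                      /* short read: unknown */
        fclose(f);

        for (int i = 0; i < 9; i++)
            printf("%-12s %ld\n", names[i], v[i]);
        return 0;
    }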
  • by Ingo Molnar ( 206899 ) on Thursday July 06, 2000 @03:21AM (#954688) Homepage

    ServerBench is not available in source code, and the testing was done by ZDNet. From what I know about ServerBench, it uses a threaded IO model on NT but a fork/process model on Linux. The Linux 'solution' is coded by ZDNet, with no possibility for us to influence or comment on the design and approach used at all. Even under these circumstances we expect the 2.4 Linux kernel to perform significantly better in ServerBench than the 2.2 kernels. The 2.2.1x (and late 2.3.x) kernels had some VM problems, and with increasing VM utilization (more clients) this problem could have been triggered.

    SPECweb99, OTOH, is a standardized benchmark with full source-code access (ServerBench is closed binaries), so all of SPECweb99's implementation details are visible.

    Nevertheless, it's technically possible that ServerBench triggers performance bugs in Linux - we'd love to see the source so we can fix those bugs ASAP, if they are still present in 2.4.
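    To make the "threaded IO model on NT, fork/process model on Linux" point concrete, here is a minimal, hypothetical thread-per-connection echo server -- my own sketch, not ServerBench's code, which nobody outside ZD can see. A process-per-connection variant would call fork() where pthread_create() appears, paying a process-creation cost per connection instead of a thread-creation cost; which model performs better depends heavily on the OS and the workload, which is exactly why coding the two sides differently muddies the comparison. The port is arbitrary.

    /* Thread-per-connection echo server (sketch). A process-per-connection
     * variant would fork() at the marked point instead of spawning a
     * thread. Compile with -lpthread. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *handle(void *arg)
    {
        int conn = (int)(long)arg;
        char buf[512];
        ssize_t n;

        while ((n = read(conn, buf, sizeof(buf))) > 0)  /* echo back */
            write(conn, buf, n);
        close(conn);
        return NULL;
    }

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8081);                    /* arbitrary port */

        if (listener < 0 ||
            bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(listener, 128) < 0) {
            perror("socket/bind/listen");
            return 1;
        }
        for (;;) {
            int conn = accept(listener, NULL, NULL);
            pthread_t tid;

            if (conn < 0)
                continue;
            /* A fork/process model would fork() here instead. */
            if (pthread_create(&tid, NULL, handle, (void *)(long)conn) == 0)
                pthread_detach(tid);
            else
                close(conn);
        }
    }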
