Are Linux Transactions Slower Than Win2k's?
FullClip asks: "In the July issue of PC Magazine, Red Hat Professional is compared to Windows NT/2000 on the basis of ServerBench, which tests the maximum Transactions Per Second (TPS) for a given number of clients. Red Hat 6.1 (when tweaked) matched the performance of Windows, but showed a terrible decrease in performance at about 24 clients, dropping to a weeping 20% of the level that Windows was able to maintain. Somehow this disturbs me.
Doesn't Linux perform better than that in client-server environments? If someone can point me to a non-FUD benchmark site, it would be appreciated..." Is this yet another case where benchmarks have been skewed severely to show a deficiency that doesn't exist? Or is this another area where Linux needs improvement? [Updated 6 July 2000 2:15 GMT by timothy] You may want to compare this with the far different results reported by SpecWeb.
Re:I think I get it (Score:1)
Re:2.2 kernels used (Score:1)
I've never looked, but (given the age of the manpages) I wouldn't be surprised if this code isn't heavily optimized. I suspect this because, in my experience, very few free software programmers actually use these constructs (portability??). As a result, there wouldn't be a huge push for fast SysV IPC.
Furthermore, I wonder about scalability of IPCs. A long time ago I ported a program (making heavy use of IPC) from AIX to Pyramid's flavor of Unix (does anyone remember what it was called?). The kernel default message queue size only allowed 8 messages while AIX allowed 256. As you might expect, this created a minor scalability issue.
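For illustration only (this is not code from that port), here's a rough C sketch of how you can ask the kernel what a System V message queue will actually hold, via msgctl(); the private key, permissions, and cleanup are arbitrary choices:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(void)
{
    /* Create a throwaway private queue just to inspect its limits. */
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid == -1) { perror("msgget"); return 1; }

    struct msqid_ds info;
    if (msgctl(qid, IPC_STAT, &info) == -1) { perror("msgctl"); return 1; }

    /* msg_qbytes is the per-queue capacity; a small default fills up
       after only a handful of messages, which is the kind of limit
       that bit the port described above. */
    printf("queue %d can hold at most %lu bytes\n",
           qid, (unsigned long)info.msg_qbytes);

    msgctl(qid, IPC_RMID, NULL);  /* remove the queue again */
    return 0;
}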
Benchmarks (Score:1)
My personal favorite was the Mindcraft tests, simply because in the end Mindcraft PROVED that it was a Linux problem. And they are still blasted for that. Thankfully Red Hat did something about it and is finally getting the benchmarks Linux should have got in the first place, but it would never have happened if they had done what 99% of the Linux community did and blamed Mindcraft.
Personally, going into a lull and believing that a particular piece of software is untouchable is a disaster waiting to happen, because eventually it's going to be overtaken by a competitor unless something is done about it.
-Slashdot: News for Linux. Nothing Else Matters.
Re:I think I get it (Score:1)
No it wasn't.
It was a dual-CPU-capable machine (as all E60s are) but only contained one CPU, hence it wasn't a multi-threaded stack issue.
Documentation in Word format? (Score:1)
Check this out.
It's a SELF EXTRACTING WINDOWS BINARY ARCHIVE.
Fortunately, it's a self extracting windows binary ZIP archive, so I used "unzip" to uncompress it.
But then I tried to open it...
Makes AbiWord crash.
mswordview says "this is an unsupported word 7 doc, sorry
this converter is solely for word8 at the moment
"
which I found weirdly funny.
I finally got it opened with StarOffice 5.2, but gee.. what a pain.
How about plain text or HTML, guys!
Also, I couldn't get to their license agreement link.
Re:Troll Alert! (Score:1)
Re:For CRYING OUT LOUD.. (Score:1)
Re:Why is it always Linux v Windows... (Score:1)
so far, so good. But then:
>They want it to be a robust server operating system. They want it to be an easy-to-use client operating system. They want it to run everything.
So which one of these is Windows supposed to be able to do?
For CRYING OUT LOUD.. (Score:1)
Do you guys/gals know how *BAD* we as a community look when every time we're beaten on a test, we cry FOUL??
There will end up being a reason as to why this happened, but for crying out loud guys..
Re:Another myth disposed of (Score:1)
NT and security are two things that don't match. http://www.securityfocus.com/vdb/stats.html [securityfocus.com]
6.1? (Score:1)
Sheesh.
Re:Benchmarks skewed (Score:1)
I've never argued that Linux was perfect, but the nice thing about it is that when there's a deficiency in Linux, it can get fixed rather quickly. I guess a problem could get fixed rather quickly in Windows, too, but one would have to wait for Microsoft to A) admit the problem was there and B) put out a free fix for it. With Linux, odds are that someone will come out with a fix very quickly.
If memory serves, Linux had a patch for teardrop in 48 hours or something like that, and it took Microsoft a couple of months. That seems awfully long on Microsoft's end, but that's what I heard.
Re:Just The Other Day (Score:1)
Yes, but the basic OS is still only single user, unless you're prepared to spend bucketloads extra on Terminal Server or Citrix stuff.
SPEC Test is using 2.4 kernel, etc... (Score:1)
Anyhow, the SPECweb figures are due to a machine that appears to have been running a pre-2.4 kernel, Red Hat Rawhide, and a nifty little high-performance web server called TUX that they GPLed, which seems to outperform most of the stuff out there. Tidbit about TUX: it's a kernel extension not unlike the kernel NFS server.
Re:Just The Other Day (Score:1)
--
Do you all so quickly forget? (Score:1)
Everyone is looking at this the wrong way. Why is everyone so concerned about speed? Are you aware that speed is not the only issue on computers? Gosh, the fact that Microsoft may have done something right somewhere is not that hard to believe. Why does everyone look at speed so much? Things like this make people forget why they chose Linux in the first place. That reason is freedom. That's the advantage over other systems.

Having this freedom _usually_ produces better code, but it doesn't always. So what? With our freedom, we don't have to worry if RedHat puts out a better/worse system than Microsoft. If RedHat/Linux sucks, we can hire someone to modify it so it doesn't suck so much. And, if those modifications are worthwhile enough, we can sell them ourselves. The reasons most people choose Linux are freedom, openness, freedom from being vendor-bound, and customizability. Because of these things, we usually have scalability, performance, and security as well. But those are not the important issues.

The fact is, you have the freedom to do things with Linux that you could _never_ dream of doing with Windows, simply for legal reasons. This benchmark may be skewed. It may not. It may even be skewed in favor of Linux. But the fact is, I have my freedom, they do not. And my freedom is not something I take lightly.
Re:Do you all so quickly forget? (Score:1)
Re:Another myth disposed of (Score:1)
Scan down a ways and you'll notice that for 2000, the top two are Windows 2000 and Windows NT. For 1999, the top 12 are all Windows (NT, 98, 95, IE, etc). Of particular note is that Windows doesn't show up at all in the '97 & '98 lists. Somehow I don't think it suddenly broke. I suspect it wasn't being reported on Bugtraq. Just because it isn't reported doesn't mean it's not vulnerable.
On your second paragraph, I agree.
Re:IBM/Intel/MS rock TPC-C (Score:1)
And I'd also trust BEA Systems' Tuxedo as a TPM more than I would M$'s COM+. Let's just say I'd rather use something battle-tested when my job is on the line. (Yes, I do use Linux at work, just not in my production-critical back-end database.)
Then they can't use NT or W2k (Score:1)
So what do they make of better-than-quarterly "Service Packs", some of which break their systems and the lack of which leaves "rape me" signs up on each network interface?
Betcha the results are miles apart if done with, say, Mandrake 7.1 - and another quantum leap if you plug a 2.4.0pre kernel in. In short, by the end of the year, nothing Microsoft does will bring those tail-lights any closer.
Yup, that's ZDnet (Score:1)
The gentle swishing noise of reality vanishing out the door.
throwing up [...] is cake on IIS
Amazing what a bit of editing can do. (-: Sorry... back to the plot...
throwing up a simple vb or c++ COM object for scalability is cake on IIS
Whereas on Zope or PHP, the better design makes such kludges pointless?
Re:Another myth disposed of (Score:1)
>operating system in the next 6 months that has no bugs at all, is as fast as hell, and has a 500kB footprint! So much for your Linux!

Nonsense! That Microsoft Linux version was an April Fools' joke!
Re:Why is it always Linux v Windows... (Score:1)
"[ ok ]", "[ wait ]", or "[ fail ]"
all wrapped up in a pretty GUI. HP did it in text but Caldera's would NOT scare any MS Windows user.
The login screen is pretty straightforward, though many Luddites.... I mean Window-ites wouldn't understand what they were logging in for.
IMHO
Funny though that at my July 4th party, 2 friends who recently bought computers said they paid $1500 for email and web browsing. They told me to shut up about the $99 IOpener I had showing photos of past events.....
Re:Respect where respect is due (Score:1)
Yeah, it's supposed to be a POSIX compliant system. Maybe we should call it X/Open Linux or something ? (-;
Remember, without a great visionary such as RMS as our leader, there wouldn't be any Free (as in speech) Software and the world would be a much worse place.
We'd have BSD and its associated license and variants ( like the artistic license ) with or without the "free" software foundation.
For Pete's sake... (Score:1)
Re:6.1? (Score:1)
Re:I think I get it (Score:1)
Context switches between processes take longer than switching between threads, of course, but the difference is far less under *nixes than under Windows. Interestingly enough, Win2K has much better process-switching optimization than previous versions and MS is now talking about (may have even released - I don't follow them that closely) a version of IIS that runs multiprocess instead of multithreaded to improve stability.
But yes, you're right - none of this does anything about the TCP/IP stack.
Re:IBM/Intel/MS rock TPC-C (Score:1)
Yes, I know... I found this out on the Register.
But it looks like IBM's toasted those disqualified numbers anyhow... cool!
Re:Just The Other Day (Score:1)
Well, that explains the relative levels of stability and security of the two very nicely. 20 years (actually more) of proven technology == good, reinventing the wheel based on DOS == bad. Thanks for summing up :)
Re:Just The Other Day (Score:1)
In general the Linux model is to make forking a new process extremely cheap, so that you can just use new processes rather than threads. Threading hasn't been there since the beginning because there hasn't been as much of a need for it.
Re:Just The Other Day (Score:1)
The point is not "stability above all else", the point is that in OS design as in other things, you can either learn from the mistakes of the past or repeat them yourself. There's no reason to sacrifice stability just for a snazzy user interface, and there's no reason to sacrifice security just to get singing and dancing attachments in the mail. That's why 20+ years of design and industrial usage of an OS (if you consider all of Unix as one OS, which is a vast simplification) is a good thing, not a bad thing.
Hard to believe (Score:1)
http://www.zdnet.com/eweek/stories/general/0,11011,2290989,00.html [zdnet.com]
It basically shows that a C++-built COM+ object running on Win2K smoked away every other platform (hard to believe, I know..). Doesn't make a whole lot of sense but this seems to be coming from a good source.
I think MS has been doing a good job putting up some competition in the web server market. Apache is nice, but it's so simple to get ASP up and running on IIS.. (granted, PHP on Apache isn't bad either but throwing up a simple vb or c++ COM object for scalability is cake on IIS).
My view of the tests (Score:1)
How much of the test relied on static pages, and how much on dynamic pages?
There is a Linux kernel module to speed up the downloading of static pages. It's just a remote file copy, and there ends up being additional overhead when a user-space application (the web server) does the work. The same module passes dynamic pages to the server.
Linux, however, does a decent job on dynamic pages.
If your website is graphics-heavy then your performance on Linux should be less than NT, but if you have a lot of CGI, PHP, etc. type content you should perform better than NT.
The number of users (at once)... Can Linux or NT be trusted in such a high-load environment? I'd think if you get 20 or more hits at a given moment you'd want to consider Solaris or another high-end system.
To make it clear... It should take about 2 seconds (at worst) to get a page out. 20 people at once. So say half that for a normal load.
10 for every 2 seconds. 5 per second.
5 * 60 = 300
300 pages a minute.
18,000 pages an hour
18,000 for 8 hours a day = 144,000
[8 hours instead of 24 because people sleep]
That's a lot of traffic...
My math may be a bit off but even with a few mistakes and bad assumptions you are dealing with heavy load when you expect a system to serve off 20 pages at a given moment.
If you're dealing with an odd burst, the system should be able to handle it given no new traffic.
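If you want to redo the back-of-the-envelope numbers, here is the same arithmetic as a trivial C program (same assumptions as above: 20 pages in flight at once in the worst case, 2 seconds per page, half that for normal load, 8 active hours a day):

#include <stdio.h>

int main(void)
{
    double concurrent    = 20.0;  /* pages being served at a given moment */
    double secs_per_page = 2.0;   /* worst-case time to get a page out    */

    double normal_load = concurrent / 2.0;            /* 10 per 2 seconds */
    double per_second  = normal_load / secs_per_page; /* 5 per second     */

    printf("%.0f pages per minute\n", per_second * 60);           /* 300    */
    printf("%.0f pages per hour\n", per_second * 3600);           /* 18000  */
    printf("%.0f pages per 8-hour day\n", per_second * 3600 * 8); /* 144000 */
    return 0;
}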
who cares? (Score:1)
RAM woes could be there in this case (Score:1)
Possible reason: the I/O model? (Score:2)
1) Blocking I/O (used with threads)
2) Nonblocking with select/poll/something else
3) Crappy POSIX aio_* functions (does Linux even support these?)
Windows NT has blocking and nonblocking of course, and has what they call asynchronous sockets that work based on a message queue, but it also has what are called I/O completion ports, which use overlapped I/O. I'm not an expert on these models (yet) but they are the de facto model on NT to support thousands of concurrent connections. It uses a mixture of threads and asynchronous operations (not message queue based this time) so that, say, 64 clients are handled by one thread. As opposed to Apache fork()ing whenever a new connection comes in. Yuck.
I've searched for info on making a similar I/O model on linux and have come up with a few references to IOCP on the linux kernel mailing list, but it doesn't seem to have gone anywhere.
If someone could share more information... please do so.
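Not an IOCP equivalent, but here's a minimal sketch of model (2) above: one process, nonblocking sockets, a select() loop, no threads and no fork() per connection. The port number and the echo-back behaviour are placeholders, and there's no claim this scales the way completion ports do; it's only meant to show the shape of the model:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);           /* arbitrary example port */
    if (bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(lsock, 128);
    fcntl(lsock, F_SETFL, O_NONBLOCK);

    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(lsock, &fds);
    int maxfd = lsock;

    for (;;) {
        fd_set ready = fds;
        if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
            break;
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready))
                continue;
            if (fd == lsock) {                     /* new connection */
                int c = accept(lsock, NULL, NULL);
                if (c < 0)
                    continue;
                fcntl(c, F_SETFL, O_NONBLOCK);
                FD_SET(c, &fds);
                if (c > maxfd)
                    maxfd = c;
            } else {                               /* readable client */
                char buf[512];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n <= 0) {
                    close(fd);
                    FD_CLR(fd, &fds);
                } else {
                    write(fd, buf, n);             /* just echo it back */
                }
            }
        }
    }
    return 0;
}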
khaladan
Stolen from the best (Score:2)
...phil
Re:I think I get it (Score:2)
Give us a break.. (Score:2)
Is this yet another case where benchmarks have been skewed severely to show a deficiency that doesn't exist? Or is this another area where Linux needs improvement?
We'll never really know, but let's have a puerile 400-message discussion* while we don't find out!
* In a very loose sense of the word
Re:Just The Other Day (Score:2)
That'll be because your mind is shut. I find it easy to believe that Windows NT, which has been a multitasking _multithreaded_ operating system for years and years, is now smoking the hell out of Linux, which barely supported threads at all until not too long ago.
Re:For CRYING OUT LOUD.. (Score:2)
Uh, what sort of client is it? (Score:2)
What sort of client/server are they testing? HTTP? SMB? FTP? SMTP? POP3? I can't see anything which specifies this... and until that's known, nobody can comment on the results.
--
Re:Just The Other Day (Score:2)
The NT kernel itself may be wonderful, but nobody's seen it since the Microsoft Backwards Compatibility Dumptruck unloaded Win32 all over it.
This isn't meant to be a troll. Windows NT would be a far better platform if they'd just drop the "Windows is part of the core OS" part of it. Put it in user space completely.
(In other words, I agree with you, I'm just expounding.)
--Joe--
Re:Why is it always Linux v Windows... (Score:2)
Anyway, it comes down to marketing and Linux IS the competition for the Microsoft Marketing Company....
IMHO
Re:Just The Other Day (Score:2)
--
SCSI driver as a module (Score:2)
Just out of curiosity: Did anyone else get the impression that the RedHat system was configured to handle the SCSI adapter using a loadable module? Is there a performance disadvantage to running your SCSI drives using the loadable module as opposed to having the driver resident in the kernel? I would have thought that they'd rebuild the kernel and include the driver in the kernel.
Also, why not use a Pentium optimized distribution, like Mandrake, instead of the generic 386 oriented RedHat? All these magazine testers seem to do this; apparently when they think of Linux the gears in their heads turn only once and they come up with RedHat (no offense, RH, I use your stuff and am happy with it).
--
Re:I think I get it (Score:2)
Although Apache is single-threaded (prior to 2.0, when multithreading becomes an option), each request is run in a separate process. If one request stalls, it will still give up its CPU time for other tasks just as nicely as it would under a multithreaded Windows server.
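And just to make the model concrete, here's a bare-bones sketch of the process-per-connection idea (Apache 1.3 actually pre-forks a pool of workers rather than forking on every request, but the isolation argument is the same; the port and the canned reply are placeholders):

#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);        /* let the kernel reap dead children */

    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);     /* arbitrary example port */
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 128);

    for (;;) {
        int c = accept(lsock, NULL, NULL);
        if (c < 0)
            continue;
        if (fork() == 0) {           /* child: handles exactly one client */
            const char *msg = "served by a throwaway process\n";
            write(c, msg, strlen(msg));
            close(c);
            _exit(0);                /* a crash here only kills this child */
        }
        close(c);                    /* parent goes straight back to accept() */
    }
}

If one of these children stalls or dies, the scheduler and the other processes carry on, which is the point being made above.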
Re:Just The Other Day (Score:2)
Re:The Magic Number... 20 (Score:2)
"Cheating" is not good... (Score:2)
If for example Matrox (or anyone else) cheats in their video drivers to get better frame rates in Quake, this could lead to instability, graphical glitches, etc.
Oh so ZDnet is more serious? (Score:2)
a more serious publication than Linux Journal?

Why do you even make the assumption that ZDNet is less biased? ZDNet is a huge publication company that makes about 95% of its computer-magazine income from selling Windows mags. PC Magazine (is it the largest?) is around 99% Windows and 1% other. It is obviously much less biased than Linux Today.

Some of their articles are created with the sole purpose of angering Slashdot readers, thus getting plenty of readers (because you just have to read the garbage) and generating income.
Re:Uh, what sort of client is it? (Score:2)
Re:Uh, what sort of client is it? (Score:2)
All looks a bit dodgy to me.
Re:NT was designed to do this. (Score:2)
No matter what benchmarks exist, _most_ vendors will to some degree optimize to those benchmarks. This is as true of Quake3 or your COM+ example or SPEC or AIM as it is of WinBench3D. Interestingly, the easier it is for the vendors to look at and understand how a benchmark works, the more specific their optimizations will be, and the least subvertible benchmark would be one where the vendors have _no idea_ what it'll do until it's run. I know that open-sourcers aren't going to like that idea, but there it is.
Re:what were they doing when changing specs? (Score:2)
Interesting. Everyone was on Mindcraft's case for _not_ involving Linux vendors in tuning efforts. Now they go out of their way to do so, and everyone's saying it looks suspicious. While everyone's talking about rigged benchmarks, how about analysis that ends up with one side accused of cheating no matter what they do?
Not real-world at all (Score:2)
Some food for thought: what is a transaction? Their FAQ doesn't seem to cover exactly what it is. If all they're testing is static page serving, it's about as useful a test as seeing how fast it can delete files....
Re:NT was designed to do this. (Score:2)
If the hal (much smaller in win2k), pci, disk, class, scsiport, ndis, etc. are all WDM compliant. WDM is the native model for Win2k. The comparison to Win32 is not correct. While a Win32 call is really just a wrapper for the native nt call, WDM is usually not a wrapper for legacy calls. It is often the opposite; most legacy functions are just macros that call the new WDM functions.
The main points of WDM on Win2k are cross-platform design (and binary compatibility in many cases) and bringing power management and plug and play to NT. PM and PnP are considered very important at MS; they are not just political marketing ideas. Although this new model helps the new developer making products for 9x and Win2k, it hurts current NT developers by forcing them to rewrite their drivers. In many ways, WDM actually slowed the development of drivers for Win2k. Finding classes on a moving spec was a difficult task.
My main issue with where Win2k is not tuned for performance involves the use of so many general drivers that handle whole classes of devices but none of them well. IMHO MS would have been better off supporting I2O like almost every other OS does. In order to get the best performance, any storage developer will tell you that you have to write a full port driver to replace scsiport/miniport and disk/class.
NDIS is similar. In fact, it is worse because MS will not certify a non-NDIS network driver. Here, the developer is forced into a slow model. NDIS 5 addressed some issues (removal of some locks, some off load support) but left many others.
It is this push towards a miniport model that I find hurts performance. Also, the messaging scheme for drivers requires system calls between each layer instead of using a direct function-pointer interface. Almost all messages must go all the way through the stack, even when most drivers just blindly pass the call down. While this allows any driver or filter on the stack to change things, it slows performance. Many messages must be handled "on the way up" the driver stack. Whole stacks have to wait on events to be triggered and callbacks called. Microsoft gets extensibility through the support of upper and lower filter drivers at the expense of performance.
A faster model would reduce the layers in the device stack, use direct function calls rather than system calls for message passing and reduce the need for callbacks and event waiting.
As for reading NT 3.0 documentation...well that's great for discussing NT 3.0. Win2k is their current model and it has changed in ways that are more than just wrapper functions and cosmetic changes.
Re:NT was designed to do this. (Score:2)
"The kernel has a lot of design concessions that faccilitate a really high I/O rate." Really? Have you looked at the code for network and storage? I have written network and storage drivers for NT4/Win2k and is not designed to be fast. Check out the DDK. Both storage and network use a miniport model (SCSI Miniport and NDIS Miniport) with a port driver doing much of the work. To make matters worse, Win2k use WDM for its drivers. WDM tends to add an additional driver object to the layered model. Both miniport and WDM are designed to be very general and take control away from the driver developer. A call to read a few bytes from the disk goes through so many layers. First, the file system drivers, then class.sys, then disk.sys, then scsiport.sys, then vendorscsiminiport.sys, then hardware. There can also be any number of filter drivers in the mix. WDM allows upper and lower filters for each FDO (Functional Device Object). We got a nice performance boost by not using the SCSIMiniport/Class driver interface. Win2k is not designed to be fast as much as extendable and general.
Just my $.02...
Re:IBM/Intel/MS rock TPC-C (Score:2)
The vendors usually use the latest possible software (unless it has performance issues!).
Re:I think I get it (Score:2)
Those bitching about RedHat 6.1 (Score:2)
Re:NT was designed to do this. (Score:2)
Having high-performance I/O is a good thing. I think you misunderstand me; I really do like NT, in fact I use it about as often as I use BeOS. I was responding to your comment that NT was designed to be extendible over being high-performance. As for my backing up that NT is tuned more for performance than flexibility, I kind of explained that in the next paragraph (running services in kernel mode, DirectX in the HAL, etc.). Even if WDM is the "official" driver API for NT, it is still not the native API. Win32 is the official API for NT, however it is not the native API. If you really want to see your applications perform as well as they can, you'd use standard NT drivers and the NT Native API. Neither is sanctioned by MS, but this is a technical discussion, not a political one. The truth is that MS had to heavily endorse WDM because it needed drivers for NT. Even if it isn't the fastest way to do it on NT, that's the one they had to support. And I think the whole point of the WDM is that drivers should be cross-compatible, except for video drivers (which aren't written to the WDM.) I think integrating DirectX into the HAL is a good idea. However, it is quite a complex system for something in the HAL, and does introduce bugs and make the system less "clean" from an academic point of view. I was using this point to support my assertion that NT is more tuned for performance than generality. Instead of adding a more general HAL bypass system, they chose to simply allow DirectX to pass through. NT does run a lot of services in kernel mode. In other microkernel OSs, there is no executive. Stuff that is in the NT executive (like I/O managers, etc) traditionally runs in user mode. However, by running these in kernel mode, NT gains a performance increase. Also, that is one reason why NT blurs the lines between a monolithic and micro kernel. And I'm pretty sure NT is a microkernel. NT was designed as a microkernel, and its subsystems communicate by passing messages. (If it walks and quacks like a microkernel...) However, the design concessions (for performance) that MS made, made it much more of the "macrokernel" that MS holds it is today. However, if you read the documentation from NT 3.0 or 3.1, you'll see that it is billed as a microkernel.
Re:NT was designed to do this. (Score:2)
Re:"Cheating" is not good... (Score:2)
No, no, no. (Score:2)
throwing up [h]is cake on IIS
Re:IBM/Intel/MS rock TPC-C (Score:2)
--
Cheers
Odd... (Score:2)
Re:Just The Other Day (Score:2)
The Unix community has been doing the "multiuser, multitasking thing" for many years. And for some of those years some developers have actively been seeking the best performance possible. (And at other times, especially earlier in Unix's history, people have been working toward the "small is beautiful" goal more than looking toward developing high-performance environments.) Linux has been around for a few of those years, but since it is a reimplementation it might not be on par with every performance tweak of every Unix ever developed.
You can't say that just because Unix is a mature platform and lots of work has been done to make various versions of Unix very efficient, Linux, a reimplementation of Unix, must be very efficient as well.
Re:Hard to believe (Score:2)
Microsoft also used different Web server and database server software [and hardware] than other vendors, so results for Microsoft are not comparable to the other results.
Hrmm... that casts a different light on things, IMHO.

IBM/Intel/MS rock TPC-C (Score:2)
IBM Netfinity with Intel Xeon processors
IBM DB2
and Windows 2000.
You will not believe this unless you see it!
[tpc.org]
Read'em and weep
ROTFL!!! (Score:2)
Am I allowed to laugh about this? :)
If we just give the advocates/zealots/other craptalking people a separate forum, we can then go on with talking about Stuff That Matters(tm) over HERE. Thank you.
--
win2k v. win NT 4 (Score:2)
Newer does not always mean better, but it does mean that you lack all the special features
Link (Score:2)
The Magic Number... 20 (Score:2)
This really makes little sense. Also, what about other Linux distros? Is RedHat optimized in some way that causes this?
I can hardly see why Linux would just go apeshit once it hits 20 users. I mean, you'd think you'd see more of a gradual downward progression than the floor falling out.
Applications (Score:2)
Re:6.1? (Pedantic mode on) (Score:2)
Skewed Benchmarks (Score:2)
Re:6.1? (Score:3)
That is the biggest problem with the fast pace of Linux upgrades: vendors don't have the luxury of 20 billion bug breakers for their code, they have to spend lots of time verifying that their code works against any upgrades.
And this is why they used Win2000 instead of WinNT4?
If you're gonna pit the top of one against the middle of the next, I ain't even gonna look at your benchmark. They used Win2k, so (in my mind) they should have used Linux kernel 2.4.1-prewhateveritistoday.
Shocking new benchmarks (Score:3)
Re:F to the U to the D (Score:3)
NT was designed to do this. (Score:3)
Re:NT was designed to do this. (Score:3)
kernel space, although they can be set with a high priority. All drivers are run in kernel space, though.
>>>>>>>>
NT is a microkernel operating system. In microkernel OSs, servers are processes that provide system services such as networking, I/O, graphics, RPC, etc. In some cases, servers even provide memory management. In most microkernel OSs, these servers are in user space. However, in NT, they run in kernel mode. It's true that drivers run in kernel space, but so do the subsystems that load the drivers. This is a significant difference from most microkernels, which have servers and large parts of drivers in userspace. BeOS for example has all servers in userspace, and most drivers are loaded by the kernel. IBM's experimental WorkPlace OS, on the other hand, put drivers mostly in userspace and even put services such as paging in user space. This tended to incur a performance hit, and NT avoids it by running servers in kernel mode, even though that is riskier.
"The kernel has a lot of design concessions that faccilitate a really high I/O rate." Really? Have you
looked at the code for network and storage?
>>>>>>>>
No, but I have looked at design documents that detail the NT architecture. NT was designed for VERY high performance I/O.
I have written network and storage drivers for NT4/Win2k and it is not designed to be fast. Check out the DDK. Both storage and network use a miniport model (SCSI Miniport and NDIS Miniport) with a port driver doing much of the work. To make matters worse, Win2k uses WDM for its drivers. WDM tends to add an additional driver object to the layered model. Both miniport and WDM are designed to be very general and take control away from the driver developer. A call to read a few bytes from the disk goes through so many layers. First, the file system drivers, then class.sys, then disk.sys, then scsiport.sys, then vendorscsiminiport.sys, then hardware. There can also be any number of filter drivers in the mix. WDM allows upper and lower filters for each FDO (Functional Device Object). We got a nice performance boost by not using the SCSIMiniport/Class driver interface. Win2k is not designed to be fast as much as extendable and general.
>>>>>>>>>>
Win2K is definitely not designed to be extendible and general. While WDM may add a lot of overhead to the driver interface, that is not NT's native driver model. Microsoft added WDM to allow drivers for Win98 to work on NT. Also, you cannot deny that the architecture is tuned more to high performance than generality. A lot of critics of NT complained that the architecture was "academically dirty," meaning that a lot of design decisions resulted in a faster but less clean system. For example, Windows 2K has DirectX integrated into the HAL. Very unclean. NT also runs all services in kernel mode. Again, unclean. The NT microkernel globs up a lot of services that should be in the servers, which improves performance but makes the microkernel less general and less extendible. It runs the windowing system in kernel space! How general and extendible is THAT? NT does have a lot of management overhead, true. But it is also designed for raw performance. If you're not changing anything (ie. simply streaming data off a disk while not doing anything else) it is really fast.
Re:NT was designed to do this. (Score:3)
Re:I think I get it (Score:3)
Re:Aren't all benchmarks subjective? (Score:3)
Or at least, it ought to be.
"Sweet creeping zombie Jesus!"
Re:Uh, what sort of client is it? (Score:3)
If their protocol is simple enough to make it easy to optimize for different platforms (for example, Win32 vs Unix), it's almost certainly too simple to make an interesting test. If it's a complex protocol, I suspect they optimize the Win32 code a lot more than the Linux code.
Application and non-Application Benchmarks (Score:3)
This whole bench test is pretty useless. Ziff wanted an "application" benchmark that was cross-platform and didn't rely on applications. What they actually built was so content-free that it simply tests network and OS performance as far as the TCP/IP stack.
Not surprisingly, they found that the Win TCP stack is quicker than the (known to be single-threaded) Linux stack. QEFD.
I'd like to see better benchmarks, but I'd much rather see something simple: Corba vs. Corba, or Corba vs. DCOM. SOAP (the Apache approach of deployable handlers) vs. SOAP (Servlets) vs. SOAP (Microsoft's SOAP-on-a-ROPE) would be even more interesting. We're doing something along those lines ourselves - maybe it will be publicly publishable.
To get the alternative "Useless benchmark shows Linux to be faster than Windows" story go here [linuxtoday.com].
Re:Why is it always Linux v Windows... (Score:4)
Because that's what Linux advocates trump up. Ever since Linux became 'popular', advocates have been pitting it against the big bad evil Microsoft. Nevermind that until recently, Solaris was just as closed-source and dealt in the same underhanded tricks as Microsoft. Nevermind that they're two completely different types of operating systems aimed at two entirely different classes of people.
Basically, Linux people want Linux to be able to do everything that Windows can. They want it to be a robust server operating system. They want it to be an easy-to-use client operating system. They want it to run everything. They want to be the monopoly (but a monopoly of choice, not of force). Nevermind that Windows 2000 isn't trumped as the OS for everyone and Windows 98 isn't used in high-end server systems (and yet, advocates want Linux to do all of these tasks, and rule the hand-held market as well). And so, we get tests like this, Win2K vs. Linux, when really, what we should be getting is Win2K vs. Solaris (which I'm quite confident would blow Win2K out of the water).
Does Linux really want to compete at the levels of AIX and Solaris?
No, they want to compete with Windows. Windows is the enemy. Sound the alarms, and when Windows does something better than Linux, something is seriously wrong with the world (or so they would have you believe). Perhaps what would be a better suite of tests for Linux is one which isn't a comparison test at all, but rather one which looks for deficiencies so that people can start fixing them and quit debating about whether or not a comparison is valid.
Re:Why is it always Linux v Windows... (Score:4)
Perhaps a less biased way of saying this is "Because Windows is, arguably, the main competition for Linux. While AIX & Solaris are also viewed as competitors, due to Linux' current weakness in scalability, they are not considered direct competitors."
Now, that said, I'll respond by saying you're an idiot. Linux & Windows 2k ARE NOT designed for two different types of users. Both are designed for general use, high-end workstations, and low-to-mid-end servers. In particular, in the context of the question, they are designed for EXACTLY the same market.
As far as AIX & Solaris go, they are also the competition. But most people who have the budget to run a high-end Unix server have a reason to spend the money (support, a boss that's an idiot, or a need for specialized capabilities or scalability that Linux & Windows don't allow). Linux is rapidly advancing, & is beginning to address the last two issues (scalability & features), but at present it's hard to directly compare Linux to some of the commercial Unixes. And of course, you again need to consider the context. Since the question was specifically in response to a benchmark comparing Linux to Win2k, why would you even expect AIX or Solaris to be brought up?
Re:Mindcraft issues still? (Score:4)
diff -u linux/net/ipv4/tcp.c:1.1.1.6
@@ -1575,7 +1575,7 @@
add_wait_queue(sk->sleep, &wait);
for (;;) {
- current->state = TASK_INTERRUPTIBLE;
+ current->state = TASK_INTERRUPTIBLE | TASK_WAKE_ONE;
Offhand, it looks like that particular change isn't in Red Hat 6.1 or 6.2. I don't know whether this would affect ServerBench performance, though. It's hard to tell without looking at the source.
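For anyone who hasn't seen the problem spelled out, here's a toy user-space sketch (unrelated to the patch itself; the port and child count are made up) of the situation the wake-one flag is aimed at. All the children block in accept() on one listening socket; on an unpatched 2.2-era kernel a single incoming connection wakes every sleeper in the wait queue even though only one of them gets the connection, and that wakeup storm is the scheduling cost being discussed:

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define NCHILD 8   /* arbitrary number of workers */

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);   /* arbitrary example port */
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 64);

    for (int i = 0; i < NCHILD; i++) {
        if (fork() == 0) {
            /* All NCHILD children sleep here on the same socket. Only the
               winner returns with a connection and prints; the losers are
               (on an unpatched kernel) woken inside the kernel just to go
               back to sleep, which you would need something like strace
               to observe. */
            int c = accept(lsock, NULL, NULL);
            printf("child %d got the connection (fd %d)\n", i, c);
            if (c >= 0)
                close(c);
            _exit(0);
        }
    }
    pause();   /* parent idles; connect to the port to trigger one wakeup round */
    return 0;
}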
Re:what were they doing when changing specs? (Score:4)
And you don't (or shouldn't) 'root' for any of the platforms you're testing when you benchmark. You go to a reasonable amount of trouble to make sure that you are testing what you think you are (and not some config hiccup that's hamstringing the results). But having done that, you sometimes still get a surprise. That's what happened here.
I think I get it (Score:4)
Of course, IANALOLT (I am not Alan Cox or Linus Torvalds), but it seems the most likely explanation to me...
DB2 on other platforms (Score:4)
This just went up on the TPC website Monday: there is a monster leader in transaction processing price/performance, and that is:
You will not believe this unless you see it!
Yes - but check out the hardware. 32 four-way Pentium Xeons, over a terabyte of disc space, and an obscene amount of RAM. That is not a standard setup, although it was built with standard parts (trust me - I know the team which built it). That is not to say that the DB2 team isn't extremely pleased with this result :-)
Just because it's running on Windows 2000 does not automatically mean that there might not be better choices for an OS to support this benchmark. It's not even entirely clear to me that Windows NT might not have been faster here, given the benchmarks which MS put out on their own website showing that Windows 2000 does better in limited memory, but is worse than NT above 128MB (and these machines had a lot more than that). Remember that DB2 UDB has a shared-nothing architecture, which means that it scales extremely well, and it is additionally capable of using raw devices, so the OS in question may not have a big impact on performance. And DB2 runs on most platforms out there: OS/2, AIX, HP-UX, Solaris, Linux, Windows 9x/NT/2000, SGI, SCO, Dynix, and various 64-bit platforms as well.
Of course, it would be nice to have some side-by-side benchmarks of DB2 UDB on Windows NT/2000 and DB2 UDB on Linux. There will almost certainly be some benchmarks on Linux sooner or later - since IBM has made Linux available for all its machines, it makes sense to publicise the performance of its flagship DB product on Linux as well.
Cheers,
Toby Haynes
P.S. I work on DB2 UDB development.
Testing (Score:4)
My credibility is fine (Score:5)
Where did I say that 2.4 will fix everything?
I said that there is a specific problem, known in 2.2 that has turned up before, that is a potential explanation for this bad result.
There are other known (and fixed) scheduler problems.
Encountering any combination of these in 2.2 benchmarks is to be expected. Don't make these out to be more or less than indications that 2.2 had some obvious room for improvement.
I am sure that 2.4 will have more problems. However many problems that turned up in benchmarking 2.2 have been fixed (because they turned up in benchmarking 2.2), and preliminary benchmarks of 2.4 (eg the recent SpecWeb result where it nearly tripled Windows 2000 on a similar 4 CPU box) indicate this.
Now will 2.4 be ready for the enterprise, as they like to say? Not really. First of all, until it has been through a few point releases, I would expect some significant bugs. (To be expected in any software.) Aside from that issue, it lacks many manageability tools, a volume manager is missing, more work needs to be done on failover, journaling filesystems are needed, etc. I have been convinced by Larry McVoy's argument that further work on SMP is not needed; NUMA (done through clustering and virtual operating systems) is.
These are known problems. Work is being done on them. However there will be room for complaint about Linux vs more mature systems for some time to come. However problems are getting solved, and Linux is moving up the food chain, fast.
Regards,
Ben
Mindcraft issues still? (Score:5)
The "thundering herd" problem that was identified in Mindcraft and fixed in 2.4, isn't that still present in RedHat 6.1? (BTW calling it "Linux 6.1" really irritated me.) That could explain a sudden drop-off. It is not a problem, not a problem, then suddenly becomes a problem and as soon as you get a slow-down, you get a real traffic jam.
Just guessing...
Ben
Why is it always Linux v Windows... (Score:5)
I know this sounds strange, but when I'm looking at designing a high-transaction application or site I don't even LOOK at Windows or Linux. Does it surprise me that Linux doesn't scale to the enterprise market? No, it's written by individuals for the lowish-demand systems that they require, rather than by Company A who is implementing for Company B something that will cost several million pounds of development.
These sorts of tests are IMO unfair to Linux. Should you use NT/W2K or Linux for your high-transaction application/site? The choice is more normally "Should I use Tru64, AIX or Solaris?".
Linux works great for me as a webserver, as a client who takes a limited number of hits at a cheap price. If you want to scale you buy more boxes.
On the back end use a large end server with lots of RAM that has a massive IO throughput.
Does Linux really want to compete at the levels of AIX and Solaris ? Why not go for the niche, of cheap, reliable, and easy to scale horizontally.
Re:Mindcraft issues still? (Score:5)
Yeah, the box was dual cpu and dual ethernet card, designed to show the weaknesses of linux networking as of the 2.2 kernels.
However, as more recent benchmarks show, [slashdot.org]the soon to be released TUX package (from Redhat, GPLd) does extremely well in multi-cpu multi-ethernet card environments. These changes are likely to become embedded in Apache.
I'd be really surprised if anyone has an x86 OS that could beat the one Ingo Molnar set up for the SPECweb tests. It more than tripled the Windows machine under unrealistically high loads with flat-file service - 4 CPUs, 4 Gigabit Ethernet cards.
There are also issues about scheduling for high loads such as the one in the ZDNet article that have been addressed by a patch from IBM.
Linux Tweaks (Score:5)
But they also did this:
echo 100 5000 640 2560 150 30000 5000 1884 2 >/proc/sys/vm/bdflush
From Table 2-2: parameters in /proc/sys/vm/bdflush, shown as (default/tweaked) values:

nfract (40/100): percentage of buffer cache dirty to activate bdflush
ndirty (500/5000): maximum number of dirty blocks to write out per wake-cycle
nrefill (64/640): number of clean buffers to try to obtain each time we call refill
nref_dirt (256/2560): buffer threshold for activating bdflush when trying to refill buffers
dummy (500/150): unused
age_buffer (3000/30000): time for normal buffer to age before we flush it
age_super (500/5000): time for superblock to age before we flush it
dummy (1884/1884): unused
dummy (2/2): unused
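As a side note, here's a quick sketch for reading the current values back out, assuming the nine-field 2.2.x layout shown in the table (the labels simply mirror that table):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/bdflush", "r");
    if (!f) {
        perror("/proc/sys/vm/bdflush");
        return 1;
    }

    static const char *name[9] = {
        "nfract", "ndirty", "nrefill", "nref_dirt", "dummy",
        "age_buffer", "age_super", "dummy", "dummy"
    };
    for (int i = 0; i < 9; i++) {
        long v;
        if (fscanf(f, "%ld", &v) != 1)
            break;
        printf("%-10s %ld\n", name[i], v);
    }
    fclose(f);
    return 0;
}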
Tweakers may also be interested in reading
2.2 kernels used (Score:5)
ServerBench is not available in source code, and the testing was done by ZDNet. From what I know about ServerBench it uses a threaded I/O model on NT, but a fork/process model on Linux. The Linux 'solution' is coded by ZDNet, with no way for us to influence or comment on the design and approach used at all. Even under these circumstances we expect the 2.4 Linux kernel to perform significantly better in ServerBench than 2.2 kernels. The 2.2.1x (and late 2.3.x) kernels had some VM problems, and with increasing VM utilization (more clients) this problem could have been triggered.
SPECweb99, OTOH, is a standardized benchmark with full source-code access (ServerBench is closed binaries), so all SPECweb99 implementation details are visible.
Nevertheless it's technically possible that ServerBench triggers performance bugs in Linux - we'd love to see the source to fix those bugs ASAP, if they are still present in 2.4.