NT vs. Linux: Again
Jeff Molloy writes "The results are here [link]." It's a shame Linux didn't win, but it looks like the tests show where Linux might have some deficiencies. Overall, it looks better than the original test, though.
Re:Good analogy (Score:1)
Wouldn't more people benefit from MS management seeing to it that some QA/testing people get hired?
How about fewer new "features" and better implementation of those that are already there?
Re:what makes NT faster? (Score:1)
Re:Maybe it's the compiler? (Score:1)
Why should anyone? VC++ is basically a one-platform solution (x86) while GCC pretty much runs on everything under the sun. GCC runs on the Amiga. Does VC++? Nope. When you are dealing with a compiler that runs on multiple platforms/processors, the kind of optimization you are talking about can be a *REAL* headache to deal with and should be steered clear of...
Re:BZZZZT (Score:1)
>This is a pyrrhic cheat - you can't use it on a real web server. It has nothing to do with CPU scheduling and is purely a hack to optimize benchmarks for intranet requests.
How do you know MS are doing this (cheating)? Can't you just say, oh well, our TCP/IP stack needs to be multithreaded? It seems like you are trying to mislead, introducing little hints here and there that this was all faked. And how do you know that CPU scheduling is the only consideration in doing this, because the only algorithm that exists is serve-upon-request?
In that case, it would be interesting to see Linux against NT running a different web server. We've already seen that the bottleneck exists in Linux even when using a different web server. Certainly, if we were to see that, say, Solaris kicked NT and Linux's ass, you wouldn't suggest it had something to do with Sun running round cheating.
Since it was shown (dispute it as you wish) that Linux's bottleneck is its TCP/IP stack, I don't see that your argument about algorithms has any relevance in this thread.
Beat 'em at their own game (Score:1)
I'll neither agree nor disagree with that as I have no knowledge in the subject. What I will do is to offer this thought:
If this hardware really was where NT shines, what happens when linux gets tweaked to take better advantage of it? The Microsoft folks have nowhere else to go.
So, I say to you folks, take heart. Accept this setback for it is not defeat. Remember, that which does not kill us...
troll: go fuck yourself (Score:1)
Why don't you just stay at winfiles.com and stay the fuck away from slashdot?
NT stability vs. Linux stability (Score:1)
/* Steinar */
Re:what makes NT faster? (Score:1)
Microsoft will just keep inventing benchmarks that happen to make NT look better. Nothing can be done about it, other than observing that those benchmarks will become less realistic every month.
The only way around this would be for Linux (Apache and Samba) to copy the same "unapproved benchmark veto" clause which makes the publication of truly independent benchmarks unlikely.
Re: I'd rather have perl do me (Score:1)
Re:Got a fishing license? (Score:1)
All I'm saying is that "Linux is free and NT costs $$$" is NOT a very good argument.
Re:NT and Linux differences. (Score:1)
>NT has chosen performance over stability.
I wouldn't put it that way... Perhaps a better way to phrase it would be: Linux has chosen a pessimistic approach to application stability over general performance.
Linux's use of processes vs threads only has merit if you assume that the processes you are running have bugs (and will crash). It really seems to suit the open-source model to be more optimistic concerning application code and give it the benefit of the doubt along with a hefty performance boost (in the form of threading). Does anyone doubt that Apache or Zeus could use the thread model, remain stable, and thereby match NT's performance?
Oxryly
Re:Didnit I read ESR talking about this... (Score:1)
To paraphrase Mr. Torvalds....
Microsoft is just being a good Linux user and reporting bugs. The same can be said about ZDnet.
Getting your butt kicked every once in a while (metaphorically speaking) can be a good thing. It keeps one from becoming arrogant and complacent. It can motivate you to do better and try harder.
Just think how bad American cars would still be if the Japanese hadn't come into the market. (Not to say they are the best, but they are a hell of a lot better than they were 10 years ago).
What redhat had to say about benchmarks. (Score:1)
They said one very important thing. Mindcraft was not able to duplicate their results.
About the ZDnet benchmarks: they happen to agree with them, but not in a negative way.
They reminded us to be like Linus, who has kept a sense of humor and perspective about all this.
According to the folks at Red Hat, Linus said...
"Microsoft is just being a good Linux user and reporting bugs. You can say the same about ZDNet."
I believe that ZDnet was trying to be as fair as possible. They firmly believe in the future of Linux, and have stated that publicly several times. They are doing their part in helping it to become a better OS through constructive criticism.
IMHO MS is going to lose out in the long run as long as the Linux community remains honest about its shortcomings. No multimillion-dollar spin masters to hide the warts. No FUD. Just keep getting better and better, and Linux will win.
Re:what makes NT faster? (Score:1)
I would never expect anyone on Slashdot to write "If raw speed is your monkey, then NT is the tool."
Hehehe... Well, but we can't dispute that. Right now NT is faster.
Re:Linux is not the fastest. No excuses. (Score:1)
Oxryly
Re:You guys sound so lame (Score:1)
LOL
1. Win2K's interface is not improved. It sucks. NT4 was good. I get paid to admin NT4. I like NT4. Win2k is a major step backwards in usability. The ungodly number of wizards in NT5 (oops, win2k) makes it impossible to do any real work. Sure you can turn them off, but the mere sight of them drives me nuts - it's like having 4000 of those fscking dancing paperclips. This is supposed to be a server OS - wizards don't belong on a server OS.
I have to disagree there. I think the W2K GUI is a slight improvement (the GUI here, not the tools). The new MMC is great. You can administer everything from one program (including adding devices, reading event logs, adding users, making shares, etc). What's more, you can do it to remote machines... seamlessly.
2. Win2k's performance. This sucks too. Win2k takes ages to boot. Once it's up, using Office 2k takes far longer than NT4 + Office97 ever did. My box is a PII 450 with 128MB of RAM; I know that's not enough for the 2k products, but the company won't splurge for an upgrade.
I think it's wonderful!!! My W2K box boots in no time, and it boots in even less time when I use the cool new hibernate (memory to disk) feature. Yummy. I'm running a K6-200 with 192MB RAM (I did have 64MB, but it was a bit sluggish with all the services installed).
3. Stability. This is anecdotal, but I've had more lockups (5) and blue screens (1) with NT5 than I had on the same box with NT4 (3 lockups and 0 BSODs) - admittedly it's still in beta.
Wow, I haven't had any bluescreens except one where I installed an unsigned NT4 driver I was warned not to install. After that, everything else was perfect... it's been running for weeks with no BSOD problems (it's more purple now, though).
4. Ease of development. There is a special place in the most fiery pit of hell for someone who names a function RegisterServiceCtrlHandlerW(). Don't tell me that Win32 makes life easier for developers. It spawns carpal tunnel is what it does.
Again I disagree. Windows is the most developer-friendly OS... even 90% of *those* Java developers use Windows. It's got brilliant IDEs, which make up for the long API names, but remember, VC++ has IntelliSense, so you don't have to spend too much time typing, or going round documentation trying to remember what arguments you need to pass.
I'd gladly have long function names rather than horrible IDEs without IntelliSense! Besides, RegisterServiceCtrlHandlerW makes perfect sense
as for your experience at MS...uh,
Re:Linux is not the fastest. No excuses. (Score:1)
Meanwhile, though, I run W2K beta 3 as a development system to be productive, and it does everything I need it to do, quickly and without bugs or crashes. Harrumph.
Oxryly
Re:MS is afraid, Very afraid (Score:1)
Re:Pricing is the most important thing (Score:1)
Re:Lock granularity (Score:1)
Of course, that doesn't mean it wouldn't hurt to beef up the scalability, and that's what's planned for 2.4/3.0 anyhoo...
-grendel drago
Where is the configuration document ? (Score:1)
Re:I am SHOCKED! (Score:1)
Re:Can anyone do math? (Score:1)
and
"...Oh, if you're serving up >1800 files per second of 2k files, who are you?..."
Flying Crocodile, Inc. (www.flyingcroc.com). We have FreeBSD/Apache machines running at 27Mb/s. These are off-the-shelf Pentium II boxes - OK, they have 1GB of RAM and UW-SCSI drives... but single-CPU boards. With the new FreeBSD 3.2-RELEASE the above-mentioned box runs with a load of 3.3 and over half the CPU idle.
The numbers that you talk about are the numbers that I deal with every day. We would never think of running NT. With over 130 servers we couldn't afford the massive staff to sit around and reboot the boxes all day and night... that, and I would hate to have to wire a monitor/keyboard/mouse to each box!
The servers together do over 145M hits per day and thank god the Cisco GSR12008 is shipping next week, the three 7507's are hammered!
"...Oh, one more thing. If this is all on an intranet, you'll still need Gigabit ethernet if you're serving up the 10k+ files..."
More like two GE links to Frontier and Teleglobe, with various other T3's, getting the job done. BTW: before you start crunching numbers, not all of the 130 servers are cranking 27Mb/s; many are doing massive database work.
Another interesting number, at our peak in the day we route about 79,000 packets per second, figure that about 1/4 of those are http requests. Peak total for today was 419Mb/s using mrtg.
IMHO: I used to be a Linux nut, and I still use it for desktop work, but FreeBSD kicks ass when it comes to serving. If you think my numbers are crazy, Yahoo trucks twice the bandwidth; no wonder they use FreeBSD too.
The tests have been done by those whose business is to crank out the hits. Most use neither NT nor Linux.
If you doubt this post, just do a looking-glass on 207.246.128.0/20... the connections exist.
Re: Yes, it turns out vader was a good guy (Score:1)
Re:why don't you read the article again (Score:1)
Re:Linux blows period. (Score:1)
At my university (UNM), just the SRC computer pod with 16 clients seems to get somewhat confused if you do something rash on one of them, like daring to open Netscape or something. NT sucks.
Re:what makes NT faster? (Score:1)
Re:Some things to keep in mind (Score:1)
My point is this test only set out to show that, given a certain hardware setup with excellent *theoretical* performance for an interesting task (web and file serving), the OS and application setup that gives the best *actual* performance is XYZ. In this particular case XYZ happened to be NT with IIS.
There are a significant number of supplementary issues that any potential OS customer must consider in addition to the information derived from the results of this test. That should in no way detract from the importance of the results of this test.
And importantly, as members of and potential contributors to the open-source movement, the results of this test give us an excellent report card on the progress of Linux development. I really don't think the results of the test should be excused away for any reason.
Oxryly
Re:ality check (Score:1)
When two or more tasks legitimately belong in the same address space (e.g. must manipulate the same objects in memory), sure, multithreading is the way to go.
When you're just kludging a substitute for fork() because spawning processes is too expensive on your architecture, that's not so good from a stability or security standpoint, even if spawning new threads is cheaper.
None of this has anything to do with the lack of multithreading in the Linux IP stack; adding that would undoubtedly be a good thing.
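A minimal sketch of that distinction (my own illustration, not from the post; Python, and Unix-only because of fork()): threads see each other's writes to shared memory, forked processes don't.

    import os
    import threading

    counter = {"n": 0}

    def bump():
        counter["n"] += 1

    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print("after thread:", counter["n"])   # 1 -- the thread mutated shared memory

    pid = os.fork()
    if pid == 0:                           # child gets its own copy-on-write address space
        counter["n"] += 100
        os._exit(0)                        # the child's change dies with it
    os.waitpid(pid, 0)
    print("after fork:", counter["n"])     # still 1 -- the parent never sees the child's write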
Re:A few things (Score:1)
Tests were done on P133's with 64M RAM, and BSDi walked all over NT.
http://www.BSDI.COM/press/19960827
Would be nice to see a rematch.
Re:Linux is not the fastest. No excuses. (Score:1)
They used an SMP patch. And then Linux was at least as fast as NT in every test, as long as there was only one network card in the computer. When they used two network cards, NT was about 100% faster than Linux. NT was very, very slow when using Perl scripts.
They also showed a test of Mac OS X. The results showed very good performance from Mac OS X, although the Mac hardware was not comparable to the hardware used for the NT and Linux systems (i.e. 128MB RAM in the Mac vs. 2GB in the NT/Linux server). But it seems that Mac OS X has a bug which causes a "system panic" when using certain CGI scripts.
Re:are you sure about smb in the NT kernel? (Score:1)
Re:I'm not sure (Score:1)
would win... which it did not.
Has the full configuration and "tweak list" been published at all? AFAIK we only know that the Linux team were specifically forbidden from performing certain tweaks.
Also, were any of the tests carried out with ordinary NT, which is a far better match for RedHat 6 than NT Enterprise? Or maybe someone should have got them to build a "RedHat Enterprise", with everything compiled for these high-end machines. (And incapable of running on low-end machines.)
Another of the original issues was logging: if this is being done properly then every SMB or HTTP connection will generate a synchronous write to disk. This will slow things down. AFAIK NT doesn't log anything to do with file sharing by default. In the original tests IIS was placed in a mode of buffering the logging information and writing it in chunks. (In the real world you may as well turn off the logging altogether as use this option.) Did Apache and Samba have all logging turned off (or directed to
Linux not bad under REAL-WORLD conditions! (c't) (Score:1)
I think one has to accept that at the moment NT is slightly faster considering the maximum output of a web or file server.
But as the German PC weekly "c't" found in their own benchmarks (issue 13/99, pp. 186), Linux is still a very good choice under real-world conditions. They tested SuSE Linux 6.1 and NT 4.0 SP4 on a 4-Xeon-450 Siemens machine. The main difference between their configuration and the Mindcraft one was that they had just one Ethernet card (instead of FOUR!) in the system.
They said it was not realistic (except for a few intranets maybe) that a web server has to serve more than 100 MBit/s, or even more than 10 MBit/s. Under these circumstances Linux was slightly faster with static web pages and much faster at serving CGI. However, c't didn't test MS IIS with ASP (hard to find a fair benchmark between Perl/CGI and ASP anyway).
Only when they tested the system with a second Ethernet card, simulating loads similar to the ones in the Mindcraft tests, was NT significantly better (and it scaled across the CPUs much better than Linux).
What they also found out is that NT was much worse at serving from the HD instead of from memory (maybe because they also used one big partition instead of smaller ones, which seems to slow down NTFS). The bottom line: Linux with Apache is a very suitable and fast system for real-world (mid-size) web serving needs, especially if you have to deal with a lot of dynamic pages (like on Slashdot).
IP bug? (Score:1)
Why aren't they back-porting that multi-threaded IP bug thing to 2.2?
---
Put Hemos through English 101!
Well now we know what to fix (Score:1)
test this sort of thing). We now know where the problems are (OK, we already did). Let's fix them and then challenge for a rematch.
MS is afraid, Very afraid (Score:1)
I think Linux will come out ahead in the long run. It's only a matter of time.
Our day will come... (Score:1)
It does disturb me somewhat to see that Linux loses on the single-proc box, but this seems to come down to the tuning. Out of the box, Linux is faster (as other benchmarks have illustrated), but when tuned, NT is better.
I think they ought to make this an annual competition and see how they match up every year. I bet next year the results won't be so slanted in MS's favor.
---
I have an idea (Score:5)
If you can't help program, then go out and test all this new stuff and send in bug reports. Let's have Linux set the standard again. It seems like, according to the article, it was this way once, and we lost it because Microsoft pushed the bar a little higher and we lagged behind.
Re:NT and Linux differences. (Score:1)
"Linux has a micro kernel. There are only these things in it, which are needed".
Re:what makes NT faster? (Score:1)
Re:Can anyone do math? (Score:2)
Unfortunately, I wasn't referring to anything so fancy. I was more just being sarcastic, because the throughput difference between Apache and IIS is hardly ever going to be the deciding factor.
(In the largest NT/IIS setup I've seen, there were three actual web servers. They were 'clustered' only at the switch level. The assumption was that one of the servers would be down at any given point in time. A desktop box was running software which checked if IIS was running, and if it had died, attempted to restart the service. If that failed, it rebooted the box.)
--
Linux PR and Trash-Talk Numbers (Score:2)
At the end of the day, the smart companies have only two questions about IS technology:
1: Can I do more with this?
2: Can I do the same job cheaper with this?
All the other numbers are indirect data, trash talk. Management--especially smart management--doesn't directly care about MIPS, MTBF, or benchmark numbers. They care about the two questions above, and care about the other numbers indirectly because those numbers tend to be good predictors of the answers to the real questions. In this business, where almost everything is potential, these early indicators are very important, because you can't yet get good answers to the top two questions.
You have the same thing in sports. You can measure free-throw percentage, height, weight, slugging average, save percentage, and a host of other details. But at the end of the day, only one question matters: how often do you win? All the rest are trash-talk numbers--good predictors, but not the bottom line.
In sports and business, you have to have those trash-talk numbers for people to give you a chance. If you weigh a trim 175lb, nobody in their right mind is going to make a nose tackle out of you--you won't get the chance to show the coach that you can topple the 325lb center. If a product has enough benchmarks damning it, the vendor will pull support and recoup its losses.
This is why Linux can ignore the trash-talk and go straight to increasing capabilities and lowering costs. Linux isn't a business; vendors cannot cut all support. Nobody has the power to tell Linux that it cannot enter the IS world. It can't get cut, and can only get discontinued if every Linux geek in Creation decides to spontaneously drop it. Red Hat and Caldera can go belly-up, Torvalds and Cox could be swallowed up in earthquakes, and Linux will keep on existing.
So long as Linux exists, it can win. With the development advantages it has, it can win well. It needs a foothold in some IS shops; it's getting that, or has already gotten that.
If Linux wins, it is going to start by revolutionizing an IS department. Some big gun like AOL will see the potential and let it start taking over the infrastructure. It will work. Forget the runtime, forget the performance, it will do the job for cheaper. In the business world, such success gets copied. People look at the company that pulls this off, ask how they do it, and see a room full of Linux boxen.
The IT budget will convince more smart managers than any amount of benchmarking will.
PR is still relevant, but only in the short term. Good or bad PR can accelerate or slow the rate of Linux installation. In the long term, however, the success of Linux will have nothing to do with the benchmark numbers and have everything to do with the budget numbers. If Linux can do the job cheaper, it will win. If it can't, it will remain a hobby OS.
But the good news is that, unlike a corporate product, short term effects cannot destroy the long term picture. Linux will have all the time it needs to fit into the corporate structure to its best abilities.
Fantastic... Linux is winning! (Score:2)
Who in real life will have this kind of hardware and get that kind of support from Microsoft?
Boy are Microsoft in trouble!
Reality and Fantasy Land... (Score:2)
However... for very small organisations, I run an ftp server, web server, internal DNS, NIS, SMB (there is one Win95 machine), etc. for a small network comprised almost entirely of old 486s w/16MB mem and 400MB HDs, and Linux is the _only_ choice. NT wouldn't run on these machines and 95 isn't pretty! My home LAN cost me less than $300 for 6 machines, hub, cables and all, plus $1600 for my main system (which I bought new and is now a rather outdated P166 w/8.4G HD + 3.5G HD, 64MB RAM) - 7 machines in total. It performs great for my needs. If I were a small business, I think that I would have to think twice or three times before outlaying large sums of money to M$ for a system that was so far over my needs, instead of using a system that would cost me so little.
Linux offers computing 'solutions' where NT offers computing 'problems'.
Bang/$, Linux will always win. Cost of upgrading since Linux 1.x, software and all... $0.
Cost of upgrading since DOS, for M$ software and all?...
Anyone care to speculate?
Do we need to improve Linux' high-end performance just for the sake of benchmarks? Possibly not, but it wouldn't hurt.
Re:Oh yeah? My Experience... (Score:2)
I don't know if this helps, but I've been able to trace a couple "solid lock" NT problems to SCSI cabling problems. One of these was on a new Dell server that shipped with a loose cable. NT doesn't seem to handle SCSI issues very well.
--
Can anyone do math? (Score:5)
1800 hits/sec * average 2k/hit * 8192 bits/kbyte = 29,491,200 bits/sec, or 29.5 MBits/sec. What's that now, a T3 line? I know that a T1 line is 1.5 MBits/sec. OK, so Apache on one of these boxes can fill the equivalent of 19.6 T1 lines by itself. If (a bit more realistically - how many 2k files get those types of hits?) those are instead 10k files (let's not get into pictures), that's 147.5 MBits/sec, more than filling a T3 line, IIRC, and definitely filling approx. 98.3 T1 lines.
What's the problem with Linux/Apache, now?
May I suggest, if you can afford this sort of bandwidth, that you buy one of those 32-CPU Sun E10000 servers and call it a day? (Or a server farm of Linux boxes, since you're serving up static files.)
Oh, if you're serving up >1800 files per second of 2k files, who are you?
Oh, one more thing. If this is all on an intranet, you'll still need Gigabit ethernet if you're serving up the 10k+ files, so the sun box still applies to you.
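(For anyone who wants to check the arithmetic, here it is spelled out - the hit rates are the ones quoted above, the line capacities are standard figures:)

    hits_per_sec   = 1800
    kbytes_per_hit = 2
    bits_per_kbyte = 1024 * 8                  # = 8192

    bits_per_sec = hits_per_sec * kbytes_per_hit * bits_per_kbyte
    print(bits_per_sec)                        # 29491200, i.e. ~29.5 Mbit/s
    print(bits_per_sec / 1.5e6)                # ~19.7 T1 lines at 1.5 Mbit/s each

    # The 10k-file case:
    print(1800 * 10 * bits_per_kbyte / 1e6)    # ~147.5 Mbit/s -- well past a T3 (~45 Mbit/s)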
Re:Some things to keep in mind (Score:3)
Price/Performance
It's all related to how much you pay your admins and how well they administer your system. This isn't a function of the OS. Yes, Linux costs less out of the box, but an NT admin is going to have a harder time (and thus charge more) setting up a Linux system than an NT system. If a business currently has functioning NT systems and competent NT sysadmins, why should they switch to Linux?
Clustering
How many small businesses who are choosing between Linux and NT need to, want to, or care about the ability to cluster? People who care about this benchmark are not the same people who need to run clusters.
Other Hardware Configurations
How much would it cost for a company to build a Linux-happy system? Most systems built today (and the systems that we want Linux to run on) are built for Microsoft. You'd need a custom-built, custom-designed solution to truly grab all of Linux's power, and that costs money, either in man-hours or purchasing power. The results of this test would've been far more atypical if they had built both machines finely tuned for Linux. At least this time around, they weren't blatantly geared towards Microsoft.
Security
Security, I'd say, is 75% system administration and 25% OS. Linux has its security problems as well, most of which can be plugged up with effective network management. Many of NT's can, too. MS may be a lot more apathetic to security concerns, but they don't run the systems, they sell them. I don't consider Linux or NT any more secure than the other.
Stability
Stability can be completely a function of management. I've heard stories of Linux systems staying up for months or years. Guess what, I've heard the same stories about NT as well. I've also heard stories about unstable Linux systems. I've seen no long-term studies of system stability, so everything I hear about stability I file away under anecdotal evidence, not hard verifiable data.
Change real world needs
What's good for the goose is good for the gander. I don't see how this benefits Linux. Change the system and, whoa, Linux might perform worse under that setup. It happens to both types of OSes, and before you say, "It happens to Linux less!" find some hard data, not stories.
The Future
Past trends do not determine future performance. I doubt Linux will keep up its 212%/year growth and Linus has already said that upgrades aren't going to be as drastic as 2.0 to 2.2. Don't assume that Linux will advance in the next three years as it has in the past three years.
Re:Better but not quite (Score:2)
Perhaps, while you are creating your new law, you would take the time to spell "Beowulf" correctly? I usually avoid spelling corrections (although lord that's difficult some days) but if you're going to call someone a "dumbass" and a "moron" then perhaps you should be concerned with how you appear as well.
This spelling flame contains no tyops :)
Didnit I read ESR talking about this... (Score:2)
--------------------------------------------
Be careful about how you spin (Score:2)
But we need to be careful. If you'll note, the article says that Samba won in the earlier SMB tests because there was a performance hit in NT due to the transaction log - which is a stability/robustness feature that Linux simply lacks, and would be better off having if availability and fault-tolerance are the primary design goals.
We're treading on dangerous ground... PR is like a game of chess, and the community needs to be careful about spouting out this kind of spin, which can quickly become a rallying point and then be proven foolish if it isn't well thought through.
Re:Pricing (Score:2)
Your economic argument doesn't "scale" beyond small business, however.
Here's why - any company with more than a few hundred seats has a site licence contract with Microsoft. The cost is much more dependent on client seats than on the number of servers. This is to cover the client OSes and MS Office.
The cost of extending the contract to add a few additional NT servers to the mix is minuscule. Compare this to the cost of hiring capable Unix admins, and for any medium-sized business, you're not saving any money with Linux.
--
Re:Huh? (Score:2)
It's entirely possible some people speak English as a second language.
Re:Lock granularity (Score:2)
Ouch.
It means what? (Was:Static page requests, BAH!...) (Score:2)
There are probably a hundred (a thousand?)
If Micros~1 really wants to beat Linux in general purpose operating system performance, they need to take this approach with *all* other applications. Start by integrating BackOffice, the rest of IIS, IE, Office (why restrict this brilliant strategy to server-only apps? MS should surely strive for the fastest desktop also) and their other in-house applications into the kernel. Then they will FLY!
Of course, *some* of this is actually good from an engineering perspective. Common functions that are essential to the performance of standard and widely used services -- and can be significantly improved by moving them into kernel space -- may justify this approach. Large chunks of application-specific functionality, however, will weigh down non-users of those apps and compromise stability for those who do use it.
Realistically, what I think MS has done here is create a "benchmark special". They have picked two high-profile applications and integrated them into the kernel a little too intimately so they can claim that NT in general is faster than Linux. The actual usefulness, of the web server speed-up anyway, is questionable. Do *any* sites actually serve that many static pages? And how many of those sites can afford the instability that such approaches bring?
Sorry, Microsoft. What you have created is an NT/Web server/file server combination that is faster than Linux in those same areas. That does not make NT the faster operating system -- and it most certainly doesn't make it the better operating system. Meanwhile, you have pointed out what are now high-profile areas of minor weakness in Linux performance. Those will be fixed -- and fixed correctly. Thanks.
"Hey Boss, I can save you 300 grand..." (Score:4)
"When every minute of downtime can mean millions of dollars in lost revenue, companies generally rely on applications that run on OS/390, Tandem NonStop Kernel, Digital OpenVMS, or Unix operating systems. But Windows NT is increasingly being deployed... so IT managers must find ways to increase the availability of their NT environments. To do it, they're adopting products and services that promise to provide extra protection..."
" 'Any system with lag time is unacceptable for running the application' says William Harris, NT Administrator for the Ohio Utilities. 'Money wasn't even a big deal. I's rather get quality and reliability and availability'. The organization...paid $75,000 to implement the (third party protection) system.
Translation (for those who need it): Management is telling IT they have to transition to NT. IT says, in order to be stable, we have to add third party help. Management says: "Here's a blank check."
It goes on to say that Unix, w/o third party software or service achieves "availability in the 99.9% range, as opposed to 97% for NT."
Now, what's the difference to a business between 97% and 99.9%?
IBM's NetFinity Availability Program guarantees 99.9 w/ NT. Cost: $220,000.
HP Mission Critical guarantees 99.9 with NT for a mere $300,000.
Imagine going to your boss and saying "Hey, how'd you like to save $300,000?"
JL Culp
Business Technology Consultant
Chair, LPSC
Re:MS is afraid, Very afraid (Score:2)
Linux should be afraid. Fear is the perfect motivator. Does it suck being afraid all the time? Perhaps. But it keeps the mountain climber on the mountain, and keeps your users from being afraid of falling behind (for whatever reason they need to move ahead).
Re:MS is afraid, Very afraid (Score:2)
Where I work, it's a firing offense to use the servers for your personal use, like web browsing. Most other companies take a similarly dim view of such activities.
Re:Pricing (Score:2)
Not everyone works out of their bedroom.
Re:Our day will come... (Score:2)
"Linux: Do it your damn self, and stop bothering us."
Re:Microsoft's capabilitys.... (Score:2)
Please to be pointing out the PhD theses written by any of them?
My expierences w/ Win2k beta 3 (Score:2)
(First and foremost, these are just my impressions of Win2k...not cut in stone by any means)
First, my computer is a P200 MMX, 64 megs, ~1 gig NTFS, ~2 gigs ext2. W2K found all my devices and configured them almost perfectly. The only thing it didn't get was my Voodoo 2, but I can run GL Quake in Linux
The system runs faster than NT 4 ever did. Some of you may then scoff at NT 4's performance, but let me say this: I started using Linux because NT 4 was too slow. W2K (approximately) matches the speed of Linux in performing tasks (starting WP vs starting Word97). There's one other nice change: it hasn't BSoD'd yet. It's stable and quick.
Now, for all you Linux zealots: problems w/ win2k.
It's a beta. I understand that. But it really shouldn't stop being able to look up things via DNS. It's an infrequent problem, but it's annoying.
Next, it does kinda take over for you too much. I was surprised after a while of using W2k that my application icons in the start menu had disappeared... Windows had a cheerful message telling me that it had optimized my Start menu. I really would have preferred it if I could have asked it to do that for me, but ah well. Next, I used to run NT in 1600x1200 perfectly. W2k seems to have trouble drawing at that resolution... I had to revert to 1280x1024 (FYI, it's a Matrox G200 SD, 8 meg - drivers come with Win2k).
Conclusion: if MS can clean up the problems, Win2k will be *very* nice. Although it can't run servers up the wazoo like Linux can (then again, NT Workstation was never designed to run servers, and therefore shouldn't be tested, IMNSHO), it runs well, far better than any previous MS OS.
Note to MS: Open up the source to NT/W2K. Open Source development of NT would speed up the removal of bugs, and I would think that NT would probably speed up as a result. Plus, if the good of Linux and the good of NT could be mixed together into a GPL uber-OS, I would be happy... hell, I would even pay for it...
--------------------------
Re:ZD benchmarks (Score:2)
Maybe when you were working there, you thumbed through a ZD publication or two.
What do you know? They're full of ads for Intel and NT-based products. Big Super Surprise!
--
Maybe it's the compiler? (Score:2)
Nobody has commented on this fact yet: although GCC is one of the most portable compilers, its RTL generation routines aren't well suited to the register-poor x86 architecture. The main difference, however, is the code scheduler. GCC doesn't do much P6-style optimization, whereas VC++ in conjunction with VTune from Intel is quite an effective optimization tool for the x86...
It would be an interesting (but unfortunately impossible) task to find out how much of the difference is due simply to the difference in compilers...
Just my 2 cents' worth...
Kind of shortsighted (Score:2)
Isn't that kind of shortsighted?
Linux has many advantages over NT, including increased configurability, open source, GNU utilities, etc.
A few skewed benchmarks don't mean anything. There are benchmarks out there that show Linux beating Windows NT. Benchmarks don't mean that much when there are a lot of other benefits to a platform. It's kind of bad to base your decisions on one benchmark when you have to consider everything else that you get with the system.
Samba? Samba?? How about NFS? (Score:2)
Fine. Why don't you design a benchmark that you think the free Unixes will do better at? I mean computer performance, not price performance. :-) One obvious thing that comes to mind is this risible Samba thing being replaced by NFS. Another is generating dynamic pages instead of static ones. But that's just the start. What else would prove interesting?
And what about running a variety of operating systems on the same hardware? What about BSD? What about Solaris for an x86?
Lock granularity (Score:2)
Linux, on the other hand, was originally written for single proc. As I understand it, they only recently started supporting SMP -- and then by having large granularity locks that keep multiple processors out of huge sections of the code at a time. (The article talked about Linux having a single lock around the whole TCP/IP stack!) To fix this, you basically have to go over every line of the code and only lock the things that need to be.
The interesting thing to me, is whether the Linux development model will support this well. Writing SMP code is much harder than single proc code. All those race conditions, deadlocks, and missed data contentions to worry about. People really have to understand what they're doing to get it right. Already there's complaints about the 2.2 kernels not being as stable as the earlier single big lock kernels.
Of course lock granularity doesn't explain the whole picture. NT still trounced Linux pretty badly in even the single proc case. There, I suspect it's just a matter of Microsoft having a greater number of highly qualified people working on the system than Linux does. Not that Linux doesn't have any highly qualified people, but rather that MS can get more of them. Paying people for their labor actually seems to work sometimes.
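To make the granularity point concrete, here is a toy sketch (mine, not from the article). With one big lock, extra CPUs just queue up; with a lock per connection, unrelated requests really do run in parallel:

    import threading

    # Coarse-grained: one lock guards the whole "stack", like the single
    # TCP/IP lock the article describes.
    big_lock = threading.Lock()

    def handle_coarse(conn):
        with big_lock:          # every CPU serializes here, even for unrelated connections
            conn["packets"] += 1

    # Fine-grained: each connection carries its own lock.
    def handle_fine(conn):
        with conn["lock"]:      # only threads touching the *same* connection contend
            conn["packets"] += 1

    conns = [{"packets": 0, "lock": threading.Lock()} for _ in range(4)]
    threads = [threading.Thread(target=handle_fine, args=(c,)) for c in conns]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sum(c["packets"] for c in conns))   # 4

The catch, as noted above, is that every one of those little locks is a fresh chance for a deadlock or race that the one big lock made impossible.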
Re:what about single CPU tes (Score:2)
Good question, but as I wasn't there (I was invited, but declined as I was giving a talk at the Paris Linux Expo) I can only speculate.
I doubt it was the context switches, as in all the tests I've done on NetBench these are down in the noise. In this case it may have been the filesystem, as it takes some tricks to ensure you are running with an optimal ext2 setup (and remember, there were NT kernel people there tuning the NTFS setup). But Ingo at RedHat has done quite a bit of work on this, so I'm still hopeful for a 2.4 re-test.
Regards,
Jeremy Allison,
Samba Team.
One more advantage (Score:2)
How many different servers can you put onto one machine with NT? With Linux? What kind of performance do you get when you have a mail server, DNS server, Web server, etc. all on one machine on NT versus Linux?
These are all things to consider before dismissing Linux because of one benchmark.
Re:I'm not sure (Score:2)
---
Benchmarks are good for one thing: (Score:2)
Benchmarks are fine and great and all, but in all my personal experience, changing servers from NT to Linux gave everyone a performance increase... I know this is merely anecdotal evidence at best, but that's what has worked for me.
[Silly Analogy]
As for the samba tests.. it's something like this: Microsoft makes up a game. Microsoft doesn't tell you how to play the game. You try to learn the game... Microsoft beats you by a little.
[/Silly Analogy]
Of course, this test doesn't show reliability though... how long could they each handle those loads? Just the (what, an hour?) time it took to run the test, or 24x7 for 6 months?
Anyway, to incorporate every other post here: well, we'll get better.
Excuse me... (Score:2)
"What fails to kill me makes me only stronger."
-F. Nietzsche
Thank you. Now quit crying and start coding.
That is all.
Re:what makes NT faster? (Score:3)
I think this feature explains, at least in part, NT's superiority in multiple-CPU raw service.
A side note to flamers: please, PLEASE don't treat these results as suspect or corrupt. I don't think they are. Don't think of them as a defeat, think of them, like ZD said, as a roadmap to show where Linux needs improvement.
ZD benchmarks (Score:3)
------------------------------
NT and Linux differences. (Score:5)
NT uses a multithreaded process model for IIS and SMB file-services that results in higher throughput but less stability. A single thread of the main process may die without completely destabilizing the server but if the main process dies then all child threads die.
Linux divorces the graphical user interface from the kernel thus ensuring stability (framebuffers are available for video enhancement though) and implements most services as userspace daemons.
Linux uses the forked process model to provide services to multiple users. This model achieves stability in that if one process dies, the others continue as if nothing had happened. Both Apache and Samba operate in this way, I believe.
NT has chosen performance over stability.
I believe that with kernel enhancements and profiling, any bottlenecks in the networking system can be eradicated causing Linux to perform much faster and possibly even beat NT in tests such as these.
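A tiny demonstration of the isolation that the forked model described above buys (my own sketch, not from the post; Python, Unix-only): kill one worker and its siblings keep running.

    import os, signal, time

    pids = []
    for _ in range(3):
        pid = os.fork()
        if pid == 0:                       # worker: pretend to serve requests forever
            while True:
                time.sleep(1)
        pids.append(pid)                   # parent: remember each worker

    os.kill(pids[0], signal.SIGKILL)       # one worker "crashes"...
    os.waitpid(pids[0], 0)                 # ...reap it
    for pid in pids[1:]:
        os.kill(pid, 0)                    # signal 0 only checks existence; raises OSError if dead
    print("remaining workers unaffected")
    for pid in pids[1:]:                   # clean up
        os.kill(pid, signal.SIGKILL)
        os.waitpid(pid, 0)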
Threads vs Forking, speed issues (Score:2)
The kernel has had fundamental support for them for a longer period of time, but a reliable thread safe libc (and libX11) did not surface until recently (the last year or so). Even now the distributions are struggling to get everything converted over to the new glibc.
Even though linux now has good thread support, gdb has trouble with them (last time I checked, which was a while back). Also, apache and samba are not linux-only products. The safest and most portable approach to take back then was to use a forking model.
Other than taking up too much memory and thus causing swapping, I don't see why a threaded model would be any faster than a forking model if pooling is used. Pooling keeps a fixed number of processes running all the time and dispatches requests to them. When they finish, they sleep and wait for the next request. This way you have the safety of a separate process space and you avoid the forking (as in "you forking piece of sheet") overhead. You let each pooled process have a fixed lifetime in order to clean up any leaked memory. Some systems have libc leaks that can't be avoided, so this is important to long-term stability.
You might argue that a thread context switch is less expensive than a process context switch. Under NT (and 95), all threads are scheduled without regard to what process they belong to, so at most it could skip some page table changes when going from one thread to the next. I doubt that is the case, because threads jump from ring 3 to ring 0 and back while executing system functions (ensuring they have different page tables). Also, threads have a separate segment mapping for fs. A 3->0->3 context switch is accomplished by an interrupt in NT. One way to speed things up in Linux might be to have a way to pool system requests up in ring 3 and then dispatch them all at once in ring 0. NT did this with their GDI code. Doing this for all system calls wouldn't require a huge change in the kernel, but it would make user code harder to write, as system calls would have to be parallelized. GDI doesn't require any changes to user-level code, because there are no return codes to worry about for graphics drawing operations. Xlib has the ability to queue up commands as well. But that is not going to speed up Apache.
Another possible advantage to using a thread model is faster IPC. With something like Apache, there shouldn't be very much IPC going on except to dispatch a new request and possibly lock protect common files.
I have not looked at a single line of apache code, so I can't say for sure, but there seem to be a number of httpds running all the time, which would signify that it is using pooling. So why is it slower than the NT counterpart?
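For what it's worth, the pooling scheme described above looks roughly like this (a sketch of mine, not Apache's actual code; the port number is arbitrary and it assumes a Unix host): N long-lived workers all accept() on one inherited listening socket, so nothing forks per request, yet each request still runs in its own address space.

    import os, socket

    POOL_SIZE = 4
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(64)

    for _ in range(POOL_SIZE):
        if os.fork() == 0:                 # each worker inherits the listening socket
            while True:                    # ...and serves requests forever, no fork per hit
                conn, _ = srv.accept()     # the kernel hands each connection to exactly one worker
                conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
                conn.close()

    for _ in range(POOL_SIZE):             # the parent just supervises
        os.wait()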
Re:Our day will come... (Score:2)
The only thing your post proves is that you don't have any operational experience with NT Server. The 16 color VGA or S3 driver running is not exactly "causing slowdowns" on your server. And if something is going to crash on that server, it's certainly not going to be the video driver (unless you have a hardware problem).
You're starting with the conclusion (NT has kernel graphics, Linux doesn't) and working backwards.
--
Now's our chance... (Score:5)
But here is our chance to show the world [and MS] why Linux and other OSS projects are such a good idea. By quickly implementing fixes to the problems brought to light by these tests, we can prove how much better OSS is.
Proposal: annual or semi-annual benchmarking of NT [or the current MS server platform] and Linux [and any other OSes that want to compete, I suppose]. By doing similar tests regularly, we can show how efficient OSS can be at fixing current shortcomings [as if 24hr bugfixes aren't enough].
Just a thought. ;)
BTW: Sorry for the overuse of the "OSS" buzzword
I'd help with implementing fixes myself, but I'm not exactly an expert coder [I don't think "Hello World" will help Linux beat NT]
Ender
If at first you DO succeed, try not to look astonished!
Linux is not the fastest. No excuses. (Score:5)
But I don't think that this means that speed for web serving should be any more important. Getting back at Microsoft is not a reason to improve Linux in my book. There are many other fronts that Linux is heading toward, like the desktop, embedded devices, and handhelds. I can imagine that if Linux is tweaked for web serving more than normal, some test will find Linux useless for embedded devices or something else that is important.
Microsoft right now sees Linux as direct competition as a server. It will be nice to see Linux compete back but don't expect NT to stand still. There are other servers also. How does Linux compare to Mac OS X?
And no more excuses. Linux is not the fastest. Deal with it.
For now.
--
Re:NT and Linux differences. (Score:2)
AHEM, HELLO, BULLSHIT. This exactly DOES NOT happen under the Windows process/threading model, and IS what happens under most other models. I personally consider it a FEATURE when the main thread cleans up the subordinate threads when it dies. If the main thread dies, YOUR APPLICATION HAS QUIT. There is no reason to keep the rest of the threads around.
I defy you to show how EITHER model adversely affects system stability.
Re:MS is afraid, Very afraid (Score:2)
NT is a 9-second Mustang that has something major break every couple of runs.
Linux is a Toyota Supra Turbo (my example) that can make it down the track in 11s, but also corner, stop, and go hundreds of runs without problems.
Part of NT's speed is from specialized hooks into the kernel for IIS, and SMB. They traded stability for performance.
Linux' design concentrates on stability, rather than speed. No specialized proprietary hooks into the kernel that add complexity. Not quite as fast on the track, but you don't have it blow up every couple passes.
For the price difference between NT and Linux, you can always spread the load over an additional machine to get the performance and keep the stability.
There is no question for good administrators as to what is more important. I choose stability and well-roundedness over the 9-second Mustang any day...
I want to see a $1000 server comparison (Score:3)
"In the other corner, two cardboard boxes; one labeled 'Windows NT Server,' the other 'Microsoft IIS'..."
This all inspired by:
(Yeah yeah, apples to apples...)
-m
Re:NT and Linux differences. (Score:2)
Is there ANYBODY left on slashdot who knows what the hell they're talking about?
NT is a microkernel. They're embedding it now.
BeOS is a microkernel.
MacOS X is a microkernel.
HURD is a microkernel. (okay, that doesn't count)
Re:Can anyone do math? (Score:2)
1800 hits/sec * average 2k/hit * 8192 kbits/kbyte = 29,491,200 bits/sec, or 29.5 MBits/sec.
In other words, more than enough to saturate your 100Mbit ethernet line. (I think they used 4 NICs in the original test.)
I think you're making a much better pro-Linux argument than all of the folks here jabbering about the $1000 Linux webserver beating the $1000 NT server.
Essentially the only thing the benchmark shows is that almost no one has the sort of bandwidth that either IIS or Apache can put out. Perhaps for some internal solutions, but there, if you want blindingly fast, you're probably not doing your transactions over HTTP. Just bothering to measure this stuff is completely ridiculous.
I'm sure many of you write "system administrator" on your tax forms rather than "Linux advocate", so keep in mind that if you're ever faced with a problem that requires that sort of throughput, you can solve it with a cluster of NT/IIS boxes. Until you run into that problem, keep doing your job by using Linux/Apache without worry.
--
Re:NT is as stable as Linux (Score:2)
I have done testing which shows that NT is less stable under heavy load than Linux, even when using some 'beta' Linux drivers for our controllers. (Heavy I/O for extended periods: Linux has seen load averages well above 100 for extended periods without problems, while NT quite often BSODs when tests at these levels last for extended periods.)
This is not FUD, it is the truth.
It is ignorance, like what you are spreading, that is keeping Microsoft's pockets lined.
Linux wasn't the only thing tuned... (Score:2)
They didn't, however, mention on the front page the fact that they formatted the fileserving storage into 4 separate partitions to improve WinNT's performance, did they?
Although I can accept that Windows NT might possibly be able to beat Linux, the wording of that review doesn't make me particularly confident it was 100% unbiased.
On a completely off-topic note: while I was editing my preferences, the number of comments on this story more than doubled, in about 5 minutes. Wow.
Re:ality check (Score:2)
I certainly do not routinely see NT boxes performing in such a manner in the real world- and I think it's a very fair question whether even these crazy 4-way 4-ethernet-card monsters would stand up to real world conditions acceptably.
I understand one issue is latency- in other words, if it is faster for NT to serve 200 pages to one place and have another request sitting there for 20 seconds, it does it unhesitatingly to get the numbers measuring higher. Apache apparently is much more willing to pay attention to that one request sitting around getting old, and to balance out the load so that nobody gets too lagged. Of course, this is not being tested for.
This has nothing to do with MS having better people: it is almost entirely due to tradeoffs being made entirely in favor of benchmarks, just to get to a place where they can produce numbers like this and have people saying, "I suspect it's just a matter of Microsoft having a greater number of highly qualified people working on the system". Never forget that the benchmarks are by their very nature an exceedingly narrow view of what the job really is. As such, the numbers become meaningless - not only meaningless in the sense of 'I don't care, I'm sick of rebooting the thing', but meaningless in the sense of producing real-world results that measure up to what the benches suggest. It strongly appears that NT servers are capable of flurries of extreme activity, but also of lag pockets and serious unreliability issues - in other words, even if the machine has not crashed, your chances of getting guaranteed good response are not that great; the NT server is busy running around serving something it has cached to people in line after you, because doing that increases its benchmarks drastically. This consoles you not
Re:Linux FUD (Score:2)
Weeeelllllllll.... depends on the microkernel.
NT claims to be a microkernel, and from what I've read of the design docs, it kinda sorta is. But it loses on the device driver front, because a bad driver will bring down NT every time.
Professor Moriarty, we shall meet again (Score:2)
You think the Linux kernel coders roll over and play dead? I don't think so...
Re:It means what? (Was:Static page requests, BAH!. (Score:2)
Any Apache developer will tell you that's nothing to brag about.
Some things to keep in mind (Score:5)
These studies do not address price/performance. P/P is one of the most important metrics in making a purchase decision; these studies measured only peak performance. That the prices of the Linux-based and NT configurations tested are not given indicates to me that Microsoft wishes price to be disregarded as a factor in purchasing decisions. To do so would be an irresponsible act for any purchaser. Consider that NT license fees increase dramatically with number of clients, while Linux's price is constant and lower than any NT option.
These studies do not address options such as clustering. Clustering is a common solution to the problem of constant high client load. It may well be a better solution (in P/P and in peak performance terms) than simply boosting processing power with multiple processors. It also has reliability advantages.
These studies are not generalizable to other hardware configurations. While MS will claim that they prove that "NT is faster than Linux" inherently, they do not. The HW configuration was selected for the first Mindcraft study, which has been proven to have been engineered to favor Microsoft. Hence the hardware configuration itself is suspect. An across-the-board comparison on various configurations, with P/P as well as peak performance measured, would be a more reasonable comparison of the virtues of the OSes themselves, and would also highlight particular combinations of HW and SW that are worthy of consideration for purchase.
These studies do not address security. The release version of MS IIS has outstanding security holes, including the recent one disclosed by eEye [eeye.com]. This was a root compromise which took eight days for Microsoft to admit, and two more to fix. Microsoft classically avoids the subject of real-world security, preferring the proven-worthless tactic of security by obscurity. Security, of course, is a major consideration to be made in purchasing.
These studies do not address stability. Stability, like P/P, is an important metric for purchase decisions. It helps one determine how expensive a system will be to maintain -- one that requires regular resetting or reconfiguration in order to keep operating will cost in manpower; one which crashes a lot will cost in downtime. Downtime costs money in an enterprise situation, and hence should inform purchase decisions strongly.
These studies do not address changing real-world needs. A real server system is rarely left serving static Web pages forever. When needs change, performance will likely change as well. Building a system to meet a single, narrow-minded need is likely to lead to a dead end in terms of scalability.
These studies demonstrate nothing about the future. Based on past trends, one can expect the situation for Linux-based OSes to get better and better. The next version of Windows NT will likely offer decreased performance on the same hardware (due to increased resource consumption by the OS itself) whereas future versions of Linux will likely improve performance. Buying heavily into Windows NT leads one to platform lock-in which may damage one's ability to escape the expensive effects of bloat.
In short, I do not believe that MS has demonstrated that there are advantages to purchasing an NT system over a Linux-based system for real-world file and Web service. Wise system administrators, IS/IT managers, and CIOs should stick with the proven security responsiveness, stability, price/performance, and scalability of Unix-based systems, possibly including Linux-based systems, rather than betting the farm on the Johnny-come-lately Windows NT.
Re:microsoft makes great software. I agree (Score:2)
Ok. If you insist.
Replace Microsoft with Linus Torvalds.
Hello? Anyone home inside there pal? He doesn't make the GNU tools, nor Apache, or any of that. He makes the kernel. Go read before you open your mouth and say something stupid.
Linux doesn't run SMPs well does it?
NT isn't designed to be a super computer OS. It's a PC operating system, a general purpose OS. DUH.
It's a PC OS, huh? CP/M was too, correct? General purpose? Yeah, right. I could list things that will not run on NT (Win progs) but I won't make you look any dumber. You do good enough. DUH.
slashdot.org doesn't run IIS? uh..it's got nothing to do with microsoft..so what?
If you don't see the implication there you need help...
EBAY, Microsoft, Dell run IIS, and they have much bigger websites than slashdot.
Yeah, I never looked at eBay, and I hate Dell. As for the M$ site, *every* HTTP request I send their servers returns 'Remote connection reset by peer' in Netscape. Nice server.
Microsoft don't grow engineers on trees; their engineers come from various backgrounds (including Unix). They have enough money to hire the best in the world, and they do.
Yeah, you'd think at a certain point there's such a thing as ENOUGH money... And when they hire a programmer, he may be creative, smart, innovative, all that crap. But he is no longer 'pure'. He prolly expects to get paid when he goes out with his wife for his 'service' (dinner, not sex).
You must have your face stuck up somewhere dark not to realise you can't compare vi or emacs to Office 2000 and complain about how large Office is, etc. Office does MUCH more, and Microsoft's products simplify working, which is more than I can say for Linux/Unix.
All M$ products are overly bloated for one thing, and Office is no exception. Sure it does lots of kewl little things, but hell, I can make a picture that does lots of kewl things with two pencils and some resin (from a tree).
They simplify working by making everyone work the way THEY want them to. Nice company.
Sure, there are you guys out there who don't want things to be simple; you'd rather exercise your brains doing "hard" things like mounting NFS/SMB drives by typing rather than doing it in a few clicks.
You go ahead and play with your mouse. We know you depend on that little thing. We, however, know how to do things without a mouse, and will continue to. Guess who's gonna be using whose programs here?
I prefer to have the OS do as much as it can, while I get on with the real work. If by any chance, I need to do things manually, I go and do it.
Oh, Win does as much as it can. Mostly collecting files it doesn't need, eating your prefs/settings, and if you are really lucky it might eat a partition or two. Nice OS.
And what's your problem? Are you on medication?
Why, you got something good?
MS Write, MS Bob? So what? How about MS Windows, MS Office (Word, Excel, Access, PowerPoint, etc.), MS Visual Studio, MS J++ (the best-selling Java product), MS Exchange, MS SQL Server, MS Internet Explorer, MS IIS, MS COM (the most successful component model in the entire world), MS MTS, MS DTC... all pretty much de facto standards now, and that's only to mention a few.
The ones in that list that actually are standards got that way only through brute force and M$'s anticompetitive nature. How can you compete with 100 bucks in your pocket when they've got a billion they'd just as soon stick up your ass as anything?
Unlike Linux users, MS doesn't claim never to make mistakes; in fact, Gates even showed the video of Win98's BSOD last year, and again this year at COMDEX.
So you compare Linux *USERS* to Microsoft's *PROGRAMMERS* eh? You think every Joe who uses Linux is a programmer? I pity you and your world.
That video is something I'd like to see again, though; it's always good for a laugh. Although, what's the point of reshowing a BSOD, truthfully? Who hasn't already seen more of them than they can possibly count? Kinda redundant, if you ask me.
Happy clicking. I'll be off to play around with my Linux box, to change its basic settings. I'd like to see NT (or 95/98/2K) do that. *chuckle*
Re:what makes NT faster? (Score:2)
Consider who you're talking about; I'd guess that while IIS itself probably isn't in the kernel, it accesses top-secret M$ stuff which is.
also, why can't we do a similar multi-threaded implementation on Linux?
I don't know. It probably is possible; it just hasn't been done yet. I'd consider these tests to be a sign that it needs doing.
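Out of curiosity, here's a minimal sketch of what "a similar multi-threaded implementation" could look like on Linux: a thread-per-connection accept loop in C with POSIX threads. Everything in it is invented for illustration (the port, the canned response, the handle_client() helper); it shows the shape of the approach, not anybody's actual server code.

    #include <netinet/in.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical per-connection worker: a real server would parse the
       HTTP request here instead of sending a canned reply. */
    static void *handle_client(void *arg) {
        int fd = (int)(long)arg;
        char buf[4096];
        const char *resp = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
        read(fd, buf, sizeof(buf));
        write(fd, resp, strlen(resp));
        close(fd);
        return NULL;
    }

    int main(void) {
        struct sockaddr_in addr;
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);          /* arbitrary port for the sketch */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 128);
        for (;;) {
            int fd = accept(srv, NULL, NULL);
            if (fd < 0)
                continue;
            pthread_t t;                      /* one thread per connection */
            pthread_create(&t, NULL, handle_client, (void *)(long)fd);
            pthread_detach(t);                /* thread frees itself when done */
        }
    }

Compile with something like "gcc server.c -o server -lpthread". A serious version would pre-spawn a pool of threads rather than paying for pthread_create() on every hit, but nothing here needs anything NT has that Linux lacks.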
Redundant (Score:2)
Re:Linux is not the fastest. No excuses. (Score:2)
As I've mentioned before, I run Windows 2000 Server beta 3 over at WonkoSlice [wonko.com], and it's really, really nice. Granted, as far as stability goes, it is less stable than Linux, although I haven't had a Windows-related crash on my Win2000 box ever since I first booted it up about 4 months ago. But as far as performance, ease-of-use, and speedy setup go, it leaves Linux in the dust. When I first installed Win2000, I did so with zero prior knowledge of how to run a web server or how to configure Win2000. I had my server up and running flawlessly within two hours. When I installed RedHat Linux with no prior Linux experience and only minimal web server experience, it took me days just to get the stupid system running correctly and get all my hardware installed, and by the time I started trying to set the web server up, I had totally screwed the system up and had to fdisk the partition and restart from scratch. Windows was much easier.
--
Wonko the Sane
Better but not quite (Score:3)
Ok, just in case anyone still thinks these tests are worth a shit: I'd like to clarify that this is pure and unadulterated shit. There, now that the childish remarks are through, I'll do some intelligent speaking.
First off, I have no doubt this was shit from the get-go. I'm an MCSE (my work paid for it) and I know the insane amount of system resources it takes to run an NT Server alone. Yes, I know how to properly configure an NT Server, right down to streamlining the registry. Plus, we have all been through the multiple restarts, the memory that applications won't let go of after use, the endless swapping, and the processing overhead. Don't get me started on IIS 4.0.
There is a new bug found almost daily that spells doom for these servers. Plus, IIS 4.0 doesn't have anywhere near the features and configuration possibilities that Apache does. On the other hand, Apache needs someone who knows it inside and out to configure it, and that is due to Apache's extreme flexibility.
Say that average Joe Smith sets up his Apache server and uses per-directory .htaccess files, which is not uncommon on big sites where management is broken up. Well, for every request on a document, Apache will check for an .htaccess file in each directory along the path. So if a file five directories deep is accessed 100 times, Apache will check 500 times for the rights to that file, because it checks from the root to the next directory to the next and merges the config files it finds along the way, making Apache check 5 times per document requested. But on the up side, if you need infinitely specific rights to files, this is a godsend. The overhead can be reduced by placing commonly requested documents near the root of the server (don't fork the directories too much) and by using as few .htaccess files as possible, which is why you should try to place as much configuration as possible in the global configuration files, and preferably in the server configuration file. I'll explain the last part of that last sentence next.
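But first, to make the .htaccess point concrete, here is a minimal httpd.conf sketch (Apache 1.3 syntax; the paths are invented). AllowOverride None is what turns off the per-directory .htaccess walk, so only the delegated subtree pays the per-request price:

    # Busy, stable content: Apache never looks for .htaccess here.
    <Directory /home/httpd/html>
        AllowOverride None
    </Directory>

    # Only the subtree handed off to other maintainers pays the
    # .htaccess lookup on every request.
    <Directory /home/httpd/html/users>
        AllowOverride AuthConfig Limit
    </Directory>

With AllowOverride None at the document root, Apache can skip checking for an .htaccess file at every level of the path, which is exactly the five-checks-per-request overhead described above.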
When Apache is looking into what the rights are for a requested file, it checks certain files in a certain order, and within those files it checks the request against the directives in the order they are placed in the config file. Meaning that if the directive covering the most requested file in the directory sits near the bottom of the config file, it will take longer. Maybe not whole seconds longer, but enough on heavy sites to make an impact.
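As a sketch of that ordering advice (paths invented again, and how much this buys you on any particular Apache version is the poster's claim to verify, not something I'm asserting):

    # The section for the hottest content goes near the top of the file...
    <Directory /home/httpd/html/hot>
        AllowOverride None
    </Directory>

    # ...and the rarely-requested corner with the expensive rules
    # goes at the bottom.
    <Directory /home/httpd/html/archive/old>
        AllowOverride All
    </Directory>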
These are just two of the many configuration tips for Apache a person can pick up when they RTFM (Read The Fucking Manual) and even read the source.
And beyond all that, IIS doesn't have as flexible a rights system, nor does it handle dynamic pages as well as Apache. In fact, IIS 4.0 will work fine if the site isn't that complicated, the pages are static, and the machine is so big it will never see a processor load near 100%. Apache has that complete-control rights system, it handles dynamic pages beautifully, and it doesn't freak when heavy loads hit. It just keeps chugging away.
As for file serving? I can't say; I'm nowhere near an expert at Samba. But I do know that my Linux box boots faster, handles heavier loads better, and its memory management is beautiful. And to make another remark: RedHat should not be the version of Linux they pit against NT. Sorry, this isn't a direct "RedHat sucks" type deal. It's a "use Slackware or something" deal, so you can strip the system down to do only what it is supposed to do, and recompile everything to be optimized for the system's hardware. Maybe not even Slackware, just something streamlined. RedHat is actually a great system for the home user; that's the way they seem to be heading nowadays, and I applaud them for it. My, it's now easy enough for my mother to use.
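For what it's worth, "recompile everything to be optimized" mostly just means passing CPU-specific compiler flags when rebuilding. A hypothetical example for Apache 1.3 (the install prefix and the -m486 flag are only placeholders; the right architecture flag depends on your gcc/egcs version and the actual hardware):

    # Rebuild Apache with optimization flags for the target CPU.
    CFLAGS="-O2 -m486" ./configure --prefix=/usr/local/apache
    make
    make install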
Personally, once again, you can look at the source of these tests and wonder why the outcome is always the same. These companies are heavily dependent on Microsoft products, and some have been funded by Microsoft. Mindcraft even did their tests in Microsoft's labs. Of course they aren't going to say anything bad about Microsoft.
The real test should be: here is X amount of dollars, put together the best system you can. Linux would kick the fucking shit out of MS. For the price of the software alone you could put together a Beowulf cluster that would crush any NT Enterprise 4-way SMP box. I know; I tried this before when installing many NT systems to upgrade a hospital. Personally, I won't go there even if I'm shot. But that Linux cluster is up to this very day without a reboot, performing critical storage and access control for CAT scan images. On the other hand, the NT clusters (if you can call it true clustering) are constantly having parts of themselves rebooted.
Whatever, don't believe this stuff. It's just FUD and the media looking for conflict.
Eros -- I know what every file on my box is there for..... Do you?
Re:ality check (Score:2)
It's more likely that Linux is guilty of what you say (certainly, we know that the IP stack of Linux is guilty of this). And the performance of Linux on SMP machines shows how guilty it is of leaving other tasks pending, because of its lack of threading (or lack of use of it).
Re:NT and Linux differences. (Score:2)
As far as I know, NT does use a limited microkernel. But unfortunately this microkernel is not the only thing running in supervisor mode: the device drivers, the GDI, and Win32 have also been located within the Windows NT Executive as of NT4. Now, I really have no idea how this looks in W2K, but my guess is that it is basically the same.
The interesting part is the excuses Microsoft presented to their users when they moved Win32 into the Executive. I've seen 2 different excuses for this.
The second issue is worse. IMNSHO, what Solomon describes here is a direct design flaw in NT. The fact that an error in the GUI can take services down is not acceptable in a system your company depends on. On top of that, when the user-level GUI crashed, the machine would die, but most likely without further trouble; when the kernel-mode edition crashes, it might do so by writing outside of its own memory pool, and might therefore destroy data in the filesystem, etc.
The paper referred to above was on Microsoft's web page sometime this spring (it is from April 1996), but I was unable to find it again today, since Microsoft has already invalidated the old URL.
Re:I have an idea (Score:2)
Microsoft is trying very hard to say "NT is better! See! See!" But I can take that claim apart by simply asking any NT administrator how many times they had to reboot their "4,196 hits/minute" NT box, compared with the measly 1,800 Linux put out...
Linux is more reliable, and it has greater flexibility (courtesy of the Unix philosophy of piping and making everything modular). No benchmark can, or will, ever convince me that NT is more stable than Linux, or more flexible. Maybe NT is faster at some things; whatever parameters were used for the benchmark obviously bear that out.
But I'll ask you all one question: Where do you think linux will be in one year from now? Think it would beat w2k?
That's the ultimate question... Microsoft may have a performance advantage (gasp!) right now, but we all know how quickly open source moves forward, and how quickly bugs are fixed. Even Microsoft can't beat the distributed efforts of tens of thousands of developers working in concert. No corporation on the planet can.
--
Oh yeah? My Experience... (Score:2)
By contrast, we have a Linux box running our very active intranet web site. We've had it up for 6 months and it has run flawlessly. Interestingly, I set up the Linux web server in the first place because I was tired of IIS failing for no apparent reason (the site had been hosted by IIS).
Oh, and the Linux box is an old P166 with 16MB RAM; the NT server is a brand-new Dell PowerEdge 2300 with dual PII-350s, 128MB RAM, hardware RAID 5, and 3 hot-swappable 9GB Cheetahs. All that reliability hardware, wasted on an OS that can't stay up for two weeks!
It's certainly not for technical reasons that people choose NT over *nix.
Chris
simpkins@tilc.com
(The links to simpkins.org don't work - I'm moving to a new server.)