Linux Software

ZD Critiques Mindcraft Benchmarks

SFraser sent us a link to a pretty decent critique of Mindcraft's benchmark, which claimed NT was faster. Monday's Slashdot story about the report was one of the most active Slashdot stories ever.
  • by Anonymous Coward
    Does Linux have a performance analysis toolkit,
    along the lines of SymbEL/Virtual Adrian? For
    those who haven't seen these, they're free (and
    open-source, mostly) kits for analysing performance problems on Solaris, and suggesting
    fixes (mainly kernel parameter tweaks). See:
    http://www.sunworld.com/swol-02-1999/swol-02-perf.html

    I do mean 'analysis' not 'collection' like
    sar etc...

    I only ask 'cos I've not tried tuning Linux
    (though I have done it on Solaris, obviously)
    and when searching freshmeat for 'performance',
    'tune' or 'tuning', only 'hdparm' seems relevant.

    SymbEL is pretty nice, though obviously tied
    to the Solaris kernel. A framework like this
    for plugging in performance advice for servers,
    in general, would be a mighty fine contribution
    to the Linux cause.

    - Baz
  • I started putting one together last night in my fury at what had happened (it wasn't aimed at anybody, more disappointment and resentment that it might be true).

    A pre-alpha can be found at http://www.upsu.plym.ac.uk/~betty/LinuxEnterprise
    (please don't all hit it unless you have something to contribute, until I have a mirror at
    http://stacy.flwireless.net/atrevena/LinuxEnterprise which will be up in a few days)

    It is a bit sparse, but it is started and I will be happy to fold it into any other relevant project if it falls behind - I will have a US mirror for it in a couple of days, but in the meantime please be gentle with it.

    Submissions to betty@area51.upsu.plym.ac.uk

    I was also thinking of starting an Issues Database, using MySQL and mod_perl. It could of course be distributed according to distro, app, hardware, etc., but with a round-robin distributed front end it would be really handy and definitely give the majors something to worry about without losing the hacker way.

    Aaron (TheJackal - wtf is my cookie? wtf is my password?!)
  • by Anonymous Coward
    Well, just think about this:

    When you are tuning a system and changing
    a configuration flag, you usually try to measure the impact of every change you make.

    For instance, if you were going to set the "WideLinks"
    flag to a different value, you would benchmark the
    system before changing the flag and do another
    benchmark after changing it.

    Only this way would you know the impact of setting each specific flag.

    I'm just a simple computer user. However, these people are professionals running a test lab. They should have known exactly what they were doing and
    worked strictly by scientific procedures.

    If they were doing a serious benchmark, they
    should have measured the impact of every tuning flag.

    After seeing that changing a specific flag lowered performance, they would set the flag back to the default value.

    However, they changed a lot of configuration
    flags that lowered the performance of the Linux
    system. Didn't they notice the slowdown effect?
    I don't think so.

    If they had used only "default settings", then I would
    believe that they didn't tune the Linux system well because they weren't competent enough.

    However, they seem to have changed a lot of
    default settings and, each time, the results
    were worse than the defaults.

    How can this be possible ?

    Guess what my conclusion was!
  • by Anonymous Coward
    Any real world company that is going to run a Linux-based server is going to have a sysadmin with Linux experience. Mindcraft basically admitted that no one conducting the tests knew dick about Linux sysadminning just by their moaning about lack of documentation, most of which even someone with only a few months of Linux experience would be able to handle without having to RTFM... for that matter, there can't be too many Linux sysadmins who don't have a continually growing library of O'Reilly-type manuals. If they had really wanted to be fair, they'd have hired an outside consultant to set it up (something that real-world sysadmins do on major projects when they get in over their heads).
  • Does anyone still believe that this benchmark
    is the result of a poorly tuned Linux system?

    Personally, I believe these misconfigurations were done deliberately, by people who knew exactly
    what they were doing.

    They even chose the right hardware (4 Ethernet cards), because they were aware that Linux would try to route all the packets through a single network card.

    They don't even show the routing configuration and the subnets used in the test.

    Was Windows NT using TCP/IP or NetBEUI ?

    This way, this benchmark only proves that NT with
    4 network cards is faster than Linux configured
    to use a single network card.

    This might be the reason why the Linux system has peak performance at nearly 100 Mbits/s: the limit for a single network card.

    From the list of processes running before the tests were done, we can see that there was no "httpd" and no "smbd"/"nmbd".
    This leads to the conclusion that both services were launched by inetd.
    This is a very, very slow option: each time the server receives a new connection, the system launches a new process.
    The default setting of Red Hat Linux for Apache
    and Samba is to run them as standalone servers: this is much, much faster.
    Using inetd will really, really HURT Linux performance.
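
    (Purely for illustration, not taken from Mindcraft's actual configuration: the difference being described is roughly the one between launching smbd/nmbd from /etc/inetd.conf, which forks a fresh process for every incoming connection, and the stock Red Hat approach of starting the daemons once at boot:)

        # /etc/inetd.conf style -- a new smbd is forked per connection (slow under load)
        netbios-ssn  stream  tcp  nowait  root  /usr/sbin/smbd  smbd
        netbios-ns   dgram   udp  wait    root  /usr/sbin/nmbd  nmbd

        # standalone style (the Red Hat default) -- long-running daemons started at boot
        /usr/sbin/smbd -D
        /usr/sbin/nmbd -D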

    The settings in the Apache flags "KeepAlive" and "MinSpareServers" are consistent with
    having Apache and Samba launched by inetd.
    I just cannot call this tuning. These settings are much worse than the default settings for both services.
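
    (Again only a sketch: these are real Apache 1.3 directive names, but the numbers are illustrative assumptions for a heavily loaded server, not the values Mindcraft or Red Hat actually used:)

        # httpd.conf -- pre-fork enough children so requests don't wait on fork()
        KeepAlive        On
        StartServers     64
        MinSpareServers  32
        MaxSpareServers  64
        MaxClients       256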

    The WideLinks flag was also a very nasty setting: Windows users cannot create links, so this flag wasn't necessary. If you want safety you can chroot the service. Even if you have several shares, you can mount all the shares under the same subdirectory tree and "chroot" to that subdirectory.
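
    (Illustrative smb.conf fragment; the share name and path are hypothetical. "wide links = yes" was the Samba default at the time; turning it off forces smbd to check every path for symlinks that leave the share, which costs extra system calls per access:)

        [share]
            ; "wide links = yes" is the default; "no" adds a symlink safety check on every path
            path = /export/share
            wide links = yes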

    Probably without these misconfigurations, NT wouldn't
    even have been fast enough (4 network cards versus 1 wasn't enough).

    Finally, I don't think that these poor results have anything to do with kernel SMP performance.

    This is just what I think.

    An. Coward.
  • by Anonymous Coward on Thursday April 15, 1999 @11:17AM (#1932645)
    This isn't the first time Mindcraft has been questioned about how they "test" file server performance. Check out what Novell said about a previous study: http://www.novell.com/advantage/nw5/nw5-mindcraftcheck.html
  • Yes, but it was much more rudimentary SMP support. The entire SMP system got a rewrite sometime in the 2.1 series, so it's essentially new. They are still tweaking and adjusting things for more than two processor SMP systems.
  • If someone were to give me a pair of those Xeon servers, I'd come up with some legitimate Linux vs NT benchmarks :-)
  • I've been exchanging email with another person there about the study... there are a number of people turning their heads at ZD. The Labs people seem to be Linux-hip; it is the mainstream publishing side which, as always, is slow to move.

    ttyl
    Farrell
  • One thing that I had been discussing with a representative of D.H. Brown was the need for good testing of high-end Linux configurations vs. equivalent NT configurations. One thing that frustrated them, and led to their downgrading of Linux at the high end, was the lack of good objective data to work with (D.H. Brown does not do testing, they do executive analysis, i.e., they read the tests already published so that executives don't have to).

    I can't say more along those lines, but:
    My first instinct, upon seeing Mindcraft's report, was "Wow, that Dell sucks! I wonder how much they'd charge to benchmark our Linux Hardware Solutions quad-Xeon against that box? We'd SLAUGHTER Dell!"
    But then the realization struck: If these people showed such a lack of professionalism as they did in this report, would we WANT to pay them money? I.e., why in the world would we pay people who have already demonstrated that they are a laughingstock by releasing a report such as this which can be viewed, at best, as unprofessional, and at worst, as an amateurish attempt at dissing Linux?
    Needless to say, any testing done in the future on our behalf will be done by someone OTHER than Mindcraft.

    -- Eric
  • Agree, DH Brown was basically accurate. The only thing that I disagree with was their wording on the state of Linux SMP. From talking with one of their people, it appears that what they meant was that Linux SMP was still immature and unproven and thus they would not yet bet an enterprise on it, but it came out unduly harsh in the paper.

    My only real complaint is that they said NT is enterprise-ready, when it so obviously isn't!

    On the other hand, the Mindcraft "study" was either the most inept piece of work that I ever saw, or a deliberate hack job. (And if it's a deliberate hack job, it's the most inept hack job I've ever seen!) The best thing that Mindcraft could do to salvage their reputation would be to withdraw the report "for further study" until its deficiencies are resolved, but I frankly suspect it'll be a cold day in hell first -- they've been paid for it already.

    Of course, they could surprise me!

    -- Eric
  • Hmm, going to www.apache.org, clicking Documentation and then "General Performance Hints" is not what I call hard to find.

    Ever tried the MS Knowledge Base?
  • http://slashdot.org/articles/99/04/14/0042212.shtml [slashdot.org] has the full story (with 780 comments.)

    -Ben

  • Here are some other responses to said report:

    http://lwn.net/1999/features/MindCraft1.0.phtml [lwn.net] -- Linux Weekly News [lwn.net]
    http://www.linux-hw.com/~eric/mindcraft.html [linux-hw.com] -- Linux Hardware Solutions [linux-hw.com]

    -Ben
  • Am I confused, or are the ZDNet boys still refusing to
    acknowledge that SMP support is *not* a new feature
    in kernel 2.2.x? If I recall, SMP support was
    part of kernel 2.0.36 on the Red Hat CD -- you
    just need to install the kernel sources from the
    CD and recompile.
  • Mindcraft is not only an NT-biased shop it also
    has unlimited assistance and total attention
    from Microsoft -- certainly not the kind of
    attention a typical NT admin would get from
    Microsoft.

    ...and still they didn't bother hiring a Linux
    guru to help them -- so in that respect Linux
    did pretty well considering it was not
    configured well *and* was set up by newbies.
  • The link you put there is just another copy
    of the ZDNet article on the excite.com web site.
  • 1. A copy of the Kernel recompile How-To

    2. Some typical Apache httpd.conf settings for
    enterprise web sites.

    3. A detailed list of which Apache modules can
    be disabled on a plain vanilla web site

    4. Ethernet driver recompile and tuning guides.

    5. Which network services you can disable on
    a plain vanilla Linux web server

    6. Web server tuning tips regarding file I/O
    and how NFS, RAID, SCSI, and multiple disks
    affect file I/O performance.

    7. Real world performance testimonials.

    8. Slashdot-style Q&A forums

    9. Ethernet and network topology advice

    10. Explanation of DNS tricks such as round-robin
    and gateway switching.
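
    (For item 10, a hypothetical BIND zone fragment showing the round-robin trick: several A records for one name, handed out in rotating order so client load spreads across servers. Names and addresses here are made up:)

        ; round-robin DNS -- clients receive these addresses in rotating order
        www.example.com.   IN  A   192.168.1.10
        www.example.com.   IN  A   192.168.1.11
        www.example.com.   IN  A   192.168.1.12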
  • At least it's mainstream press on the tests. This is what was needed to put the Mindcraft thing in perspective. Red Hat or VA Research or Penguin Computing or Linux Hardware Solutions or some group at that level should tune up a Linux box and then let the NT tuned-up quad go after it w/ the same testing, just to see what all happens. Then the results should be made public. This isn't just some ad hoc type of thing. This should be a concerted effort on the part of some or all of the commercial Linux resellers. After all, guys, this report is aimed right at you.
  • Actually, ZDNet was being more than fair on this one. IIRC, this was the article that didn't publish the NT results *because* they were so bad.

    Also, I don't see how this could possibly be unfair to NT. When NT4SP1 was released, hardware like this didn't exist at all. I (later, years after it came out) installed it on a P133, with 32MB of RAM, and a 2GB HD, roughly half of the system you're talking about. Although it was a little sluggish, (and far slower than DOS or Linux on the same machine) it worked, and it wasn't unbearably slow. It was surprisingly bloated in its memory requirements compared to either DOS or Linux, but it still didn't have to swap.

    When they first were releasing NT, the standard testing practice was to get the fastest 486 you could find, load it up with RAM, and... see how many copies of solitaire the system could handle. I think Microsoft was relying completely on Moore's law here...

    Now, one would think that NT4SP2, 3, and 4, as the name "Service Pack" suggests, would be minor, free upgrades and bugfixes? Otherwise, they would have released an NT version 4.1 or something? Nope. I'm currently running NT4SP3+ActiveDesktop on that same machine, and it's *incredibly* slow. The RAM requirements have almost doubled. However, the system requirements for NT 4.0 haven't changed, have they?

    If you're going to blame anyone for a Server OS not running well on that hardware, blame Microsoft. They aren't obeying their own released specs, and they aren't serving their users. If NT4SP1 was slower than it needed to be, then why aren't later versions faster? Who would pay for an upgrade that doesn't fix the real problems? Maybe that's what "Service Pack" means.

    Maybe that ZD article should have been entitled "Linux kernel upgrades actually add features *and* increase performance, unlike NT upgrades." Personally, I prefer the title "Windows turns fast computers into toasters that crash and burn toast", but maybe that's just me.
  • And where, if you were an NT newbie, would you get the data on using regedit to change the default networking settings?
  • Well, I just did a simple search on www.samba.org for 'speed' and found several links to exactly what they should have done for these tests. It's obvious they never tried.

    I've found the same sort of resources without much effort all over the net, including at apache's web page itself.

    And as far as the Microsoft Knowledge Base goes, I DO use it myself, but it doesn't simply return what you're looking for... it returns several hundred possibilities, which you must then 'sift' through. Nearly the same queries can be run against www.hotbot.com to get the same results.
  • Quote: "The items that Mindcraft used for tuning NT are fairly well established, either from NT documents or documents relating to the NICs, etc." Quote from ZD's article: "Mindcraft tested NT with NT tuning, benchmarking and technical support from Microsoft (Nasdaq:MSFT), and Internet Information Server (IIS) 4.0 tuning information from the Standard Performance Evaluation Corp." So, the thing is: they had official documents that they knew about beforehand, and they tuned NT with these people working alongside them. It's more like this: "Hi, I want to test my car versus the competition's. I'm going to show you all the tips and tricks about my car. Now, go pose as some customer off the street and see if you can get them to hot-rod your car to be similar." Frequently, when you're going to run benchmarks, you get NO INFORMATION from anyone, or you get EQUAL COOPERATION. None of this: "Well, we'll help you all we can, because we're paying you. Go try and get RedHat to tune your server on their dime, without enough information for them to do it easily."
  • It would be interesting to find out how little hardware would be required for a properly tuned Linux box with Samba and Apache to equal the performance of the Quad Pentium NT server.

    If a single or dual PII with a fast SCSI disk and Linux could equal or beat the performance of the quad NT machine, then both Micro$oft and Mindcraft would be embarrassed.
  • I'm in! A well-planned online database of tweaks and tips would be an excellent resource. I'm surprised we haven't done it yet.
  • As the ZDNet article pointed out, tuning performance tips are hard to find for Linux and Apache. This needs to be better documented.

    The items that mindcraft used for tuning NT are fairly well established, either from NT documents or documents relating to the NICs, etc.

    I don't know if I like RedHat's response, that if the request had been put forth to their PR department they would have responded. What if I'm an end user just looking for performance tuning information, rather than some big testing company?

    Does RedHat only offer performance tuning advice to marketing reps?

    That'd be kind of like Ford saying "Well, if we'd known they were from Car and Driver, we would have fixed that transmission on their demo unit, instead of stonewalling them."
  • I'm sorry, but ZD's own tests were also ummm, "creative".

    Nobody in their right mind would install as a server a Pentium II 266, 64 Meg RAM with a 4 gig IDE drive. This is the hardware that ZD used in their test.

    This is really unfair to NT, as 64 Megs is the bare minimum at which you can really run IIS. Try the test again with 128 megs, or 256 megs and see what happens. Then try it with SCSI instead of IDE, etc. Why not use a Proliant 800 instead of a desktop machine?

    So, given what ZD said in this article about Linux scaling up, their other article should have been titled "NT Server doesn't scale down as well as Linux".

  • This is SVN again. He is the only person at ZD that seems to do honest reporting. Some choice quotes (not from the article but from the author): "I call things as I see them." "My reputation is very important to me." "If the benchmarks were compelling we would have run them."

  • Of course, the other question is-

    Even if, in a fair fight on a multi-processor machine, NT kicks Linux's rear to Cleveland and back, the fact remains that Linux has gone from an unknown to a contender in... a year? A year and a half tops?

    The fact also remains that NT is backed by Microsoft and Linux is backed by the rest of us. We're reading articles that are *comparing* NT to Linux and stating that it is a fairly close race in many common instances and even victorious for Linux in the most common of those races. Viva Linux!
  • well, I'm seriously thinking about doing it now. I tend to be the Linux proponent on my project team (I run it at home, and just ordered a machine from Penguin :-).

    I have some bandwidth (not a ton, though... like a few hundred MB a day... not slashdot sized :-) to spare. If people would like a resource like this, just email me at sujal@worldnet.att.net and/or sujal@sujal.net with suggestions, comments, etc.

    If someone else is willing to do this, or has already started, please email me so I can help them rather than doubling up effort. Thanks

    Sujal

  • So, as the article mentioned, there's not a lot of info on tuning Linux, but it's easy to find for NT. Why, you ask? I did.

    The only answer I came up with is that Linux actually requires little tuning. Probably 95% of the Linux boxes out there need virtually no tuning. It runs so well out of the box that tuning it is hardly necessary.

    Microsoft, on the other hand, has basically acknowledged that out of the box, NT runs like crap. They've provided lots of information on how to make it run better. I could only see them doing this if people are unhappy with its performance.

    I know I'm more than happy with the performance of my untuned Linux boxes and I'm not sure that spending lots of time tuning them would really make a big difference.

    ** Martin
  • All you have to know is how to do boolean searches against the Dejanews database.

    I often search for the exact error message. 9 out of 10 times it's there, with several Re: replies...
  • OK, first of all there's one point that needs to be addressed. I didn't read through the report and add up the time required to do the benchmark, but to set it up and repeat it enough times to be valid, I suspect it would take the three days described in the report. So, at best, the real responses are several days away.

    That said, I was a tad disappointed with the ZD article. It had no hard figures to prove the idea that Linux is faster. ZD recently did a benchmark test of some sort comparing NT and Linux, but I can't remember if it was web and file serving.

    What I'd really love to see is for Linus to rebut the report at Comdex. More specifically, I'd like him to explain in understandable detail (perhaps referring to a more informative URL to move things along) why Mindcraft's Linux "tuning" was inappropriate. I'd like him to offer up a few bits of documentation explaining the Linux tuning process to deflect the rightly earned lack-of-docs critique. And before presenting the new benchmark results, explain any new patches that improve the system and that were brought to light thanks to the benchmark - it wouldn't hurt to mention that those patches were coded, tested, and put in place within a week.

    Finally, of course, benchmark results demonstrating that Linux beats out NT in this newly conducted test. That would be quite nice, and save Linus the effort of coming up with a speech *and* a topic.
  • DH Brown didn't do a performance test, like Mindcraft; they did a features evaluation. No amount of tweaking will change the available features. And DH Brown was mostly correct: Linux doesn't support the larger file sizes or 36-bit memory addressing like NT does. DH Brown pointed out some deficiencies in Linux that are easily overcome and made a point of that. I found the summary to be even-handed and generally truthful.

    Of course, I'm not a hardcore kernel hacker, so I don't know whether what DH Brown says is true or not, but it all sounded factual according to articles I have read previously. I think Linux Journal ran an article on a talk with Intel, Linus, VA Research and RedHat, where Intel points out the lack of 36-bit memory addressing and offers to help.
  • It's nice to see ZD doing some anti-FUD; I wonder if there will be more. This whole thing smacks of the anti-trust trial video.
    I heard an interesting thing about technology writers having a more jaundiced eye towards Microsoft after some of the trial shenanigans. It may be that this is a sign that technology, as an industry, is growing up a bit.
  • Some of the items that have been bandied about are things that are obscure and difficult to figure out. For example, I wouldn't expect even a fairly skilled Linux user to know that the 2.2.2 kernel had problems with TCP and Windows clients.

    However, anybody even vaguely familiar with the Linux versioning system should know always to get the latest version of the stable kernel. No one getting paid to administer a Linux machine should look at a list of kernels 2.2.1 - 2.2.5 and decide to download 2.2.2!

    Moreover, some of the mistakes they made (compiling Samba with -O instead of -O2, setting the number of servers to 10 initially and 1 spare) show a lack of even basic Unix and Apache administration skills. While it may not be fair to expect them to nail every possible optimization, there's no excuse for neglecting basic stuff like getting the latest stable kernel and telling Apache to start enough servers. Even if this sort of information does require a fair amount of Linux knowledge, who in their right mind would pay over $25,000 for a machine and not even hire an admin with a basic knowledge of its operation?
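
    (A hedged sketch of the build difference being described -- exact configure options vary by Samba version, but the usual way to get full optimization from an autoconf build is to pass CFLAGS yourself:)

        # build Samba with -O2 rather than plain -O
        CFLAGS="-O2" ./configure
        make && make install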

    Even if they didn't hire a competent Linux admin, I'd pit Linux's performance under a newbie admin against NT's performance under a newbie admin. The ZDNet article claims that performance tuning information for Apache is much more difficult to find than that for IIS. But if you go to the Apache web site and click on the big link named 'Server Documentation', you'll find links to both general and platform-specific performance tips. The section on general performance tips includes *gasp* a couple of paragraphs on the importance of setting the StartServers and MinSpareServers directives appropriately. In contrast, the Microsoft site requires you to navigate through a maze of links to try to find information about IIS. I frankly gave up trying to count the number of links needed to get server performance tips for IIS after ten minutes of wading through MS's marketing classifications (am I an enterprise customer or an IT professional? Which support option do I need? Why don't many of the forms work in Netscape? etc., etc.). The reason the ZDNet author believes that the IIS information is easy to find is that he's used to using that information, just like the authors of the study found NT easy to tune b/c they were used to tuning it.

    However, if such a newbie-Linux-vs.-newbie-NT-on-an-enterprise-class-machine-for-loads-greater-than-those-handled-by-Yahoo study were done, it would be quickly dismissed as patently ridiculous. Mindcraft was smarter than that, so they actually pretended to tune the Linux box, but that doesn't change the fact that they were testing the performance of an enterprise-class system at ridiculously high loads under a clueless admin.

    Lastly, I challenge you to call up the Microsoft tech support line, tell them you're running a performance benchmark on NT and would like some tuning help. I seriously doubt you'd get anything even resembling a helpful response!

  • Some of this might be to defend the integrity
    of their benchmarking suite, but probably most of
    it is that it's a story they feel they ought
    to run.
  • Widelinks is by default set to True, so they must have turned it off deliberately. Either they did this in error, they did it knowing it would hurt Linux's perceived performance, or they did it suspecting as much (i.e., they didn't test it both ways).

    One of these three possible scenarios must be true, and all of them mean that they're not playing fair.

  • A bloody good point has arisen from all of this. As far as I can tell, there is NO online resource specifically aimed at tuning a Linux system to the hilt, to get that Ultimate Performance.

    I may be wrong about this, but I haven't found one.

    What this site should do, is be a repository for all kinds of Linux tuning - Kernel tuning, Apache Tuning, Oracle tuning - everything that can be tuned should be documented here!

  • Yeah, I've found it's generally easier/faster to run a query through AltaVista whenever I need to find something from the MS Knowledge Base... and the AltaVista search is usually closer to what I'm looking for than MS's results are.
  • Not that I really pay attention to ZD stuff anymore... but I really appreciate their remarks on this test.

    It is in the true spirit of journalism to shed light on the truth no matter how it affects your own interests.

  • Who would that be?
  • Glad to see a good ZDnet article, there may not be many, but there are some good ones. I think this author is good, I will have to keep watching him. Regardless, two upsetting things I found:

    Red Hat Linux is not available with the 2.2 kernel: first of all, you CAN download RH 5.9; sure, it is development, but it is a helluvalot more stable than NT w/o service packs (even with). But that isn't my point. There ARE 2.2 distros available now. I like RH, I use RH, but for goodness sake it isn't now, nor will it ever be, the only Linux distro, thankfully. Second sticking point:

    Linux documentation hard to find whereas MS's is easy to find: okay, Linux documentation varies a lot and sometimes is hard to find, but have any of you really ever tried to find something on MS's juggernaut of a web site? I was an NT guy about a year ago, but trying to find any REAL answers from their website only resulted in my getting lost in it and seeing way too much propaganda.

  • It's amazing how some of you people are such fair-weather fans. You whine about how ZD is just another arm for Microsoft's FUD propaganda. But look now! ZD is challenging the test results, so all of a sudden ZD isn't the bad guy anymore.

    Instead of blindly attacking anything that isn't PRO-Linux, try to open your eyes a little, stop your screaming and actually look at what's going on. It worries me that the "Linux community" is becoming more and more a bunch of guys who pat each other on the back and fanatically reject anything that isn't PRO-LINUX. Do you really think this is a good direction?

    Also, I would like to suggest that the moderators go re-read the moderation guidelines, especially the part about not downgrading a post simply because you don't agree with the viewpoint. Go look over the Mindcraft thread and note that most of the -1 posts are well written, but take the side of Mindcraft. I thought open source was about freedom of speech and against censorship?

    Don't be a bunch of hypocrites and censor anything that is not PRO-LINUX.

  • Microsoft invented the combo box? I'll have to mention that to the person who actually did...
  • Micros~1 does this all the time. Tell an untruth and spend ~$100,000 to have the press barf it up to the public. Who is going to put up the ~$100,000 to get the same distribution as the initial untruth? How many companies went out of business because Micros~1 published an untruth about some product they WERE coming out with, only to release beta-level code more than a year later that didn't even do what was initially published?
    This is it, guys and gals. The guns are out and they are aimed at Linux. The last OS these guns were pointed at was OS/2 and we know where that OS landed. Can you say niche? How does this get stopped? There is no law against this kind of untruth; they would just say there was a misunderstanding or a bug and it will be fixed at a later time. The DAMAGE is done. A later retraction means little.
    I'm sending email to James Love to see what they think about this kind of public deception. A group with some pull needs to address this, NOW.

    mailto:love@cptech.org

    Locutus
  • Microsoft took aim at both of these in the same manner. OS/2 took heavy fire the whole 3 years Chicago/Win95 was in development, and the final death blows came the year after Windows 95 shipped. No OEM or ISV would touch it. Netware was attacked next and has been dealt some terrible blows. Not death blows, but they are awfully battered. Linux will not survive these attacks if they continue. By SURVIVE I mean ISVs and OEMs will not build drivers or software for the OS. Linux will still exist as an obscurity along with Amiga, OS/2 and others. Windows will dominate.
    Micros~1 has the $$$ to buy whomever they please, and MindCrap is just the most current one.
    (I was told that MindCrap may have been used by Micros~1 to "prove" RealNetworks was at fault for breaking the RealPlayer on Windows.) Says something if it is true, doesn't it?

    Penguins need missile launchers since snowballs don't cut it now that war has been declared.

    Locutus
  • Didn't you see who sponsored the test? The name is MICROSOFT. I don't care if RedHat gave them enough information to tune Linux to beat an RS/6000; Microsoft either would have had the test result trashed or still would have 'tuned' Linux so it failed...
    Someone might say, "Oops, I guess we shouldn't have used those settings. We wouldn't do that again if we were to run the test tomorrow..."
    Sorry, but you must be a newbie, because Microsoft is involved and nothing logical and ethical matters. FUD is the name of the game. FUD kicked OS/2's butt, it kicked the Mac's butt, and it damaged Netware pretty good too. This 'test' was to cast DOUBT that Linux can do 'the job'. Bill Gates is spreading UNCERTAINTY everywhere he speaks. FEAR, that is a tough one, because it is very difficult to notice someone cowering in a darkened corner. So a project is killed mysteriously by upper management. Who really can say why? Look what Microsoft did to Intel and its Media Labs, and tried with Apple's QuickTime. Linux is Microsoft's threat, and history has shown what they do to threats. Linux does need some work, but obvious de-tuning of a test system and proclaiming its defeat to the world is classic Microsoft style.

    Locutus
  • It's nice to see that the media is taking this test for what it is. I was worried for the last couple days about seeing headlines about how Linux sucks compared to NT without any research.

    Now, the article does suggest that Linux has a ways to go with SMP/RAID support, but it does not suggest that it cannot rise to this. I think this benchmark is useful, not as a true representation of the facts, but it does show the community what it has yet to accomplish, and that the goal is well within reach.
  • Actually I set up the Linux Knowledge Base a while back but took it down due to other commitments and lack of bandwidth.

    The code / db is still around so if anyone has a server to spare with some decent bandwidth, let's go.

    The database is still in its infancy but it's a starting point for a project.

    Any takers? Email me at simond@foxlink.net.
  • Well, there is no doubt why NT has better support for quad SMP systems with memory beyond all recognition.

    Simply put, NT on a Pentium 200 with 64MB performs horrendously compared to Linux. Computer manufacturers realized this a long time ago, which is why quad-Xeon systems are being pushed like wildfire for any serious applications.

    So, just in sheer man-hours of development, I bet that NT has more than 4 times the amount of work invested in the required support than Linux does. We can't really blame Linux for this one. I'm quite sure that if the [Linux] developers could all afford quad-Xeons like Microsoft can, the race would be much different.

    Put simply, I'm not at all surprised about these test results. If Mindcraft had any serious grasp of Linux, they would have known that it was at a disadvantage before the tests were run.

    Funny that they didn't do the "Bang for the Buck" comparison. A quad-processor NT Server license ain't cheap.
  • No, reboot, I mean re-read. ;-)

    What I say in the paragraphs preceding the lines quoted boils down to this: Linux SMP/RAID isn't as mature as NT SMP/RAID. Anyone disagree? I didn't think so.

    As I point out earlier in the article, there's not even a gold driver for the in-test RAID controller. Given that Linux, for this platform, is running in a beta configuration, no matter how well tuned, it would indeed be a 'bear' for Linux to prevail even with tuning. Even so, though, I state, based on my testing and a lot of back and forth to other writers, benchmark writers and Linux experts at PC Week, S@R, Samba and ZD Benchmarks folks, that "Linux's numbers would have been much closer to NT, if not HIGHER, had Linux been as well tuned as NT."

    That's not winning in my book.

    Steven, Senior Technology Editor, Sm@rt Reseller

    PS. Were high-end scaling the be-all and end-all of what you needed in the real-world, everyone should be running AIX. It's not and people aren't.
  • SMP support was in there as of... what, 1.3? But until 2.2, it was very rough even by early Linux standards.

    From where I sit, SMP really only becomes a practical option with 2.2. There's a lot of work still to be done, but it's being done by a lot of good Linux kernel folks.

    Steven, Senior Technology Editor, Sm@rt Reseller
  • All I can say is it was there from the first seconds my fingers hit the keyboard to write the story.

    Steven, Senior Technology Editor, Sm@rt Reseller
  • >>Nobody in their right mind would install as a server a Pentium II 266, 64 Meg RAM with a 4 gig IDE drive. This is the hardware that ZD used in their test.
    When Microsoft stops claiming that that's more than enough machine for an NT 4 server, we'll quit reviewing NT on similar platforms.

    Another reason for our choice of platforms is that while we all lust in our hearts for quad-Pentiums with a gig of memory to call our own, the simple truth is most of us, and most businesses, can't afford them. Yes, CPUs and memory are cheaper than ever, but businesses don't replace their legacy equipment all that often. Our studies show us that a plurality of local LAN business servers out there are less well equipped than our test machines. So one reason we test on low-end systems is that that's what many of our readers still have to deal with.

    Again, though, if MS just said you need at least X and Y to run NT, we'd use it. It wouldn't be fair to do otherwise. But, they don't, the users don't, so we don't.

    At least with 2000, they're being 'somewhat' more realistic--128MBs and 300MHz minimums. I quote 'somewhat' because I suspect that those minimums will prove to be bare-bone minimums.

    Steven, Senior Technology Editor, Sm@rt Reseller
  • I agree we need a Linux tuning resource, but I think that is a little outside of /.'s bearing. I'm just a lurker who occasionally posts, but I agree a web site with a full-time maintainer would be a good thing.

    Anyone been dying to set up a Linux web site? I'm sure if the person who sets it up makes themselves available for email suggestions and has a web-based message board, they could collect most of their needed information rather effortlessly. Either that, or Caldera or Red Hat need to set up a web site.

    This might also make a good topic of an O'Reilly book. It could have a song bird on the front.

    Anyway, my $.02 minus sales tax. I hate Apr 15th.
  • Way to go.

    Please mention Sys V IPC tuning too if you could. I'm working on a project now which requires a bit of IPC tuning in the kernel, and for me it is a cinch, but for others who will eventually have to install the system themselves, this would be a stumbling block. Many newbies do not yet understand the art of recompiling their kernel, and don't unless they absolutely have to. Even some seasoned Unix users shiver when you tell them to recompile their kernel. It's sites like this which I think will help people adjust to the administrative culture shock.
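
    (A sketch of what "Sys V IPC tuning" typically involves -- on kernels that expose the limits under /proc/sys/kernel you can raise them at runtime; on older kernels they are compile-time constants and need a rebuild. The value below is an illustrative assumption:)

        # raise the maximum SysV shared-memory segment size to 128 MB, if /proc exposes it
        echo 134217728 > /proc/sys/kernel/shmmax

        # otherwise, adjust the compile-time constant (e.g. SHMMAX in the kernel headers)
        # and recompile the kernel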
  • Somebody should create a single domain dedicated to performance tips and enhancements for Linux.
    I'm sure that would get lots of hits.
  • Whether you like Linux or NT, the point that is clear throughout the original Mindcraft story and the ZDNet story is that objective tests on identical hardware are needed. Definitive information on this front will be good for both platforms' advancement, as well as proving or disproving the anecdotal evidence of Linux being faster/NT being slower, or the other way around. Perhaps Linux is faster than NT on lower-end boxes. Perhaps NT is faster than Linux on higher-end systems. ZDNet says they have tested the low end; Mindcraft states they have tested the higher end. What we need is for one independent body to test the entire range of hardware. We are also just talking about Intel; let's throw Alpha into the mix as well, since they both run on that. No matter what, though, give developers in both the Linux community and at Microsoft credit where it is due. They both have done things well, and they both have lots of areas to improve their products. Both platforms will be used for the foreseeable future. Learn the strengths and weaknesses of both, and build yourself the best overall solution. Blind adherence to anything is never good.
  • I don't think RedHat was ever given the ball to begin with. Mindcraft seems to have made only a token attempt at finding Linux tuning information.

    As I recall from the original story discussion, there was a single posting, from a private individual, requesting information. That request never even mentioned the fact that the comparison was to result in publication. Had they made the disclaimer, and admitted to being an 'independent' test lab, the Linux community would surely have inundated them with helpful suggestions.

    In effect, they shouted a question off a mountaintop and never stuck around long enough to hear the echo.

    I'm sure that the query they submitted to RedHat was equally low-key, probably to accentuate the documentation deficit that Linux suffers.
    I imagine it went something like this:
    Mindcraft: "Hi, I'm Joe Shmoe. I just bought Linux and I want to fine tune it."
    RedHat: "We'd be glad to help, but you need to be more specific."
    Mindcraft: "Well, you're not at all helpful, are you?" CLICK!

    Gee, RedHat wasn't very helpful, were they? :)
  • I would like to see a knowledgebase implemented and I'd like to contribute to it.
    As would I. What would it take?
    I've written minimal documentation for some of my programming efforts, but it was corporate work, and so the docs were not much more readable than the code by the time the reviewers got done.
    As for making something searchable, context-sensitive, intuitive... Well, I know nothing about natural language query processing, but I do like how helpful the Office 97 help system is. It's not concise, and it assumes ignorance, but it is pretty smart.

    Who in freecode land knows how to structure such an effort?
  • It's sad that Linux didn't get the representation it needed to kick M$'s ass. We all know Linux/Unix can pound the hell out of NT, but only if it's used right. Linux can easily hold 200+ users on a single server at the same time, while I know from experience NT can handle at most 50. NT Server works all right as an intranet server... at least better than Novell. I have tweaked many NT servers before and one thing remains the same: it's not stable enough to act as a web server. It runs all right at 35-40 users... but if your website only has that many users, you're in deep trouble. I always suggest switching to a BSD or Linux server when someone asks me about a website.
  • The only thing they didn't point out is that Microsoft paid for the whole thing.

    Actually, they DO mention this. At the very beginning, in bold type even, they call this a "Microsoft Commissioned Study".

  • That article was nearly as much of a Linux suck-up as the Mindcraft study was a MS one. While it's nice to read, I would never call it objective. So before we pour the champagne, let's start talking about how we're going to smoke that nasty ol' SMP/RAID NT system in a fair test!
  • ...what matters is whether it's good science or bad science. Someone's gotta pay for it. While it is quite likely that MS commissioned Mindcraft to perform bad science (they have shown this tendency in the past), it was certainly within their power instead to commission a fair study.

    I'm all for applying critical thinking and considering the motivation of the speaker. I'm against automatically assuming a claim is false just because someone we dislike made it. I've seen plenty of usenet flamewars in which entirely true study findings were discounted solely on the basis of who paid for them. That's just closed-minded.

    On the other hand, specific point-by-point rebuttals are right on. Producing evidence that refutes the claim based on better science is golden.

    I guess it's just easier to be a mindless naysayer and pretend to be a sophisticated skeptic than it is to approach disagreeable ideas with an open mind and use logic and rationality to pick them apart, and risk having to admit being wrong.
  • by cesarb ( 14478 )
    Since you're against Linux, what you said is FUD. I'm gonna send a bunch of crazy Linux zealots and Linus minions to deploy a nuclear warhead on you.

    While I agree that Linux users do sometimes overreact, I think that almost everything that says Linux doesn't work provokes the same kind of reaction: 'If it doesn't fit what I currently believe is right, it is wrong and I should by all means try to destroy it; but after the media says it's right, I will rethink and say to all, "forget what I said before".'
  • On the contrary. Most of the posts are merely pointing out the fact that Microsoft sponsored it.

    It is standard practice in evaluating reports to note the entity responsible for commissioning them, in order to consider possible bias.
  • It was hardly all pro-Linux. He acknowledged that support is hard to find. He also said that Linux might not be able to beat the NT machine even in a fair test with Linux tuned. It was mostly Mindcraft-bashing, not Linux-loving.
  • It's much better to have a single maintainer for any large piece of documentation, whether this documentation is a single chapter in a HOWTO (with a TOC sent up to the TOC maintainer) or an entire HOWTO. This promotes consistent language and usability.

    While code yells at you if you aren't syntactically correct, documentation doesn't, or at least not as fast. Anyone who reads code to modify it will have a much greater chance of understanding it, even if it is poorly written/documented, than a newbie who is trying to learn something (a HOWTO reader).

  • It's pretty hard to be an NT newbie for more than about a week unless you've never seen an MS OS in your life. What you see is all you get, kind of thing - click on Start, Programs, and everything under that is everything you can do. Spend a week playing with that and there's not much else to learn. There is some command line stuff, but I don't know of any that does anything other than (poorly) emulate similar Unix commands or maintain backwards compatibility with LAN Manager.

    But to answer your question,
    1. Windows NT Resource Kit
    2. Microsoft Technet
    3. Microsoft Knowledge Base
    in that order. Just about everything I know about NT has come from those sources. When I read a book about NT, I find it is generally a dumbed-down how-to derived from those sources.

    As an example to address your question (and not a great one, since it's for an NT client - Win 95 - and not NT itself), I remember the first time I had occasion to use those resources, as a phone tech in 1995. Somebody couldn't log in to their NT server with Windows 95. I searched TechNet for the error message they were getting and found a registry setting that would cause the problem. At the time, I knew NOTHING about networking - all I knew was how to do boolean searches against an MS database.

    So yes, I agree that it's a lot easier for the inexperienced (and the stupid) to optimize Windows NT than Linux. Kind of like saying it's easier for a 10-year-old to make a paper airplane than a Boeing 777 :)
  • Mindcraft finds themselves with their reputation intact: "We do what we're paid to, thanks for the check, look forward to doing business with you again sometime, Mr. Gates!" If they had any other reputation, they'd go out of business rather rapidly, doncha think? Good parasites do what they can to protect their hosts.
  • Call me a conspiracy theorist if you like, but this has been one of the most cleverly orchestrated bits of FUD I have seen in a long time. Look how it worked...

    1. Microsoft hired a known-biased "independent" lab to produce some obviously skewed results. If corporate America believes this, that's a bonus.
    2. The lab does tests on hardware for which there is no solid evidence that Linux is better.
    3. The results are roundly drubbed by anyone with half a brain. The lab's general conclusions are shown to be invalid.
    4. The press examines the data and announces that Linux does not scale well and is not well supported.

    Microsoft did a masterful job of creating the perception of shortcomings in Linux.
  • The only thing they didn't point out is that Microsoft paid for the whole thing. Everything else they pretty much nailed.

    So, all you gurus out there, it seems there's a definite need for a "Highend-Server-Tuning-HOWTO". Any takers?

    Also, I'd love to see the same test run on NT, a properly tuned Linux, Solaris 7 (which is supposed to be Much improved over 2.6 on x86 hardware), and Netware to see how many OSes beat NT, when Mindcraft claims the other way around. Unfortunately, I don't have the money for a quad-Xeon box or Netware, or I'd make my website useful for once ;-)
  • I loved the RedHat comment that if Mindcraft had called RedHat PR, they would have gotten the tuning staff.

    I have a great idea. Let's benchmark installing NT 4 against Linux. What we'll do is arrange, in advance, for some Linux gurus to show up to install Linux on our test box.

    Then, we'll shoot a call in to Microsoft front-line tech support and ask them to send over an NT guru to install NT 4 for us.

    I bet the MS guy will laugh about as hard as the RedHat tech support guy laughed when Mindcraft made the same request of him.

    Then we'll publish results that the NT product wouldn't even jump out of the shrinkwrap on its own, but the Linux installation went flawlessly in 30 minutes.

    Heh heh, I really wish someone would do it, and publish it.
  • It's so simple -- at least on the face of it. Mindcraft Inc., in a new study commissioned by Microsoft Corp., found that "Microsoft Windows NT Server 4.0 is 2.5 times faster than Linux as a File Server and 3.7 times faster as a Web Server."

    So how much did Bill pay Mindcraft, Inc. to change the outcome in their favor?

    Did they also mention that NT crashes faster than Linux? Funny - I have a friend who works as tech support at an ISP; they have 4 FreeBSD servers, 6 Linux servers, and about 20 NT servers. Each of the FreeBSD servers holds 300+ users and works like a charm, and the Linux boxes hold about 200 users with no problems... NT? It holds about 50 before it starts to crawl... So the company has to have 4 times as many NT servers to accommodate those users who insist on using it. NT is not ready for primetime...
  • I really feel a need to respond to this comment. Can you tell me truly that you know how to optimize an NT install as well as they did? It's all fair and good to say 'Well, Linux is hard to tune', but replace Linux with NT and the statement still holds true. They were registry-hacking performance up. They weren't just tuning; they were fairly good NT gurus. I just hope that after all this no one starts to think 'Hey, all the Linux geeks are complaining about the tweaks; does that mean that since NT was tweaked and Linux wasn't, NT is easier to use?'
  • Actually, this howto would probably be best written by tons of people, in a pseudo-opensource way. One person isn't going to know all of the tweaks, but if someone (takers?) sets up a website where tweaks can be submitted, it would be much better.

    Now that I think about it, this is the best way to document _all_ of linux. An organized site where anyone can write in a problem they've had, as well as a solution. As the number of users grows, more and more people can write in problems and fixes, until finally anyone looking for an answer can search and find an answer. Does something like this exist yet?
  • ZDNet got much better numbers from Samba recently:

    ZD's PC Week [zdnet.com] says a similar configuration got them 150Mb/s - 160 Mb/s, depending on the client load. This is still not 217 Mb/s, but it does indicate a poor configuration of samba in mindcraft's test.


  • You know, I'm believing less and less that the bogus court demonstration video was a simple "accident".

    OTOH, I'm believing more and more that MS has perfected the use of deniable "mistakes" that just happen to make big initial splashes in the press, but are quietly found to be wrong later.
  • It reminds me a little of the legal system these days. With the right fee, it's not hard to find a highly credentialed expert witness who just happens to agree with your side.

    Amazingly, the other side also has an expert witness with great credentials who (surprise!) disagrees with everything your witness says.
  • Come on. How many Linux newbies are going to be
    running a quad Xeon system with 4GB of memory? Complaining that a newbie can't find the tuning info for it is gratuitous.
  • Linux out performs Win95 on my 486 uni-processor.
    I have been tweaking MS-DOS/Win3/Win95 for over
    a decade, over 5 years on this box. I have been
    tweaking Linux for less than 6 months on the same
    hardware. Get a clue, Mindcraft.
  • I think the author said this mostly as a challenge to get the ball rolling on making Linux more scalable to SMP and >1 GB RAM systems. This is already in the works, and I congratulate the core Kernel developers on the fantastic growth we've seen in this area. I believe that once Linux SMP is fully optimized, it will truly kick the pants off of NT.

    Keep in mind that the server used was so high-end that most Linux users would consider it a waste of money. The point is, Linux doesn't *need* high-end equipment to get high performance. NT does. In fact, you could probably run the same test with 3 of the processors removed and only 64 MB of memory, and the Linux figures would stay the same while NT would be crippled. That's my opinion, at least.

    -- UOZaphod
  • It's this kind of comment which makes me despair. The sort of clueless optimism that did so much to damage Java in its early days, untempered by anything vaguely connected to reality.

    Tuning is about optimising your machine/code for the conditions that it's running in. Nobody runs it under similar conditions, therefore everyone needs different tweaks. It depends upon your load, the types of request, how much memory you have, your bandwidth, back end CGI processes, etc., bleeding etc. And don't tell me that Linux is running fast enough as it is; next you'll be telling me that nobody would ever need more than 1GB of memory.

    The website that I'm partly responsible for at my (very big) company could always be faster. Such is life.

    Cian
  • Nice to see ZD calling Mindcraft on their, ummm, "creative" testing procedures. And pointing out that ZD's own tests have shown Linux outperforming NT on smaller, more typical servers.
  • My take: We say usenet support works better. Someone tested it. We failed. We need to address that. Usenet sucks anyway - article availability is non-deterministic, and it's relatively slow.

    That said -- I do think that Mindcraft's attempt to gain support was a token attempt only.

    If we want to keep the reputation as being a better source of support (via usenet) than Microsoft, then our community is at fault for failing to answer or to ask for clarification.

    It does not matter that the individual was a private individual. It does not matter if it was for a private box, a commercial box, or for a publication. From what I heard, no answer or request for clarification was ever made via Usenet. If we're only going to help article writers then we're a lousy source of support.

    Don't justify why we failed, accept the hit and produce a solution. I've thought of the knowledgebase solution but I never thought to mention it and see if anyone else is interested. My bad. I would like to see a knowledgebase implemented and I'd like to contribute to it.

  • I've said it b4 and I'll say it again: If HW vendors refuse to lend any aid to Linux hackers then there's little or no chance of Linux beating M$ in a proprietary environment.

    Put it this way: Given 2 reasonable people and 2 disassembled bicycles in Maine. One person is given the build instructions, the other isn't. I'll let you decide who will get their bike constructed first and ride it all the way to San Jose (or whatever).
  • is available at www.linux.org.uk [linux.org.uk]. It talks about the impartiality of the tests, and how companies are starting to go out of their way to criticize Linux.
  • Thank you for setting the record straight. I enjoy reading articles like that which are not sponsored by either side and are fairly objective.
  • ...at least from what I have seen. I consult for a large base of businesses, and for the most part that is what I see when I walk into the server room (of course, I try to get that changed, at least more memory anyway). But from what I have seen in the industry, that is pretty much the norm.
  • Do you really think they tried that hard at all to find documentation? Anyone who has worked with computers for a few years and has an average intellect can buy a book from RedHat or O'Reilly, read it, and have their machine rockin' in a few days. The fact is they had MS backing from the word go and made as little effort as possible to get documentation/help while still being able to say "Oh, we tried to find documentation but it was sooooo hard to find that we gave up." Bull$h1t. RedHat is not stupid enough to say "no, we're not going to help with your benchmark test." But if Mindcraft called into normal technical support and said "yeah, I need to optimize this or that", of course that tech guy is going to say no. If they had approached RedHat's PR department (which I bet anything they didn't) and said "hey, we're doing some benchmarks against NT for international publication and we need help optimizing Linux for the tests," I guarantee there would have been a whole slew of techies pimping out that box in no time flat. This isn't anybody's fault but Mindcraft's.
  • I have to say, this article was well written. It's nice to see a somewhat objective opinion voiced in such a heated debate as this one was.
  • I'm glad to see ZDNet look at these results objectively (well, as objectively as possible, this *IS* ZDNet after all!) and point out the obvious flaws in the testbed design. They do make a very valid point though, high end NT server tweaking info is MUCH more readily available than high end Linux server tweaking info.

    I think the most VALID test bed design would be to do this whole test again, but with OUT OF THE BOX configurations of both systems. This would set the constant to compare by. Then have both systems *tweaked* to the max and test them again.

    This would do two things: one, it would not create falsified results due to an uneven playing field, and two, it would let IT management see what can be gained by hiring talented Linux admins and TRAINING their own admins. The more they know, the better the company's servers will run!
  • I must say that was a very well written article. It pointed out the M$ sponsoring of the "benchmark," which is a key thing to note about the testing: M$ paid for it all.

    The fact that they showed how it is possible to find the information that Mindcraft claimed it was not able to get proves that Mindcraft really didn't try all that hard to do the optimizations.

    I'm glad that ZD is exposing M$ and its lackey companies like Mindcraft for who they really are!

  • I do agree wholeheartedly that Linux could, properly tuned, beat the living windows out of NT. But the whole problem is tuning it. I am an admitted Linux newbie and I am having a hard time finding information to tweak and polish my install. But I am finding the information slowly but surely. At least I am trying to find it. It looks like Mindcraft sat in a room by itself and asked if anyone knew how to tune Linux. I think RedHat dropped the ball on this one.

    RB
  • Looks like you need to spend a fair whack to get any decent performance out of NT.
    From the Dell site the config they used comes to $40,617, except Dell only supply them with EDO RAM, so you have to add on the cost diff between 4GB EDO vs 4GB 100MHz ECC SDRAM.
  • > 3.1) Linux certantly DOES perform Syncronous
    > I/O! Otherwise all I/O would stop while one was > in progress.

    Um, I think you missed the letter "A" in front of "Synchronous" - the guy said it *needs* to be ASYNCHRONOUS, not synchronous. HTH.

    -- laural
  • Maybe a more sensible question to ask would be
    "What is the best performance which can be gotten
    for a certain amount of money?" or even "What is
    the least amount of money which can be spent to
    get a certain level of performance?"

    How about issues of reliability and expansion?
    For these, clustering makes more sense, in that
    a single PSU, motherboard, or memory failure will
    not take down the whole system. (Indeed the way
    NT was set up, so as to have each NIC assigned to
    a specific CPU, is part way to being a cluster anyway.)

    Maybe someone should do some tests with Compaq
    hardware; after all, they bought DEC, and DEC had
    their VAX machines clustering at least 13 years
    ago.

    Hold on, though, didn't some people from DEC
    work on NT...

  • NT won because RedHat screwed up by not
    recognizing a need to support Mindcraft's
    efforts and NT won because Linux still has
    a beta RAID driver. The former can be solved
    right away;


    I suspect that the way Mindcraft approached
    asking Red Hat was not dissimilar to the way
    they approached the newsgroups for support
    with Apache.

    Note that they did get a followup asking for
    more information (and the output of netstat).

    Did they follow this up? Either they didn't, or
    Dejanews has lost the posts...
