Linux Software

NOS Crossroads

Mark Wright sent us a link to some benchmarks over at ZDNet that examine assorted NOS options. NT is benched, as are Solaris, NetWare and Linux. Linux holds up quite poorly in this review.
  • by Anonymous Coward
    They should give each NOS a budget.

    Then go out on the open market and bid, because the best solution a company can get matters more than how well some NT expert can tune a Linux box.

    The most stable and fastest solution to meet the budget for each NOS wins. They should also do four separate runs:

    One looking for "Fast Static web"
    One looking for "Fast application"
    One looking for "Fast dynamic web"
    and one looking for "Fast file services for windows clients"

    In the first I would configure MANY fairly low-end boxes (perhaps PII/333+, 256MB RAM, 4.5GB SCSI), a layer 4 load-balancing switch, and Zeus or another web server designed for speed (WHICH APACHE IS NOT!!).

    For the second, I would spec a dual 21264 system (no more $$ than a quad Xeon), or a quad Xeon (only if the app isn't available for Alpha), with a fast disk subsystem (like a RAID 5 of those 7,200rpm+ drives).

    For the third I'd spec several low-end boxes (like #1 but fewer) and a hefty backend box: single 21264/533, 1GB RAM, and one more 100Mbit NIC than there are frontends. I'd put two NICs in each of the frontends and cross-connect the frontends to the backend. I'd also give it the disk subsystem from the file-serving config below.

    For simple file serving, personally, I'd say get a NetApp and be done with it. Why use a non-dedicated server?

    But if you need a non-dedicated server for file serving, I'd use a single Xeon at the highest MHz available, toss in a big lump of RAM, and a FAST FAST FAST disk array (RAID 0/1, yummy!). I'd trade CPU for faster disk, down to the point of a 350MHz PII.

  • I think it would be beneficial to have a cost/value-based benchmark where each OS tested is allowed only X dollars for the purchase of hardware and software. For example, each OS gets $5,000 for software and hardware. If your OS and software cost too much, then you get wimpy hardware. If your OS and software are cheap, then you get to beef up the hardware a bit more.
    So a value-based comparison shows what really stands out.
  • by Anonymous Coward
    Just several weeks ago, ZDNet posted benchmarks that showed Linux scorching NT and NetWare. Makes me wonder if Microsoft caught wind of that and cut some kind of deal with them.
  • by Anonymous Coward
    How come we never see any NT vs Unix on NFS tests?

    :)

    Clair
    "I drank WHAT????" . . . Sorcrates.
  • by Anonymous Coward
    Any test or benchmark that shows that NT is better than Linux is wrong because:
    a) Microsoft paid for the test
    b) Linux was purposely handicapped
    c) NT was purposely favored
    d) The methodology was wrong

    Since this is what every post here is going to say, why bother posting at all? You people are among the most closed-minded and prejudiced I have ever known. It's no wonder you're not taken seriously.

  • I know every self-respecting Linux Phreak wants to see Linux kick ass every time it's put to the test, but Linux just hasn't matured or "evolved" enough to accomplish that.

    I think the most obvious victory for Linux has been overlooked here. The simple fact that a year ago Linux would never even have been considered as a test platform for this article, but today is being evaluated right up next to the giant NOSes of the industry, is a HUGE victory for Linux. And with time, most if not all of the complaints will be remedied.
  • Hee-hee. I just went shopping. I didn't try to cook the numbers; I simply went looking for a reasonable workgroup server as a test of the statement that Sun equipment would be "many times more expensive than the Wintel setup".

    Here's the answer:

    -- quote from sun.com ------------------------------ 4,495.00 (US$)
    Enterprise 5, 333MHz w/ 2MB ECache, 128MB DRAM, PGX24
    graphics, 9GB disk, 1.44MB floppy, 32X CD-ROM, Solaris
    7 installed and a Server Right-To-Use (RTU) license

    -- quote from dell.com ------------------------------ 5,027.00 (US$)
    Dell PowerEdge 4300, Pentium II 400MHz/512KB cache, 128MB
    RAM, 9GB disk, 14/32X SCSI CD-ROM, 1.44MB Floppy, Intel Pro
    100+ Ethernet NIC, Microsoft Windows NT Server 4.0

    So ... you can argue this option up and down, add this, subtract that, or say 'you can get better deals', etc., but clearly it is not many times more expensive to buy a SPARC system.

  • by Anonymous Coward
    If you look at SPECweb96 http://www.spec.org/osg/web96/results/ - another place Linux doesn't play - you'll notice that Sun actually performs better on x86 hardware than on SPARC.
  • by Anonymous Coward on Monday May 10, 1999 @03:31PM (#1898839)
    Gee, so a Windows NT fileserver can save a few milliseconds over a Linux fileserver? Oh, and the quad-Pentium NT webserver that I can't afford can save a few milliseconds serving up static web page content?

    Well, I DON'T CARE, because I don't have to drive to the office at 3:00 am to reboot the Linux server. That's worth a lot more to me than those milliseconds.

    While we're doing these benchmarks, let's quit serving up static web pages and start serving up some CGI-generated content. Watch what happens to NT then, folks.

    Yeah, I'm ranting and I'm hiding behind AC, but I'm also speaking from experience.

    Linux zealots aren't born. NT MAKES THEM!

    ^^ Feel free to use the above as a sig. ^^
  • by Anonymous Coward on Monday May 10, 1999 @03:31PM (#1898840)
    If you follow the link to the PC Week lab notes, you see that Linux outperformed NT when using NT Workstation clients on the SMB tests. So the real results aren't so bad. As for the WebBench stuff, that is more a test of Apache vs. other web servers than of Linux vs. other NOSes. They need to put Squid or some other cache in front of Apache, or maybe use Zeus. And really, if you are going to test enterprise readiness, reliability and predictability are the most valued attributes of an enterprise OS. IMHO, reliability should account for at least 65% of any scorecard.
  • by Anonymous Coward on Monday May 10, 1999 @04:06PM (#1898841)
    Loading a new process in NT is slow and memory-intensive. Microsoft's own tech notes admit this (do a search on "CGI" in MS TechNet). Linux is faster at running a new process.

    This, BTW, is one of the reasons Microsoft pushes Active Server Pages and ISAPI. The user code runs in the same process space as IIS (unless you use MTS) and doesn't have to be loaded each time it's called.

    Predictions --
    1. A comparison of CGI-generated content could well show NT IIS getting spanked in terms of pure speed.
    2. Microsoft would challenge the results, saying that the benchmarkers should have been using ASP or ISAPI. They would probably throw in snide remarks about CGI being "old technology".

    Your point is very important. With more sites becoming interactive (esp. the "enterprise" sites these benchmarks target), static page delivery should be met with a big yawn. (A rough fork() timing sketch follows below.)
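    To put a rough number on the fork() cost discussed in this comment, here is a minimal C sketch (an illustration added for concreteness, not code from the article; results vary enormously by kernel, CPU and era) that times fork()/exit()/wait() cycles:

        /* forkbench.c - time N fork()/_exit()/waitpid() cycles.
         * Compile: cc -O2 forkbench.c -o forkbench */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int i, n = (argc > 1) ? atoi(argv[1]) : 1000;
            struct timeval t0, t1;

            gettimeofday(&t0, NULL);
            for (i = 0; i < n; i++) {
                pid_t pid = fork();
                if (pid == 0)
                    _exit(0);           /* child: exit immediately */
                if (pid < 0) {
                    perror("fork");
                    return 1;
                }
                waitpid(pid, NULL, 0);  /* parent: reap the child */
            }
            gettimeofday(&t1, NULL);

            double us = (t1.tv_sec - t0.tv_sec) * 1e6
                      + (t1.tv_usec - t0.tv_usec);
            printf("%d cycles, %.1f us per fork/exit/wait\n", n, us / n);
            return 0;
        }

    A CGI-style server pays roughly this cost (plus an execve) per request, which is the overhead that in-process models like ISAPI and ASP avoid.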
  • "and Linux, which requires far less hardware than the other NOSes and could probably be ported to
    solar-powered calculators,"

    I keep asking myself: when these people do benchmarks, why do they use quad-CPU boxen when they know Linux doesn't work so hot with 'em?

    Why not assign each OS a set number of dollars, and spend it the best way for each OS? NT can get a quad 500MHz Pentium III box, and Linux can get a cluster of PII 450 boxes....


  • It pointed out that:
    • Linux + Apache maxes out when it gets 2000 HTTP requests a second
    I would bet a great deal of money that this number was, in actuality, closer to 1024, which is the total number of processes Linux 2.2 can handle by default. There are patches floating about that increase this number (significantly? I'm not sure exactly to what - though I believe it's included in the -ac series of patches).

    Why did they report it as 2000? They were probably running a LOT of clients which were connecting and dying off very rapidly, and as someone mentioned above, fork() performance in Linux is stellar. It was probably a case of the benchmark's resolution not being high enough.

    As for the 200 Mbps - I would guess that this has to do with the network adaptors. The capacity to have multiple adaptors for the same interface is available, and was developed as part of the Beowulf project. It's probably going to be integrated into 2.3.

    In any case, these problems are probably all already fixed, but not tested enough (or widespread enough) to be included in a stable distribution's (or kernel's) release. (A small probe of the process ceiling is sketched below.)
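    As a side note on the 1024 figure above: on Linux 2.2 the cap was a compile-time kernel constant, but most Unices also enforce a per-user process ceiling you can query at runtime. A small hedged C sketch (illustrative only; RLIMIT_NPROC is common on Linux and the BSDs but not universal):

        /* nproclimit.c - print the per-user process ceiling. */
        #include <stdio.h>
        #include <sys/resource.h>

        static void show(const char *name, rlim_t v)
        {
            if (v == RLIM_INFINITY)
                printf("%s: unlimited\n", name);
            else
                printf("%s: %lu\n", name, (unsigned long)v);
        }

        int main(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
                perror("getrlimit");
                return 1;
            }
            show("soft process limit", rl.rlim_cur);
            show("hard process limit", rl.rlim_max);
            return 0;
        }

    A forking server such as Apache runs into whichever of these ceilings (or the kernel-wide task limit) it hits first.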

  • Choose 1 general setup & switch between the available Linux webservers to see which ones perform better given different scenarios.
  • by gavinhall ( 33 ) on Monday May 10, 1999 @06:46PM (#1898845)
    Posted by Jeremy Allison - Samba Team:

    > Now, as both Solaris and Linux had nearly
    > identical graphs for the NetBench part, and
    > both were using Samba, I think we know where
    > the bottleneck there is...

    Err, actually no. The bottleneck isn't Samba.

    If you look carefully at the Solaris analysis at this page:

    http://www.zdnet.com/pcweek/stories/jumps/0,4270,401974,00.html

    You'll find this interesting quote:

    "To isolate the disk subsystem as a bottleneck, we created a
    temporary RAM disk to hold workload files, effectively
    eliminating the need to hit the RAID array for data. In this
    configuration, the powerful capabilities of Solaris 7's networking
    kernel were unleashed--to the tune of 360M bps on NetBench."

    What this means is that when Samba is run on a highly tuned SMP
    OS such as Solaris (ignoring the disk subsystem for the moment),
    Samba can produce numbers that outperform *all* the other
    systems (the peak NT number is around 340, I think). What is killing
    Solaris here is its awful disk system. If they had a decent disk
    file system they would have beaten NT when using Samba to
    serve Win95 clients, as their SMP is so good.

    This corresponds well with the results I get in the SGI labs
    using IRIX, which is also a highly tuned SMP OS (but with a
    better disk file system, XFS :-). I can beat NT comfortably
    using Samba and IRIX on an SMP box, but IRIX only runs on MIPS
    boxes from SGI.

    What this means for Linux is that we need to do more work
    on the SMP scaling in the Linux kernel, as Samba isn't the
    bottleneck here. I'm doing a lot of work on userland caching
    at the moment to help out on the Samba side, but Linux just
    needs a bit more SMP work. Don't worry, it's coming (I know
    *lots* of people working on this)........

    Regards,

    Jeremy Allison,
    Samba Team.
  • Well, Solaris wasn't tested on its home court. I'd like to see the rest of the crew up against a SPARC-based server of equivalent horsepower.

    tugrul
  • People who write COM objects and ISAPI extensions for NT/IIS spend a lot of time making sure that their objects are perfect

    I would like to meet some of these people, as I am apparently unacquainted with them. :-)

    One of the truly *awful* things about the whole NT/IIS infrastructure is how ASP stuff can (and does) hang inetinfo.exe. It's not dead, so it still accepts a connection; it just doesn't actually do anything at that point.

    --
    Get your fresh, hot kernels right here [kernel.org]!

  • roughly:

    Linux 71
    NT 88
    Novell 79
    Solaris 83


    hmmmm...maybe somebody cheated on the mid-term?
  • Um... do you mean NT4SP4, because SP5 does not exist...

    I've never seen a load higher than 1.00 from a process gone awry, and certainly not on a recent distribution. What were you doing?

    And yes, I like benchmarks, especially when they correspond to real-world tasks, unlike this benchmark. (Why would I run Solaris on Intel? Why would I get four Pentium ]|[/500's with four network cards in one box to serve static pages? I'd have a huge networked game of Quake ]|[, and play it locally as well, of course. Benchmark that. :)
  • Yep. Last time when they did a Netware 5 vs. Linux comparison, here was the server:

    Our server was outfitted with a 266MHz Pentium II processor, 64MB of memory, and a single 4GB IDE disk.

    Bandwidth? 100Mbps. Why not include NT? Probably because it runs like a dog on that hardware. However, if you throw enough money at it, you get the funky Mindcraft configuration benchmark, which is what they did this time.

    I would *love* to see them run a comparison of Linux on reasonable hardware (like the configuration shown above, or a little better) and then everything else. Call it, maybe, servers for under $2000. Even if you ignore the cost of NT Licensing, NT still loses.
  • Apps that distribute themselves across multiple processes shouldn't have any problem taking advantage of extra CPU's. The implication on ZD's part that Unix/Linux software needs to be rewritten to take advantage of more system resources is rather BOGUS.

    Also, there are fully supported RAID controllers - although the more conservative thing to do would be to have an external RAID rack to begin with. IT departments are typically quite conservative, after all.
  • Funny, that doesn't keep us from running all of our Solaris intel boxes on RAIDs.

    When you're that paranoid about data integrity, RAID on the server boards isn't an issue, because it isn't bothered with.

    Furthermore, the real value of Solaris is Sun's hardware support. Comparing Solaris/x86 to anything is rather silly when lower-end SPARCs from Sun overlap with PC workstation prices on the low end and higher-end x86-based servers overlap with Sun Enterprise servers on the high end.

    Thus, one wonders what the real intent of this article actually was...
  • You're a damned filthy LIAR. Several of us on cola advocate systems other than Linux for different requirements.

    We just don't give Windows any respect. For that, you try to claim that we always slam any OS !Linux, when it is typically only Windows that universally gets dissed by LinVocates.

    Actually, my first thought when seeing this article was not that they were doing Linux wrong but that they were doing Solaris wrong.
  • e) Wieners that don't have enough sense to test cheap Sun Servers and give Solaris a D in RAID support.

    Also, a Cobalt Qube should have been in the mix, and perhaps some other turnkey-style servers.
  • Come on, guys - that D for Solaris in RAID support should have been a dead giveaway that something is screwy about this article.

    Besides, people who care about their data enough to use RAID aren't going to be going through the OS (beyond the SCSI subsystem) to begin with.
  • It seems that every test aimed at "enterprise level solutions" chooses a single 4-processor machine with RAID. Setting aside the RAID issue (improved hardware driver availability will remove that problem in the relatively near term), most of the performance issues there could be directly attributable to the fact that non-threaded software is being used, and thus is performing poorly on the multiple-CPU benchmarks.

    Perhaps Apache should move to a fully threaded model for version 1.4, or a threaded server (like Roxen [roxen.com]) should be tested. I wouldn't mind seeing a comparative WebBench result between Apache and Roxen on a 4-CPU box (or even a 2-CPU box, for that matter).

    --
    rickf@transpect.SPAM-B-GONE.net (remove the SPAM-B-GONE bit)

  • I develop intranet apps for a living, and I wonder why they keep testing "enterprise" webservers using only static pages. Most of the load on any webserver is going to be from generation of dynamic content. I don't care if you are using CGI, Java servlets, mod_perl or whatever.

    In most real applications static files will clog the network pipe before they hit the CPU. And as has been noted, there are some Unix webservers that can serve static pages much faster than Apache.

    But we do need to document all of this better.
  • They went to great lengths to explain that the bad Solaris performance was solely due to a slow `rename' operation, and that Solaris was `orders of magnitude' faster at everything else.

    They also explained that the good NT performance didn't mean that NT excelled at anything in particular, it just wasn't really bad at any of the operations.

    Apparently, they used a test methodology that emphasized the slowest component.
  • by alany ( 1398 )
    They make Linux's flexibility sound like a disadvantage. Maybe for people who lack the money/brains, but to my mind enterprise-sized systems would be maintained by people with a clue. Solaris ain't no cinch to set up either.

    Saying that NT is better than Solaris is just plain dishonest. Funny how they attack Linux for poor SMP support but then gloss over the huge difference in NT vs. Solaris SMP support.

    Try remote administration of NT boxes, you turkeys, then tell me Unices are hard to maintain remotely.

    The article isn't that bad, I guess, once you realise that it is just another marketing-driven review.
  • From one of the articles ...

    For example, when testing the performance of the Apache Web server, which comes bundled with most Linux distributions, we noticed a speed degradation while ramping up clients. After careful examination of the code, we found that the problem related to the number of processes that were immediately spawned by Apache. We edited a parameter in Apache's configuration file to compensate.

    Right, "after careful examination of the code" but we forgot to read about the StartServers directive in the manual. Benchmarking people are not going to spend time reading manuals to help linux look good, especially if the commercial products running on the other platforms have nice GUI interfaces for setting these things. How about a "Ready for Benchmark" flag that can only be set if the operator has modified certain things and have a script that can quickly compile a report of all settings that can be published along with the findings?

  • I have to agree. I don't see how anyone could fault these tests, especially comparing Solaris and Linux, as they were BOTH running similar software. Linux didn't do all that great here. What we need to do is figure out WHY and fix 'em.

    If it happens that it wasn't tuned properly, then perhaps the tuning needs to be done by default.
  • Gimme a break. This is a pretty much standard setup for a large-scale server. And if you had bothered to read the article, they actually included SOURCE-level tweaks to Apache AND the kernel, and several configuration options with Samba. Hell, Solaris used the same release with the same options as the Linux box.

    And if you actually read it (once again, notice a trend?), they rated SMP a B, the same grade they gave NT.

    There is no FUD here. These are legit problems with Linux. It still doesn't scale up well. It's a given, unfortunately. Heck, if they moved Samba into the kernel itself, that'd be a 20% increase in performance right there.
  • It's i386-based Solaris, NOT SPARC. The Intel release is hampered by the lack of ANY sort of decent PCI support for the RAID boards themselves.
  • I think they worded the point incorrectly. What they were doing was comparing different operating systems on an x86-based system. I would have to agree that it'd hardly be fair to say that this is 'Linux vs NT vs Solaris vs Netware'. Who knows how Solaris would do on its own hardware (OK, so we have a CLUE.. ;-P) vs something like Linux PPC or NT on an Alpha.
  • Yep, weirdly similar setup, which has been proven to work well with NT; and they also echo the "no central repository for performance tuning information". They had to put out this test quickly before this gets outdated :)

    But I don't really care anymore. They are free to do that, and are also free to ignore that no benchmark for Linux will be valid longer than a few days. They were using a much later kernel, 2.2.7-pre-something (exact version not shown, at least where I looked), than the Mindcraft case (look, they try to be honest by using current software); but kernel development is still in progress to fix all the aching spots after the Mindcraft fiasco. A few things have been done, but still, they must hurry to test it before it all gets better than their ad-cash-cows...

    As ignorant as I am of the BSD stuff (my bad, agreed), it would be really interesting to see how different it is from Linux, i.e. how the Unix architecture in general copes with these kinds of benchmarks on the exact same hardware. It would tremendously help to find the spots where Linux could be easily tuned to match another open-source system.

  • It does look like it tries to be a little more impartial than most benchmarks have been lately [slashdot.org]. But I'm still very suspicious. I think in some cases they might have been comparing apples and oranges. It is really hard to put value behind an untrusted benchmark without all the nitty gritty details on things such as hardware, test methods, optimizations, etc... Oh well.

    P.S. Here ( http://www.zdnet.com/pcweek/stories/jumps/0,4270,401971,00.html [zdnet.com] ) is an interesting quote from the article. Not something that is usually printed about Winbloze configuration: "Working with NT 4.0 did involve working with some parameters so complex that Linux seemed pleasant in comparison."
  • Actually, I have found NT4 SP5 to be relatively stable, and have had major problems with Linux 2.2/glibc 2.1-based distributions spiraling into double-digit loads from a single process gone awry... although it is Linux, so at least I can fix it if I find the problem :)
  • SP5 IS out. And I was compacting my 96MB linux-kernel folder with kmail.
  • by ChrisRijk ( 1818 ) on Monday May 10, 1999 @04:17PM (#1898870)
    Important things they missed out: stability/reliability, security, availability, interoperability; and they didn't cover scalability properly...

    There were some other things I thought were kinda strange...I'll concentrate on Solaris here.

    For Solaris they actually used Solaris on Intel, which is fair enough considering they were looking at doing stuff on the same hardware, but isn't that good for 'real world' situations (a comparison with a Sun E450 would have been interesting), because most people who use Solaris use it on Sun hardware. Some things are a bit unclear - they seem to say they got the Solaris box from Sun, even though Sun don't sell Intel-based boxes themselves - they get OEMs to do that. (Actually, they correct that later, saying that Sun brought in a Dell PowerEdge box.) They don't say when they got the box, but they did mention Sun's Project Cascade (think Samba for Solaris) without mentioning that products from it are now available (well, availability was announced a few weeks back, though I don't know about x86 versions).

    They gave Solaris (on Intel) a D on RAID due to lack of support for PCI cards (not sure how fair that is), which is kinda funny when Solaris on SPARC has about the best and most reliable RAID setup out there, according to people I've talked to. (NetApps were also highly praised, btw.) They then criticize Sun for being 'expensive' (the hardware is, sure) when they were not even testing Sun hardware, while Solaris itself is actually very cheap for a commercial OS. (NT is only cheaper than Solaris when your NT box has no clients.) They then have contradictory stuff about Solaris - stuck in the datacenter on some pages (the main ones), while on other pages (the Solaris-specific ones) they give a different picture...

    Btw, in the final page about Solaris they mention a report from the Standish Group, but they don't give a URL for it. It's available here - Solaris vs NT [standishgroup.com].

  • If you follow the link to the PC Week lab notes

    I couldn't see these lab notes anywhere. Could you post a URL?

  • You'll find this interesting quote : "To isolate the disk subsystem as a bottleneck, we created a temporary RAM disk to hold workload files, effectively eliminating the need to hit the RAID array for data.

    An interesting quote in itself. Is the NetBench data set really so small that you can hold it in RAM on a 2GB workstation? Seems very unrealistic.

  • Linux is faster at running a new process.

    That may be, but at some point the overhead of starting a new process for every request will kill you anyway. Under Apache you can use PHP [php.net] or mod_perl [apache.org]. With these there is no need to fork or compile a complete Perl program for every request.

  • Killing threads is a very bad idea

    On Linux it's no worse (for system stability) than killing any other sort of process. In fact under Linux a thread is very like a process, except that context switching is faster because the threads share the same virtual memory map.

    Therefore (IIRC) it is impossible to kill a thread from another process

    Not true for Linux. (See the clone() sketch below.)
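    To illustrate the point that a Linux thread is essentially a process sharing memory, here is a small hedged C sketch (Linux-specific, mine rather than the poster's) using the clone() call that LinuxThreads is built on:

        /* clonedemo.c - a "thread" on Linux is a process sharing our VM.
         * Compile: cc clonedemo.c -o clonedemo */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static int worker(void *arg)
        {
            *(int *)arg += 1;   /* visible to the parent: memory is shared */
            printf("child pid=%d\n", getpid());  /* ...but it has its own pid */
            return 0;
        }

        int main(void)
        {
            int counter = 41;
            char *stack = malloc(64 * 1024);
            if (!stack) return 1;

            /* CLONE_VM: share the address space, like a LinuxThreads thread.
             * The child still has its own pid and can be signalled/killed
             * individually, which is the point made above. */
            pid_t pid = clone(worker, stack + 64 * 1024,
                              CLONE_VM | SIGCHLD, &counter);
            if (pid < 0) { perror("clone"); return 1; }

            waitpid(pid, NULL, 0);
            printf("parent pid=%d sees counter=%d\n", getpid(), counter);
            free(stack);
            return 0;
        }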

  • Has anybody else noticed that the Ziff-Davis Editor's Choice awards go to the company with the most full-color, full-page ads? By a remarkable coincidence, this has also been reflected in the benchmarks ZD has done over the years.

    I'm afraid Linux has a long way to go to catch up to Windows in the EC category!
  • Have you ever thought that people don't want to set up every detail of their benchmarks to match the strengths of Linux? There have been so many benchmarks proving that Linux scales down well that it's a fact. Why is it some great crime to prove that Linux blows goats when scaled up?
  • Channel bonding is already implemented in the Beowulf patches to the kernel (somewhere at cesdis.gsfc.nasa.gov). It does round-robin between a number of ethernet adapters.

    How about the SMP scores in the test? It seems that they rated the systems by how much each CPU was occupied... NT is famous for hot-potato'ing, swapping processes between CPUs for no reason whatsoever (other than to pollute the L2 cache). So it scores high. Linux gets the job done without using that much CPU, so it scores low. What a strange world this is...

    I don't mean to dismiss the results of the benchmark as fake. But there are problems with these benchmarks. Anyone can configure systems to perform in any way relative to each other. A benchmark can be made to show anything one wishes to show. We need to see the technicalities behind these tests. It would be great if ZD and the others had links to a technical description of what they did, some page where they weren't afraid of mentioning words that don't rhyme with ``icon'' and ``click''.

    Oh, one last thing: the scorecard says RAID support, and Linux scores low. Well, if it had said hardware-RAID support, it would probably have been true. But today, with Ingo Molnar's software-RAID patches, Linux outperforms any hardware-RAID solution for a fraction of the cost.

  • A few things bother me about this review.

    a) RAID controllers. IIRC there are some RAID controllers which work beautifully in Linux and others whose drivers are in alpha (such as the one Mindcraft used...). Does anyone know which ones are which, and where the one ZD used fits in?

    b) Throughput for Linux peaked at exactly 200Mbps. Anyone else find that suspicious? As if they only had 2 NICs going in Linux? Why on earth should the kernel choose such a nice round number at which to level out?

    c) Static pages. This has been mentioned before, but it's very pertinent. The only thing that counts is dynamic content. Anyone know how the Apache mod_asp performs?

    d) Multiprocessing on i386. I'm sorry; when you're spending $20k on a computer, you buy a Sun Ultra 60, run Solaris on it, and end the question there. Intel machines suck at high-end multiprocessing. And Linux will kick anyone's ass on a dual box :)
  • When I said the article was not that bad, I was talking about the things they said about Linux. I don't believe for a moment that NT could beat Solaris, NetWare and Linux. I thought I'd better clear that up.
    --
  • When they say makefiles, they're probably not talking about makefiles. It's a bit like how, when they said freeware, they were not really talking about freeware.

    What they were probably doing is going through all the configuration files looking for things which might tweak performance. Perhaps when they did this they made a few mistakes and actually hindered performance. But I've rarely had to mess with makefiles. When compiling the kernel you do a make (x|menu)config and you shouldn't have to alter the makefiles for that; most software comes with a configure script that generates the makefiles for you, etc. The only time I've had to mess with makefiles is when I'm creating them for software that I've written.

    I never noticed that particular hole in the article when reading through it quickly.
    --
  • I don't think the article was really that bad. It acknowledged that RedHat was not the only Linux distribution (even though that was the only one they tried) and it referred to Linux by its kernel version rather than the version of the distribution. OK, it made Linux sound more difficult than it really is, but let's put it this way: if you're using Linux as a network operating system, you should be paying staff who know what they're doing, not people who go for, let's say, NT just because it's easy for them in the short term (although they have problems later).

    My main annoyance is the use of the word FREEWARE when they mean free software or open source. Freeware refers to anything free of charge - including binary-only software. Linux can be freeware in a sense, but it can also be distributed value-added (i.e. a boxed-set distribution with support and printed docs). People who hear that Linux is freeware can then be confused when they see it on sale in a shop.
    --
  • threads are much less problematic than select() actually.
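    For contrast, here is a minimal sketch of the select() style being referred to (a single process multiplexing all clients; an illustrative echo server added for concreteness, not anyone's production code):

        /* selectecho.c - one process, no threads, many sockets. */
        #include <netinet/in.h>
        #include <stdio.h>
        #include <sys/select.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int lfd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in a = {0};
            a.sin_family = AF_INET;
            a.sin_port = htons(8081);     /* sin_addr zeroed = INADDR_ANY */
            if (bind(lfd, (struct sockaddr *)&a, sizeof a) < 0 ||
                listen(lfd, 64) < 0) { perror("bind/listen"); return 1; }

            fd_set all;
            FD_ZERO(&all);
            FD_SET(lfd, &all);
            int maxfd = lfd;

            for (;;) {
                fd_set r = all;           /* select() clobbers its argument */
                if (select(maxfd + 1, &r, NULL, NULL, NULL) < 0)
                    continue;
                for (int fd = 0; fd <= maxfd; fd++) {
                    if (!FD_ISSET(fd, &r))
                        continue;
                    if (fd == lfd) {                   /* new connection */
                        int c = accept(lfd, NULL, NULL);
                        if (c >= 0) {
                            FD_SET(c, &all);
                            if (c > maxfd) maxfd = c;
                        }
                    } else {                            /* client data */
                        char buf[512];
                        ssize_t n = read(fd, buf, sizeof buf);
                        if (n <= 0) { close(fd); FD_CLR(fd, &all); }
                        else write(fd, buf, n);         /* echo back */
                    }
                }
            }
        }

    All the state lives in one hand-managed loop, which is what makes select() code fiddly and per-connection threads (or processes) easier to reason about.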
  • I do have an NT system that runs for months at a time. The thing is that box only does DNS resolution.

    The really beefy NT boxes I have that run other services (file, database, web) crash within two days due to memory leaks, illegal instructions, crashed services (see IIS4, MSSQL) -- I can go on and on and on...

    You don't normally have battery-backed NVRAM in your main memory, only nanoseconds away. DG produced an NVRAM VME card, but it never really took off. I've got 2 A1000 RAID arrays here, and the RAID 5 write performance often outperforms the read performance.
    Once the blocks are written to the NVRAM, the write is done as far as the host is concerned.
    The CPU with software RAID really takes a hammering when you lose a drive and have to recreate each block on the RAID 5 set. You then take a larger system hit rebuilding your RAID set after replacing the failed drive. With hardware RAID that processing is offloaded to the RAID controller.
  • I'd like to see just how well NT would do serving NFS.

    We're doing this at work, and it sucks rocks. Not in terms of speed, but in terms of actually implementing some semblance of Unix semantics. I don't know which NFS server is being used though; there might be better ones.

  • To a first approximation, no-one uses Unix clients.
  • by David R. Miller ( 4879 ) on Monday May 10, 1999 @04:27PM (#1898887)
    First, Beowulf clusters do not provide high availability to standard data networks. They cannot be used to improve Linux performance in this application.

    Second, they probably chose multi-processor systems to run the benchmark because multi-processor systems are typically used by IT shops in this role.

    It is no use complaining that they should not use a particular platform configuration just because Linux does not run well on it. Linux must instead be improved so that it can work well on the platform of choice.
  • I read the dead tree edition of this article yesterday (with sidebars spotlighting a user of each NOS, and reasons why they went there), and it's a pretty solid review. On lower-end hardware, Linux blows the doors off NT, but at this point, NT runs faster on the king-size industrial hardware (like the boxes they tested on). Also, ZD tested a RedHat 5.2 distro upgraded to the 2.2 kernel - not a lot of stuff is optimized for the newer kernel. Off RedHat 6.0 or one of the other 2.2-based distros, the numbers would probably be somewhat better.

    That said, it's obvious that the next step for Linux is better "enterprise" hardware support, and easier configuration/tuning for the non-wizard. The configuration issue has been at the top of people's lists for a while, but it's not solved yet (I suspect because so many of the developers can configure from a text file in their sleep). NT does nothing truly well (it's a decent desktop OS, but that's about it), but in a benchmark environment where stability isn't measured, it does nothing too badly, so it scores well. In my experience (YMMV), when running NT in a pretty vanilla software environment, on Compaq hardware, with only a task or two per box, it's pretty stable (no crashes in day-to-day use, reboot to defrag memory every month or so). Of course that's not how Microsoft positions it, or they wouldn't sell the BackOffice suite as a single SKU. When you run all the BackOffice components at once, it's gonna crash, and crash hard & often.

    NetWare, for pure file and print services, is still a really fast engine - NLMs suck hard and it'll be a while before you see NetWare services rewritten in Java, but their Java interpreter is pretty good. They've also worked hard on tuning their web server for performance, and its integration with NDS is a pretty slick feature. The only thing I wasn't clear on from reading the benchmark specs was which file system they used - their older FAT system (which is real fast if you have the RAM, but pretty risky in a crash) or their new journaling file system, which I don't believe is quite as fast yet.

    As for Sun, this is the first real bench I've seen of their Intel version - hopefully Sun doesn't keep ignoring it in favor of the SPARC version. Solaris, with better hardware support, could be quite a nice NT killer in the server space.

    All in all, it was a pretty balanced review that did a good job of highlighting strengths and weaknesses both. It'll be interesting to see how the vendors react.

    By the way, in the same issue PCWeek also reviews Win2K Beta 3. In a nutshell [zdnet.com]: The Workstation version is pretty close and pretty solid - the Server version sucks eggs.
  • Simply put, they only looked at Intel Solaris. For the same price as that quad Xeon box, I'd like to buy an UltraSPARC and then redo the tests. And then try to give any of the other OSes anything above an F when grading scalability - I don't think so. They handicapped the best competitor when they did the tests, and you wonder why? Who out there would honestly spend $15 grand on an Intel box to put Solaris on? No one, that's who.
  • I think we'd run into a problem there. They can set up a Beowulf cluster for cheap, but porting the software to be Beowulf-aware is a bigger task than they're going to take on.
  • I hunted down and found the posting where Linus considers making a kernel change based on an informal benchmark [deja.com]. My memory was a little hazy obviously--I thought Linus said he fixed it, but actually, he just states that fixing it should be easy enough, and that he is considering fixing it.

    Anyone know if this particular problem (which probably only makes Linux look bad on benchmarks, mind you) was fixed?

    - Sam

  • When some benchmarks were done vis-a-vis Linux and NT a year ago, someone found that Linux creates new processes with fork() almost as fast as NT creates new threads. The article in question is in the same thread as this one.

    To access threads with Dejanews, username cypherpunks, password cypherpunks.

    - Sam

  • The link is here: Apologies for not having the link in the submitted article (use preview, boys and girls)

    - Sam

  • Articles like this, which show some potential weaknesses with Linux, are excellent guides for the developers to continue refining the already excellent OS that Linux is.

    It pointed out that:

    These kinds of benchmarks, although unpleasant to read, have worked to improve Linux in the past. The fact that Apache no longer attempts to perform a slow gethostbyaddr (reverse DNS lookup) operation every time someone requests a web page is the result of benchmarks showing NT web servers beating the socks off of Linux web servers that did this inefficient operation.

    The web page tunelinux.com [tunelinux.com] is the result of the much-discussed Mindcraft study.

    Linus fixed a problem with Linux yielding threads when an informal benchmark showed that NT was much faster at yielding threads in a tight loop (a reconstruction is sketched below). Of course, this being a Usenet test, a long flame war started over whether the test was legitimate. Linus had the very mature comment that "Anything that could objectively make Linux look bad should be fixed" (or words to that effect).

    My only objection to these ZDNET studies is that they do not always explain their testing methodology in sufficient detail. As long as their story [zdnet.com] explains their testing methodology, these articles should be studied by the developers with a fine-tooth comb.

    - Sam
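    For the curious, the "yield in a tight loop" microbenchmark described above is easy to reproduce; a hedged C sketch (my reconstruction of the idea, not the original Usenet code):

        /* yieldbench.c - time sched_yield() in a tight loop.
         * Compile: cc -O2 yieldbench.c -o yieldbench */
        #include <sched.h>
        #include <stdio.h>
        #include <sys/time.h>

        int main(void)
        {
            const int n = 1000000;
            struct timeval t0, t1;

            gettimeofday(&t0, NULL);
            for (int i = 0; i < n; i++)
                sched_yield();   /* give up the CPU; rescheduled immediately */
            gettimeofday(&t1, NULL);

            double us = (t1.tv_sec - t0.tv_sec) * 1e6
                      + (t1.tv_usec - t0.tv_usec);
            printf("%.3f us per sched_yield()\n", us / n);
            return 0;
        }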

  • One NOS conspicuously missing here is MacOS X. What with all the hubbub, it seems to have died out already. Why hasn't it been reviewed anywhere?

    I'll bet we start seeing FreeBSD reviews in the trade press by sometime next year.

    Is anyone else sick of these 4-CPU, 4-Ethernet, file/web server on a full-duplex 100Mb switch, static-HTML "benchmarks"? Doesn't this seem almost childish?

    Oh well. It's time to pack it in, take slashdot down and reformat for NT. Linux just can't cut it, ZD Net says so.
  • Wasn't it ZDNet that, in the December/January timeframe, ran 3 Linux distros against NT and found Linux to be a far better performer? Am I the only one who remembers cheering over that one?
    (methinks it was a single-CPU test, tho)

    Anyone got a link?
  • http://www5.zdnet.com/products/stories/reviews/0,4161,387506,00.html [zdnet.com] Not a totally clueful article (parts of it are actually quite scary - "Apache for OpenLinux is superior to Apache for RedHat", etc.), but for those suggesting they run single-CPU benchmarks to see Linux shine, it's a start. It's funny there's been no mention of this reviewer's findings since.
  • Too many of us in the Linux community are in self-denial mode and therefore out of touch with reality. The reality is that Samba, Apache and the Linux kernel still need work. Please don't believe this crap about biased benchmarks (although I agree the Mindcraft methods were very questionable) or faulty tuning. This is the second benchmark I've seen with Linux consistently coming in last place on enterprise-level benchmarks. There are many areas in Linux that are and will be improved upon. Instead of crying 'foul', we should instead seek to learn from these benchmarks and improve our code.

    From what I've read these past few months, threads under Linux seem to be somewhat problematic, or else the Apache team would be using them. This is another area that will be improved upon in the upcoming months. Instead of sticking our heads in the sand, let's identify what needs work and improve upon it. This is Linux's strength. Three months from now we can run the benchmarks again and see a drastic improvement, or else we can just keep on coding until we get it right. Really it's not a question of 'if' but one of 'when'. We are in control of our own fate, as we have the source :).

  • NT does have a high overhead cost for processes. But in terms of threading, I think it might be better than Linux. In the web development that I do, Java servlets, this can make quite a difference. It might be a good idea to make the Linux kernel's multithreading better - maybe by modeling it on the BeOS threading model.
  • Excellent point. For some reason, most businesses insist on having the newest and most expensive hardware available. Linux was designed to not need it.

    The media hype surrounding the fact that Linux is free has caused many to ascribe features to it that it really does not have, and won't for a long time. Linux was written by a college student for his personal computer, essentially (yeah, I'm probably oversimplifying it). The fact that it has been adapted as well as it has to high-end systems is a testament to its fundamentally sound nature, as well as the superiority of the open-source development model.

    What I want to see is one of these benchmark tests against one of the BSDs. I have a slight hunch that a FreeBSD box could kick the tar out of NT.

  • I'd like to see one of these benchmarks run for a MONTH. 10x penalty for time taken to reboot servers. We'd see how badly the memory leaks in the OS would degrade performance over time.
  • Remember, this is Solaris 7, running on Intel hardware. We're not talking about the killer Sun Enterprise machines here, with 64 CPUs and a coupl'a terabytes of RAM. Solaris does actually run on Intel hardware, and interestingly enough, is less expensive than either NetWare or NT. In the process, it comes out smelling pretty good, even in a ZDNet benchmark.
  • > Why is it some great crime to prove that Linux blows goats when scaled up?

    Only that in all these benchmarks, the testers are afraid to think differently. They think the world revolves around Quad Intel boxes. Do you carry a tool box with nothing in it but a hammer? True, a screwdriver stinks at driving nails, but have you ever tried getting a screw out with a hammer?

    The point is, perhaps all these benchmarks are going at the problem all wrong. Perhaps there are ways where Linux IS faster, and cheaper, and more reliable.

  • What the hell do you mean nothing to configure?

    I think you need to spend some time learning what SysV is all about... and some serious time learning the LINT options for your kernel's tunable parameters. There is plenty you can tune in Solaris. Just like every other Unix...
    ---
    Openstep/NeXTSTEP/Solaris/FreeBSD/Linux/ultrix/OSF /...
  • Not corporate enough? Weird assertion there... FreeBSD in many cases is a lot more appealing to corporations simply because it's not GPL'd. I have seen much upper management balk at the GPL simply because it is so damn long and wordy and contains certain phrases (I wish I knew what they were) that make them visibly cringe.
    ---
    Openstep/NeXTSTEP/Solaris/FreeBSD/Linux/ultrix/OSF /...
  • Good grief. I didn't know Linux was so hard to manage over a network, and that I have to cobble together perl scripts that write logfiles to a shared volume to monitor my pile of machines.

    I guess I should free up some space and get rid of all those SNMP agents I have running and scrap the NSS and PAM stuff that unifies configuration and lets the system participate transparently in things like NT domains.

    It's going to kill me to decommission those old Pentium Linux servers I've got running and replace them with NT boxes. They seemed to be running so nicely these six months since I last booted them.
  • by Lazy Jones ( 8403 ) on Monday May 10, 1999 @03:42PM (#1898908) Homepage Journal
    A fair way to pit OSes, hardware and server software against each other would be some sort of "tuning competition" (i.e. the Formula 1 of computing, but not quite). With several disciplines such as static/dynamic web serving, file serving, and scientific computing (with well-defined tasks), and several price ranges for the total system cost, it'd be interesting to see how things turn out. After all, the benchmarks performed by magazines, and those paid for by vendors who try to make the competition look bad, can never be fair, because the interest in tuning a particular platform is never high enough (and sometimes deliberately low).

    Who has the guts to organize and/or sponsor such an event? Magazines would be welcome.

  • by Angst Badger ( 8636 ) on Monday May 10, 1999 @03:04PM (#1898909)
    I wouldn't say that this was a bad review, especially considering that ZD would have dismissed Linux out of hand scarcely a year ago. Linux is harder to configure than NT, tuning information is a lot harder to find, finding a patch to match your kernel revision is an unholy pain in the butt, and you do need more competent staff to administer it than you do with NT's point-and-drool interface. This is not news. To their credit, they did say -- essentially -- that the higher learning curve associated with Linux is repaid by greater power and flexibility. Considering what a big corporate lackey ZD is, that's no small admission.

  • Why, when people are interested in "enterprise" web servers where their only benchmark is total throughput, do they benchmark Apache? I know that's not what Apache is for, you know that's not what Apache is for, the Apache developers openly proclaim that's not what it's for.


    I know that, you know that, ZD probably knows that. But which web server is running over 50% of all websites? That's a standard if I ever saw one. Zeus needs to get out there and start actually marketing their product.
  • You mean it's not fair to test every OS on the same hardware? God forbid we don't RIG the tests to favor Linux. Besides, isn't "we're the cheapest" Microsoft's slogan? (based on their mystical TCO numbers).

    Free speech, not free beer.
  • I don't get this... What is it about NT that automatically turns otherwise perfectly qualified admins into idiots? Could it be that a fundamentally unstable operating system requires constant supervision to keep it from falling over?

    Microsoft's own numbers show that 15% of NT users experience a BSOD more than once a month.

  • > I've seen in many articles about Linux stating that there are no applications taking advantage of the Linux SMP.

    Piffle. Apache runs multiple processes, and as such, benefits tremendously from SMP. Anything multithreaded (like Squid) will also benefit. And even single-process single-threaded apps will see response increases when the OS itself is multithreaded -- which Linux barely is. Probably further along than Netware though ... until recently, their OS technically didn't even MULTITASK.
  • "I speak good English" is also common and perfectly correct. The English you speak is good in quality. Talk is merely a synonym for speak. Word choice is a little off, but it's quite correct, technically.

    I give you a C as a teacher ... at least you gave him a passing grade :)
  • by Pac ( 9516 )
    Forgetting Linux for a moment, how dare they rate Solaris worse than NT? How can they say with a straight face that the cream of Unix is worse than NT?

    This alone is enough to dismiss the article as worthless.
  • >>sadly BSD's are not currently in the position to threaten NT's dominance... The real sad thing is that MS is already a heavy BSD user with Link Exchange [linkexchange.com], and people like Yahoo [yahoo.com]
    depend on FreeBSD [freebsd.org] to make their operations not only cost-effective, but runnable & stable!
  • by jetson123 ( 13128 ) on Monday May 10, 1999 @04:50PM (#1898922)
    ZD's test suffers from the same problems as many others.

    For example, the ability to serve lots of hits per second on static web pages from a single box has no relevance to real-life web sites. At 1000 hits per second, a single Linux machine can serve about as many hits in these benchmarks as the whole Microsoft web site receives (arithmetic below). That seems more than enough, and it's clearly not where real web sites are hitting their limitations (Microsoft uses dozens of machines for their web site). I think the reason Microsoft likes this kind of benchmark is that it's easy to tune the OS for, even if it has little impact on actual web operations.

    Also, the importance of SMP is overrated: the need for SMP on NT and some other systems arises often simply from licensing and system management issues; in many server applications, separate machines are preferable.

    The benchmarks also don't take into account cost/performance. ZD claims "NT excels in NetBench". But actually, it only does 50% better for a price of at least $800 more. For that amount of money, you can buy another Linux machine and double Linux performance.

    Most importantly, however, I think it is wrong to consider Linux, Solaris, and other UNIX systems to be "competitors". People can (and do) run mixed UNIX environments. For example, I might use Linux for all the web servers and an AIX machine for running a DB2 enterprise database that backs it all. Using Linux means there are lots of directions to grow in and lots of compatible commercial vendors to choose from.

    If I develop for NT, I'm stuck when NT runs out of steam on its measly 4 or 8 processor Intel boxes, or when it runs out of its 3G address space. With NT, there is nothing to upgrade to.

    Linux clearly isn't for everybody or everything. Only Microsoft seems to have the hubris to think that a single OS (theirs) can work well for everybody. Linux is part of a family of operating systems from different vendors that are interoperable and mostly compatible, and that only as a group cover most needs from embedded systems to mainframes. But within its own niche - R&D desktop applications, server farms, and small to medium servers - Linux is actually quite good.
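    For scale, the arithmetic behind the 1000 hits/second point above (my working, using the poster's figure):

        1,000 hits/s x 86,400 s/day = 86,400,000 hits/day (~86.4 million)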

  • Yeah! (pointless-disgruntled-ex-employee-rant follows, feel free to totally ignore it)... Another example of a high-traffic website using Linux (Redhat, to be exact) is Wells Fargo [wellsfargo.com]. With their online banking, loan apps and internal access (at least when I was there), I think it holds up pretty well... What is ironic (to me) is that at one particular call center (I can't say where, but it is in the NW), the internal network is on NT, and when I worked there it crashed at least 1-2 times a day, sometimes more. The lowest bidder won when it came to deciding who and what the new network was set up on. I'm done now.
  • The ZDNet people laid out their criteria as if they were users of Windows: ability to use SMB, "ease" of setup, "ease" of optimization, application support (who needs a word processor on a server?), etc. I give 'em credit for running a test that includes multiple OSes, but this test has the validity of, say, myself comparing spoken languages.

    English ... A
    I talk good English
    French ... B
    some of my friends have been to France
    Spanish ... B+
    I heard two people speak it once, and they seemed to understand each other.

    etc etc etc etc

  • I was looking forward to (finally) seeing FreeBSD pitted against the "whole gang": Linux, Solaris, NT. Like Linux and Solaris, the userland applications would be mostly the same - Samba and Apache - leaving just the kernel for comparison.

  • by noom ( 22944 ) on Monday May 10, 1999 @04:21PM (#1898940)
    Why is it that EVERY time a benchmark comes out which claims that Linux doesn't perform as well as other OSes, people claim that it was because of biased testing? Perhaps people had justification for bashing Mindcraft's tests, but this evaluation seems to have been done very well.

    Even Linus says that, thus far, Linux has been developed with stability and maintenance in mind, not necessarily raw performance. Also, for the most part, Linux developers haven't had the resources to spend on enterprise-class servers for use in performance testing and coding. This is probably why Linux always seems to be the best performer on relatively inexpensive machines -- it has been developed and tuned almost exclusively on them.

    I think most people agree that Linux has a long way to go before it will be the best (performance-wise). The fact that it is GPLed will certainly help, but we need people (companies) with the resources to spend on developing Linux with a goal of performance. It will probably take some time before Linux coders stop playing catch-up (i.e. trying to support all the devices and functionality of other operating systems) and start working hard on optimizations.

    Frankly, I'm not even sure that a "bazaar" model of development can support this goal. In many cases, when you are writing code (esp. systems code) with a goal of squeezing the best possible performance out of it, some of the most effective optimizations are nearly incomprehensible to people who haven't spent months examining all of the subtle interactions which make the optimization so effective. Since I doubt that Linus wants a kernel filled with magic that only a few wizards understand, such optimizations may never make it into the kernel (unless the kernel forks). These are the things which turn into debugging nightmares later on. I'll bet that both the speed of NT compared to Linux and its notorious instability are because of this.

    Incidentally, no flames please. I've been running Linux exclusively on my machine for a couple of years now; that means none of that "well, I still boot Windows occasionally to run games" crap either. I just think that we should examine these published benchmarks for valid points and see what we can do to improve our scores. This doesn't necessarily mean benchmark-specific tuning (which is what most companies do) either. It's just that screaming "FUD!!!" doesn't accomplish anything. Hopefully, in a couple of years, Linux will be so ripped that it will be difficult for someone to de-tune Linux to make other OSes appear better.

    -nooM
  • by flatrbbt ( 25980 ) on Monday May 10, 1999 @08:11PM (#1898943)
    What did you expect? That Linux would win?
    Remember the phrase: first they ignore you, then they laugh at you, then they fight you, then you win.
    Well... this is the fight part. And it is war...
    Don't expect it to end anytime soon.
  • by bragi ( 26771 ) on Monday May 10, 1999 @09:42PM (#1898944) Homepage
    > If Novell supplied NetWare with a real SMP
    > kernel, NetWare's performance would be
    > show-stopping.

    NetWare 4.x and older really had a problem with SMP, especially if Maximum Service Processes was set too low, but NetWare 5.x is a different kettle of fish. Its SMP is very damn good.

    > Unfortunately, in its current state, NetWare
    > leaves a lot to be desired not only in
    > scalability but also in application support.

    Netware 5.x doesn't have many applications ported to it, unless you count such small things as Oracle and Notes.

    > Couple this with Novell's decision to divorce
    > great applications such as ZENworks and Novell
    > Directory Services from NetWare, and the value
    > proposition for NetWare becomes even murkier

    I'm sorry, NDS isn't part of NetWare 5.x? Or even 4.x?????? Did these people even install this product? ZEN is bundled with NetWare 5.x [admittedly without the Helpdesk or Remote Control functionality] as well, and does a damn fine job. Heck, it's even bundled in the latest Win32 client d/ls.

    The only thing I'm disappointed with in NetWare 5.x is the fact that we still don't have a decent open source client. Hell, even a closed source client would tide me over.

    This is not to abuse the excellent work of the people behind such wonders as NCPFS and MARS-NWE, or even Caldera for their client, but we really do need a proper NDS PAM plugin, and KDE/GNOME integration would be good ;-)

    NetWare -> Excellent choice if you're too chicken for Unix and haven't seen the light of open source. Y2K compliant, and has been for over a year.

    Unix -> Power. Flexibility. Scalability. UNIX is your friend. Naturally Y2K compliant.

    NT -> Lack of stability. Lack of Y2K compliance. Lack of Power. Lack of decent command line driven programs. Pretty though. "Polly wanna Cracker?". Excuse the pun.
  • by skajohan ( 29019 ) on Monday May 10, 1999 @03:03PM (#1898946)
    How come I did not read the word 'stability' even once in the article?
  • I'm tired, but the Linux review linked from this article said something along the lines of "Commercial software vendors would be wise to standardise on a particular distribution."

    This seems like quite a bad idea to me. It would fragment the OS further if, for example, your enterprise word-processing app only ran on Caldera but the remote administration package you needed was RedHat-only.
  • Test server:
    4 Pentium II/III CPUs
    4 Intel NICs
    RAID 5
    2GB RAM
    Apache/Samba/no kernel tweaks

    Why is it that all companies insist on picking hardware on which Linux performs the poorest? It seems our friends at ZD have been chatting with our friends at Mindcraft methinks (or perhaps M$ themselves).

    Despite the fact that we seem to compare more favorably in this study than we did in the Mindcraft study, there is an extremely important lesson we need to take away from this recent "losing in the benchmarks" experience: we need to take these deficiencies and turn them into future strengths.

    It was put very well by Linus himself, quoted in a previous poster's message: to paraphrase, anything that can be interpreted as a weakness in Linux by a media or testing agency must be improved. These are worthy pursuits, and if we keep at them at the rate the weaknesses are discovered (unlike our M$ friends), we will eventually surpass all other OSes in every respect that matters.

    We should probably place particular emphasis on improving our SMP code, because that's the area where we probably have the most to gain. All those other driver optimizations will only help us if by some luck the testing agency picks the same ones.

    Anyways, I hope everyone won't get discouraged over this recent benchmarking FUD. The acceptance of good things is not always an easy road.
  • I say, PHBs most likely won't even read articles like this, but skip directly to the "executive summary" or whatever, where they can see which OS was "the best".

    Doesn't this harm RedHat's business? Since they have a lot of cash right now, why not pay for independent benchmark tests? They could do all sorts of interesting tests, like Linux vs. NT on single-processor machines, or the "best solution for a fixed amount of cash" test.

    I even know about some lab that's particularly skilled at making your products look good in tests. Mindcraft-something, was the name, I think. The hats should give them a call.

  • by evbergen ( 31483 ) on Monday May 10, 1999 @05:53PM (#1898954) Homepage
    Erm... may I point out that Apache is, although not multithreaded in the sense of multiple threads in one process space, a multiprocess webserver? That means it takes full advantage of any SMP! I often find those buzzword-deep remarks about multithreading rather annoying, as the only reason IMHO it's hyped so much these days is that NT is so bad at IPC and at creating processes. Also, what bugs me is the silly idea in the article that Apache would need to fork() for each request. This is nonsense, as you can configure as many pre-spawned servers as you want! So their reasoning for why Apache would perform worse than a multithreaded server escapes me. As for CGI, forget it - that _does_ need a fork()/execve() for each request. Rather, go FastCGI...! That way application servers can be pre-spawned too, and reused between requests. Just my Hfl. 0.05... (A minimal pre-forking sketch follows below.)
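    For anyone who hasn't seen the pre-forking model the parent describes, here is a hedged C sketch (a toy illustration of the idea, not Apache's actual code; real Apache 1.3 adds accept serialization and dynamic pool sizing on top of this):

        /* prefork.c - N children forked once at startup take turns
         * accept()ing; no fork() per request. */
        #include <netinet/in.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define NCHILD 8

        static void serve(int lfd)
        {
            const char *r = "HTTP/1.0 200 OK\r\nContent-Length: 3\r\n\r\nok\n";
            for (;;) {
                int cfd = accept(lfd, NULL, NULL); /* blocks until a request */
                if (cfd < 0)
                    continue;
                write(cfd, r, strlen(r));
                close(cfd);
            }
        }

        int main(void)
        {
            int lfd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in a = {0};
            a.sin_family = AF_INET;
            a.sin_port = htons(8080);      /* sin_addr zeroed = INADDR_ANY */

            int on = 1;
            setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
            if (bind(lfd, (struct sockaddr *)&a, sizeof a) < 0 ||
                listen(lfd, 128) < 0) { perror("bind/listen"); return 1; }

            for (int i = 0; i < NCHILD; i++)   /* fork the pool once, up front */
                if (fork() == 0) {
                    serve(lfd);
                    _exit(0);
                }

            for (;;) wait(NULL);               /* parent just supervises */
        }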
  • I have a few other nits as well: the high score for ease of configurability of Solaris? There's essentially nothing you can configure in Solaris 7.

    What are you, nuts? If anything there is too much you can configure in Solaris. For starters:

    A collection of tuning papers and resources [sun.com]

  • "But today, with Ingo Molnar's Software-RAID patches, Linux outperforms any hardware-RAID solution for a fraction of the cost."

    Wait a minute. If you mean cost/performance, that is one thing, but sheer outperformance? You mean your software RAID patches are going to outperform my Compaq SmartRAID with 16MB of battery-backed cache directly on the UltraSCSI bus?

    No way. No how. Hardware RAID allows for some nice cache tricks that can increase speed and reliability.

    Software RAID has the potential for more flexibility, and certainly a better cost/performance ratio.

  • by gsaraber ( 46165 ) on Monday May 10, 1999 @03:27PM (#1898961)
    And again they are using 4 Intel Ethernet adapters, probably configured to do some sort of striping/load-sharing/whatever... I think that's what gives NT its edge in every benchmark.
    I think there's going to be a lot more like this... watch for the 4 Ethernets :(

    It shouldn't be too hard to implement in Linux: bind one IP to four Ethernets and send through whichever one is free...
