Linux Software

D.H. Brown Associates Attacks Linux

Scott Stevenson writes "A News.com article describes a study which dings Linux for poor SMP support, access to only 1 or 2GB of memory, and lack of clustering. Of course, it doesn't say anything about NT's uptime issues on a per-machine basis." The article also says that hard data isn't available on Linux being reasonably crash-proof.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    All right, time to start discrediting the article; I'll go first (maybe):

    So Linux can only address 2 gigabytes of memory? I haven't seen the source code for the memory management, but I don't see any reason why Linux shouldn't be able to address 4 gigabytes of RAM, just like WinNT allegedly can. As for the 4-gigabyte limit, that's NOT Linux's fault: 32-bit Intel processors can only directly address 2^32 bytes (4 gigabytes) of RAM, though they can address up to 64 terabytes of virtual memory.
    Therefore, who knows what memory wonders Linux could be capable of on other processors?
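    As a quick back-of-the-envelope check (my own sketch, not anything taken from the study), the numbers being thrown around here follow straight from the pointer width:

        /* Address-space arithmetic for the figures discussed above. */
        #include <stdio.h>

        int main(void)
        {
            unsigned long long flat32 = 1ULL << 32;  /* flat 32-bit addressing: 4 GiB   */
            unsigned long long pae36  = 1ULL << 36;  /* 36-bit physical addressing: 64 GiB */

            printf("2^32 bytes = %llu (4 GiB)\n", flat32);
            printf("2^36 bytes = %llu (64 GiB)\n", pae36);
            return 0;
        }

    The 64-terabyte figure comes from segmented virtual addressing (roughly 16K segments of 4GB each), which is a different beast from what a flat-model OS actually uses.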
  • by Anonymous Coward
    Apple has never had a problem admitting that their platform isn't perfect for everything. Back when it was Apple vs. IBM, Apple openly advertised for people who could program an IBM System/38 (a minicomputer). When asked about it, they said that they needed it for their backend accounting software. None of their PCs had that kind of capability.

    As for using Sun servers -- Apple has no truly high-availability solutions, which is crucial for a website getting as many hits as theirs does. Furthermore, they have what amounts to years of work behind their Sun solution -- it isn't trivial to ditch an existing solution and start using a new one. And what web server are they going to use for it??? Apple may have a strong base OS, but they don't have a corresponding web server. Apache for MacOS X hasn't had the time in the field that is necessary before deployment.

    Remember when Microsoft jumped from using BSD for their web servers to NT overnight? It was a nightmare for them because of the amount of bad publicity while they worked out the details of the system. (Tuning, handling a distributed environment with all new scripts, etc.) I'd fear for Apple if they pulled a lame move like that...

  • 1TB regular RAM + 1TB I/O RAM. 64 processors as well, but I'm not sure how efficiently they're used.
  • I'd assume the testing and hard data would be included in the report you have to pay for. This is just a summary, so of course there isn't going to be lots of benchmarking in it.
  • Posted by TRF:

    Correct me if I'm wrong, but isn't it true that Windows NT is limited to 2GB of RAM and 2GB of virtual memory? This was a question I used to ask during technical interviews for NT administrators.

    Todd
    Every 45 seconds, another arrest for Linux. 695000 last year. It's time for a change.
  • Posted by Tony Smolar:

    All this means is that Linux is not ready to step up and take on the work of big-iron IBM/HP/Sun machines. Neither is NT (but people attempt it anyway). Even if Linux were technically able to take on the big iron, there is no service organization in place to back it up.

    1. Who out there is offering Linux downtime (uptime) guarantees, i.e. that the system won't be down more than x hours/year? A: Nobody.
    2. Can you contractually get a Linux service technician on your site within an hour of having problems, 24 hours/day, 7 days a week? A: No. You may find individual consultants willing to take this on, but what if they're not available when needed?

    This doesn't mean that Linux has no place in the enterprise. There's a lot it can do that doesn't require the level of technology and service needed for "big iron". BTW, were they testing 2.0 or 2.2 SMP?
  • This is a classic example.

    They argue that Linux isn't suitable as a replacement for massively scalable systems.

    But few, if any, are claiming that it is.

    Linux is primarily catching on as a workgroup, or light-to-medium duty web server, and is very well suited for such a role; better suited, in fact, than Windows NT.

    By changing the subject, and supplying a convenient straw man to knock down, D.H. Brown avoids having to confront Linux's strengths.

    If people decide they have a need for a massively-scalable Linux, it will come to pass. For the time being, to argue against Linux on this basis is intellectually dishonest at best, and out-and-out FUD at worst.

  • Linux lacks a journaling file system, which means that if fsck can't fix a disk after a power failure or system crash, you're pretty well hosed and must restore from the most recent backups. That's not good for availability. You want a log of reads and writes to disk so that a consistent image can be rebuilt after a crash. Database vendors realized this decades ago, and high-end Unices followed suit with their filesystems.

    Linux does SMP approximately as well as Windows NT, as of the 2.2 kernels. That's nothing to write home about, compared to Irix or Solaris.

    Linux distributions do not yet conform to a base standard filesystem or level of library functionality (LSB standards). That's bad for interoperability. One hopes it will be fixed.

    All of these can be seen as weaknesses. Or, if you are so inclined, they can be seen as goals.
    About the only thing the above are NOT is evidence that Linux is mature. Note that the BSDs are no different (AFAIK) in this respect, save for the standards issue (only one distro of each).

    Whether DH Brown was being misled or just didn't dig deep enough, they should not have said that NT is a contender. But the fact is that Linux isn't either, if you want high-end scalability *today*.
  • by sterwill ( 972 )
    SMP - I forgot where I read it - maybe on the FreeBSD site - but FreeBSD outperforms linux SMP by 17% - so obviously there's room for improvement, no?
    That seems backwards, at least on Intel machines. Could you provide a reference for this? Last I checked up on the latest FreeBSD docs, SMP was listed as needing much improvement.
  • The reported limit of 2GB for Linux is absolutely correct, as is the lack of a journaling filesystem.

    The SMP issues are a lot more hazy than that. It is true that there are few benchmarks at the moment, but nearly everyone who actually tries it has found Linux to be better than NT.

    As for stability, Linux wins hands down (benchmarks or not). All you need to do to prove that is run one of each system and see how many times you reboot. Personally, I've found Linux to be more stable than NT or Solaris (in my last job, we replaced Solaris with Linux/SPARC and cut reboots to a third).

    Fortunately, the shortcomings are being addressed. The ext3 filesystem (already under development) will have support for journaling and ACLs. The memory issue is under discussion now, to be tackled in 2.3. Several HA projects are underway; in reality, Linux supports HA now, it just doesn't have a hand-holding setup script for it.

  • I find it extremely odd that this DH Brown article points out several "facts" without explaining a few basic things.

    1) Linux doesn't do SMP well.
    What version of Linux are they testing? 2.0? 2.2? When it comes to SMP, there's a VAST difference.

    2) Linux only supports up to 2gb of RAM.
    In its current form, yes, Linux only supports 2GB of RAM. However, I'm willing to bet my left maple nut that 4GB is on the TODO list for 2.3.

    Beyond that, the article was mostly rhetoric and jargon designed to confuse and mislead people.
  • All of that is very true. However, that isn't going to help matters when that $1000 report is delivered to the executive VP at my work.

    That's why I think it's irresponsible for people like D. H. Brown to make statements about Linux vs. NT's reliability strictly based on feature lists. The conclusion of the report is "Linux is less reliable than NT", and that's all most suits are going to see. I'm sure I'm going to see this "study" quoted as evidence that a single NT box is more reliable than a single Linux box, something that is patently absurd.

    I'm not even convinced that a HA cluster of NT boxes is going to be more reliable than a single Linux box; I've seen too many examples of HA-enabled systems on NT (such as domain control and WINS name services) having lower reliability due to global failures. (For example, a corrupt WINS database becomes no less corrupt when it's replicated.)

    I would much rather have them talk about real high-end Unix's HA features as well as its high stability, NT's HA features and its low stability, and Linux's reputed high stability (anecdotal evidence is evidence, even if it's not that strong) and its lack of HA features. That, at least, would be honest, and would give people like me a leg to stand on when fighting the suits on NT deployment.
    Linus has stated that 2.2.x is limited to 2 gigs of memory. I'm not certain if he was referring to the ix86 or 2.2 in general. There is a 3-gig kernel patch for 2.0.x, but for a number of reasons the 3-gig hack doesn't work under 2.2. There's a fairly long thread on it on the kernel mailing list.

    In my experience, 2 gigs under NT is a waste of money. This is especially true when you're running a file server. It may be able to address it, but it sure as hell doesn't do useful caching with it.
  • Here are a couple of mailboxes to slashdot:
    At CNET [mailto] or at DHBA [mailto]; take your pick.

    --

  • On the reliability side of things regarding NT
    (sorry, this is a BIG bee in my bonnet)

    We have a Tolken Ringpiece network with OS pooh Lan Server, soon to be replaced by NT.

    At our site, where the NT "upgrade" to the network has been implemented, nothing works. No one can log on to the network. Never mind, NT is the "next big thing".

    My point? Don't spend all day in the pub and go on slashdot..... but seriously, what planet are these people on, touting NT's "reliability"? The ATM at one of the bank branches here runs NT and it crashed a couple of weekends ago. I took pictures :) It was dead funny. I'll get them scanned for you all to see.
  • reading comprehension: THERE IS NO COMPARISON.
    The *only* comparison they make is the amount of memory Linux and NT can use. Every other point in the article is "linux sucks", but no specific superior alternative is named.

  • In 2.0.37pre, up to approximately 3.5 gigs is supported (and I believe work is being done to support the kludgy stuff that lets Xeons do up to 64G in as sane a way as possible).

    Oh, and I believe Linux has been successfully booted on a 64-CPU UltraSPARC. That's certainly more than NT will do for a very long time. I know, using them efficiently is a very different matter, but what I do know is that people are using multiple 6xPPro Linux systems happily in heavy production use. It might not be optimal with > 4 CPUs (yet), but well, NT isn't either :)
  • No. We can accept criticism. But the people who are criticizing need to know what they're talking about.

    Though it's only a summary, with the associated lack of hard data, there are several holes and errors in there.

    They're addressing issues that either are already being worked upon, or are issues of the hardware platform being used, and not the software.

    Shoehorn Solaris onto SMP x86 and we'll see how well IT handles high end computing tasks.

    They're comparing several OS'es on myriad, completely different hardware platforms. That's apples and oranges the last time I checked.


    Chas - The one, the only.
    THANK GOD!!!

  • For one, it takes a LOT of money for a small company to "go out and test SMP systems themselves." Second, you guys would bitch if you didn't like the results, basically because it appears any criticism of Linux is heresy. Benchmarks like TPC-C and SPECweb96 are established industry benchmarks. They are developed by a consortium of vendors over a period of YEARS. This takes a lot of money and a lot of work.
    You want benchmarks? Then why don't you whining babies download some benchmark toolkits (I believe you can get your hands on the code for SPECweb96, etc.) and have someone maintain a credible repository for these results. Once you guys can demonstrate real-world performance, it will be easier to match up against the bigger guns. Then you can brag a little more.
    I am sure Tony Iams' panel discussion with Linus is going to be great. Tony is incredibly knowledgeable when it comes to operating systems (believe me, he tests them all extensively over and over again right in his office). I think they still have an AS/400 vs. NT report where he SLAMMED NT's capabilities. Funny thing is, M$ takes his analysis VERY seriously. I know this report was NOT an intentional ding on Linux. I know some of you have a hard time with real criticism, but that's the truth. I'd take it as more of a challenge to improve upon what Linux is now, and use it as a roadmap.

  • Linux is not the best OS in the world - YET. Maybe we should work more?
  • Default x86 Linux has 960MB physical memory support and 3GB user address space. A patch exists to up the phys memory support to 2GB, but that reduces the user address space to 2GB.

    The reason for this is that all physical memory is constantly mapped by the kernel and can therefore be accessed instantly without modifying the page table. Mapping it only on demand would allow more physical memory, but the consensus on the linux-kernel mailing list was that this would suck performance-wise.

    It may be possible in the future to use additional (unmapped) phys memory as a ram disk (which can be used as high priority swap).

    All this is not an issue for 64-bit platforms as the virtual address space for these is really vast. Although at above 4GB the problem arises that PCI cards without 64-bit addressing won't be able to DMA into/from memory above 4G. The solution is to use memory from below 4G (just as with ISA DMA from below 16M), but there is no kernel support for that yet.
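    To make the "constantly mapped" point above concrete, here is a rough sketch of the constant-offset scheme (modeled on the i386 __pa()/__va() idea; the exact constants and macros live in the arch headers, so treat this as illustrative rather than the real kernel code):

        /* Illustrative only: kernel-virtual <-> physical translation by a
         * fixed offset, as on i386 where all RAM is mapped starting at
         * PAGE_OFFSET (conventionally 0xC0000000, the top 1GB of the 4GB
         * address space). */
        #define PAGE_OFFSET 0xC0000000UL

        static unsigned long virt_to_phys_sketch(void *vaddr)
        {
            return (unsigned long)vaddr - PAGE_OFFSET;
        }

        static void *phys_to_virt_sketch(unsigned long paddr)
        {
            return (void *)(paddr + PAGE_OFFSET);
        }

    Because every byte of RAM has to fit into the window above PAGE_OFFSET in this scheme, a 3GB/1GB user/kernel split caps directly mapped RAM at roughly 1GB (around 960MB once the vmalloc area is carved out), which is exactly the trade-off described above.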
  • Comparing on Intel hardware? The other commercial Unixen in the article claim to have support for up to 128GB memory. Definitely not on Intel, I'd say.
  • They won't mangle the VMM, but Linus agreed to incorporate future patches to use unmapped memory as ram disk, which would not create an ugly, unmaintainable mess in the source code.
  • Hmmm... I thought that at some point Intel made it possible to do 36-bit memory addressing on the i686 processors. This would allow access to 8 GB of memory. Of course, I could totally be off my rocker.

    Andrew
    --
    ...Linux!
  • Yeah, we've got a bunch of workstations with 4 gig. They're used for big analysis projects.
  • Anyone who's done a significant amount of Win32 programming knows about Windows' "4GB limitation". It seems that our dear researchers have, yet again, listened to marketroids as opposed to doing Gasp! "actual research".

    The Win32 memory architecture has a 4-gigabyte "Address Space", which would seem to hint that an application can use 4 gigabytes of memory. Well, it could, except that the upper 2 gigabytes are "reserved for system use" and the lower (I think) 4 megabytes are typically not used, as applications are typically loaded at the 4-megabyte mark in their own personal "address space". The entry address is passed to WinMain() (the Windows application entry point, as opposed to main() on Unix) as the "Application Instance". It's normally 0x00400000, or four megabytes.

    But no, as an application, you can't access more than two gigabytes. That's all that's available in user space. Windows memory management does some significant voodoo in the background, but if the system has a true four gigabyte virtual playground, why can't an arbitrary application see it?
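    If you want to see the 2GB ceiling for yourself rather than take anyone's word for it, the Win32 API will report the usable range directly; a minimal sketch (nothing exotic, just GetSystemInfo):

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);

            /* On vanilla NT these come back as roughly 0x00010000 and
             * 0x7FFEFFFF: about 2GB of usable user address space, with
             * the upper 2GB reserved for the system. */
            printf("min application address: %p\n", si.lpMinimumApplicationAddress);
            printf("max application address: %p\n", si.lpMaximumApplicationAddress);
            return 0;
        }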

    Funny how they never said why Linux can only access 1GB "under certain conditions". Maybe when you only have one gig installed with no swap? :) Seriously, though. What about Linux on an Alpha or UltraSPARC or MIPS R4000 or higher? Still a 2 gig limit? I don't think so.

    I'm really beginning to hate the media and the web in general. Too many clueless bastards out there know how to type.


    The following sentence is true.
    The previous sentence is false.
  • On the 32-bit processors that I have experience with Linux on (i.e. Intel and PPC), the 32-bit address space is divided in two, with 2 gigs for physical RAM and 2 gigs for memory-mapped hardware.

    I would assume that 64-bit processors have a similar arrangement, i.e. 2^63 for RAM and 2^63 for hardware addresses.


  • Good day,

    I was at the BEA users conference in February, and at the advanced topics in TUXEDO session, the PM for that product announced that they were releasing TUXEDO for Linux sometime this summer. In fact, they are considering going one step further: giving away a single-user SDK license along with allowing you to download TUXEDO off of their website. This will let developers work on TUXEDO on their home machines, although load testing on such a setup is out.

    They started looking at this after buying Tengah and seeing that allowing the DL of WebLogic generated large amounts of interest. I for one was quite excited to hear the news, I can't wait to grab it!
  • Every OS has problems and limitations. Linux is not an exception to this rule. Although the information is a bit vague, developers should either buy the full report or use what information they have to improve the OS. Linux is a great OS and there is no reason for this report to be avoided because of pride.
  • However, D.H. Brown didn't assess either the operating system's cost or its stability, Iams noted. Though there is plenty of anecdotal evidence that Linux is fairly crashproof, hard data on the subject is missing, he said.

    Well, gee, if the purpose of this study wasn't to gather hard data, what was it? For a thousand bucks, I'd expect a little actual research. My box has been up for 157 days, primarily because I am too lazy to install the new kernel. I wonder if any NT box has ever reached that.

  • I think the idea of having a single file system span multiple disks in the way LVM does is flawed in principle. The LVM implementation itself also has some serious limitations (you can't shrink them, it gets difficult to predict their performance, it gets difficult to figure out which physical partitions and disks a file system actually depends on, etc.).

    I think there are better approaches to the same problem:

    • provide better quota systems to achieve similar hard resource limits that separate volumes give you, but on one file system
    • do something at the file system level rather than the volume level that lets files get stored on different disks/ partitions transparently; a kind of "concatenated mount"
    • use a RAID architecture that lets you add new storage dynamically
    • use a distributed storage architecture

    I like Sun's approach to LVM. In AIX, I have to deal with LVM whether I want to or not, and it really makes disk and system management significantly more complex.

    BTW, the CMU group that did AFS has come out with CODA, a successor that also offers some neat new features like disconnected operation. It's worth looking into. They are concentrating on Linux and NT clients and servers.

  • These supposed advantages don't hold up under close scrutiny:
    • JFS only makes transactional guarantees for the file system structure, not for the actual data.
    • JFS can't protect you against bad blocks, administrative mistakes (quite likely because of the really messy LVM on AIX), and other failures. So, you still need backups.
    • The overhead of JFS is much larger than "7-14%" in my experience. I have had cases where extraction of a tar archive with lots of little files took five times as long on a fast AIX machine (high-performance SCSI disk) as on a low-end PC with Linux (IDE drive).

    The risk of data loss with JFS is not hypothetical: despite all the journalling, I have lost data on JFS volumes and whole JFS volumes even without hardware failures or sysadmin mistakes. On the whole, JFS doesn't look any more reliable to me than ext2, but it sure is a lot slower. Why do you want to pay a big overhead for each file system operation if you can simply run a simple, efficient fsck at boot time on the very rare occasion that the system wasn't shut down cleanly?

    This is a particularly relevant question for AIX systems, because they are very stable. Left alone, AIX servers will run for months and years doing whatever they are doing. When they do go down, it's for hardware or software upgrades, which require some extended downtime anyway. Making the file system slow in order to save on an fsck under those circumstances is a bad tradeoff. And AIX machines boot horribly slowly anyway because of the way their SCSI subsystem is implemented: an AIX desktop workstation takes 8 minutes to boot, and large servers can take literally hours.

    Data warehousing applications usually use databases anyway, and those go to the raw disk for best performance (DB2 on AIX does). Those databases do their own transactional updates.

    Thinking that you can make individual nodes robust by twiddling with the file system is outdated mainframe thinking. The only robust, reliable way to safeguard your data is to use a distributed, redundant storage architecture. That way, you are protected against hardware and software failures. And you can concentrate on making the individual nodes fast and simple.

  • I hate to say it, but you're so full of it. NT with a year of uptime? Every NT server I have ever run that did anything had to be rebooted once a week, minimum. If it's unpatched, I know you don't have it on an open network.
    You can say NT is threaded, reentrant, modular, and has all the cool features you want, but when it comes down to it, it is not stable. And that is all that matters. Features are one thing that sells to business people; however, it is stability that makes a system great.
  • I hope that in Linus' upcoming debate against the representatives of DH Brown and Microsoft (isn't that happening in conjunction with Spring Comdex?), that he can set them straight on a few things.

    From what I've heard from a friend of mine who has a dual-CPU machine, the Linux 2.0.x kernels were on par with NT 4.0 for scalability. With the 2.2.x kernels, Linux scales noticeably better. I don't know anyone who personally has a quad-processor Linux box, but I've seen personally that NT 4.0 doesn't seem to get big improvements in performance going from dual to quad processors, and the anecdotal evidence I've seen is that Linux 2.0.x also scales better on four processors than NT 4.0, and that Linux 2.2.x is an even bigger improvement over 2.0.x for four processors than for two.

    DH Brown complains that there aren't a lot of published comparative benchmarks for Linux. But they are supposed to be a research organization. Why didn't they test it themselves? Why does news.com echo such unsubstantiated criticism without question?

    Instead they seem to increase the confusion by mixing the comparison between high-end *nixes that at this point are with little doubt more scalable than Linux (although Linux is gaining), with some of the feature comparisons (memory address size and journaled file system) of NT to imply that NT is also more scalable than Linux.

    As many people have probably already noted, a journaling file system is already under development for Linux (I believe it is being written by Stephen Tweedie). From my experience with NT, I'd have to say that its 'journaling' file system is certainly not on par with AIX's JFS or Veritas on Solaris either. I've lost data on NTFS due to corruption. I've never lost data under ext2 except when I've had the whole hard drive fail. I've also seen NT take as long or longer doing 'file system checks' which theoretically shouldn't be necessary under a journaled file system. I certainly have never seen that happen with AIX's JFS or Veritas on Solaris.

    As for the memory limitations, they just aren't a big deal for most applications. Linux is certainly on par with NT Server 4.0 here, and in all reality with a kernel recompile is on par with NT 4.0 Enterprise Edition. Frankly more than 2G of RAM (or 3G with a recompile) is about all that is realistically possible on x86 hardware. NT can't even take real advantage of 64-bit hardware yet, which is an area where Linux beats it hands down. Sure, the commercial *nixes beat Linux, but even NT 4.0 Enterprise Edition is an order of magnitude more expensive than even the most expensive Linux distribution without even comparing client licenses.

    I wish I had an SMP machine so I could run and/or write the benchmarks myself.

    Would I like to see what few limitations are currently in Linux lifted? Sure. Do they really matter to most people? No. Do I believe that the limitations in Linux will be lifted before the limitations in NT are fixed? Yes.

    The tone of the news.com article is unnecessarily negative. An article which concludes that Linux nearly matches costly solutions for a fraction of the price could easily have been written from the same (few) data points that were presented in the article.

  • You want benchmarks? Then why don't you whining babies download some benchmarks toolkits (I believe you can get your hands on the code for SPECweb96, etc.) and have someone maintain a credible depository for these results.

    Actually, I believe you have to pay to be a member of TPC or SPEC to get the benchmarks. In order for people to take your published results seriously you certainly have to pay a big-6 type accounting/auditing firm to validate them. And in order for them to be taken seriously you also need to have the money to buy high end hardware, or be high enough profile to get a vendor to loan you the hardware to do the testing.

    And it isn't Linux enthusiasts that are complaining about lack of benchmarks, it was DH Brown if cnet.com can be believed. You say that it takes a "LOT" of money for a small company to do the benchmarks, and you call us whiners because we as individuals can't come up with that kind of money? DH Brown is in a lot better position than we are to pay for numbers.

    Since we haven't seen how DH Brown's report is worded other than what we've seen reported on in other sources (because I for one can't shell out $1000 for such a report), it may be unfair to be critical of DH Brown, but you can certainly point at news.com for purposely biased coverage of said report. If DH Brown has the integrity you seem to believe they have, then they should be a little miffed if news.com is misrepresenting their findings if it wasn't their intent to ding Linux.

  • Though, if you look at the number of BSD boxes compared to the number of Linux boxes, it seems like a strong showing for Linux.

    With that many boxes, there are probably plenty that aren't configured very well, whereas there are far fewer BSD boxes, and there's probably a good chance that those sysadmins know how to set up their software better.

    Well, either way, Open Source is beating the pants off everyone else. ;-)

    P.S. I like your nickname. DragonBallZ reference right?
  • Yikes. FreeBSD has more than twice the average uptime of its nearest competitor, Linux (117 vs. 50 days).

    Linux, though, has the record for longest uptime ever (730 days, 14:16 minutes and counting)
    So much for commercial Unices being better (though no one has a BSDI box).
  • I fully agree with you. Some people were bringing up some HA HOWTOs as proof of Linux high availability, but you can't expect this study to rely on HOWTOs found on the web. The only serious sources of information they will respect are (big) companies. But I expect IBM, Compaq and HP not to give out numbers which prove the enterprise-readiness of Linux (if it exists, I don't know).
    This would render their own OSes needless. We need companies like VA Research, Red Hat or SuSE to give out _hard_ numbers and produce some missing tools.
    Who would like to put SAP's R/3 on a system which is not respected as high end (yes, it happened - NT - I know).
    By the way, are there any usable (non-ZDNet) benchmarks for Linux+Apache (mod_php+mod_perl) versus NT+IIS+ASP+VB?
  • You're absolutely correct, but testing things themselves wasn't within the scope of this report.
    This shows one weakness of Linux in this field. For instance, go to spec.org and do a search [spec.org] for operating systems.
    Results:
    linux: 0
    windows: 257
    out of 2314 records.
  • >Under heavier loads, NT will crash :)
    That's another point: _will_ it crash? Are there hard facts?
    It should be simple: combine SQL and IIS for dynamic web pages and fire up a hell of a lot of clients until it smokes. Why haven't I seen someone doing these tests?
    Is it true that under heavy load the NT "console" will become slow as hell and nearly freeze? Someone pointed out that this has something to do with the kernel scheduling...

    Back to the numbers, I actually have seen some, but they wouldn't be considered as hard facts from everyone due to their origin: http://perl.apache.org/bench.txt [apache.org].
  • Regular run-of-the-mill Windows NT can address up to 2 GB of memory per process (I'm talking virtual memory here); the other 2 GB (on Intel boxes) is reserved for the OS. The Intel x86 line can address up to 4GB of physical RAM.

    Now, Windows NT "Enterprise Edition" can address up to 3 GB of memory per process, with 1GB reserved for the OS, on Intel platforms. On Alpha platforms you can use what Microsoft calls Very Large Memory, or VLM, to get pointers into memory above the 4GB mark. Such memory is not paged memory, and you must actually have more than 4GB of PHYSICAL RAM to use VLM.

    Hope this helps.
  • if some microserf at nowhere.com says something stupid, why generate a lot of slashdot traffic to that site?

    if someone *has* to know exactly what the microserf said on Bill Gates' old partner's web site, a little work can find it.

    ergo, he's insightful not sleepy
  • "Missing from Linux are high-availability features that would let one Linux server step in and take over if another failed; full-fledged support for computers with multiple processors; and a "journaling" file system that is necessary to quickly reboot a crashed machine without having to laboriously reconstruct the computer's system files."

    Um, from my understanding of NT5/Windows 2000, they're getting rid of the PDC and BDC -- I'm not 100% sure on that, but that's what I seem to be hearing.

    Anybody got any comments?

  • Correction. It is not a summary, but the "Conclusions" chapter from the real report. I should be completely out of my mind to pay $995 for the other chapters that led to this FUD.


    BTW, how many CPUs are in Dave Miller's UltraPenguin box [linux.org.uk]?


    If they want to compare Linux's scalability with Solaris, then that's fine with me, because I don't see Linux running on a 10000 yet, but to say that NT has better scalability is laughable at the least.

  • actually, they mention "conventional unixes" as a superior alternative, at least in terms of some of the stuff they mentioned.
  • A year ago most of the world didn't even know Linux existed. Now the pundits are bending over backwards to show that you can still get a better solution if you've got a sufficiently large pile of cash. BFD. What are they going to say next year? We're winning, folks.
  • Then you probably haven't installed much software on any of them since they've been up, as most of the apps I've seen for windows require you to reboot after installation.
  • From what I know, Linux does make the most of the Intel hardware.

    Anyone running the kind of application that requires that kind of memory is going to be running non-intel hardware anyway, like you are.
  • Those have got to be some pretty hard-hitting servers. I know we've got some hard-working Sun boxes at work, but even they don't have that much memory.

    You've got to admit, though, that even for active servers on Intel hardware, 2+GB is rare.
  • Is this something that a lot of people need to be concerned with? I'm no expert but somehow I get the feeling that something like 99.9999999% of Linux, NT, Warp, Solaris, and other users are running far less than 2GB of memory anyway.

    This is kind of like the old Beowulf arguments. I just don't see a lot of people doing that, so why use it as a comparison of operating systems?
  • There is a long road ahead for Linux. To walk this road it needs all the [constructive] criticism there is. Would you rather see only rosy "Linux is just a wonder!" articles everywhere? Besides, many of the things mentioned in the News.com article are true.

    Flaming up whenever someone says Linux is not perfect does not do much good. Looking at ways to change such opinions is a much better approach.

    It would be nice, however, to look at the full report.

  • After all, it can't leave the "realm of toys" (i.e. the intel platform) without ceasing to be intel-based. What exactly is wrong with refusing to compromise with horrific kludges? After all, who can you think of that can afford systems with greater than two gigabytes of memory, but can't afford better base hardware?
  • Oh man. Who gave that post such a high rating??

    The author (D.H. sth) addressed commercially ready packages, not your self-made solution (fake, mon, some shell scripts). From that perspective, Linux lacks support for such features.

    Also, the author did not compare Linux to NT, but to other operating systems, such as Solaris. I think it was Linus himself who said not long ago that Linux' SMP capabilities will be comparable to Solaris in 3 years or so. It currently scales to double (maybe quad) boxes for IO bound processes. But that is all.

    You lack fundamental knowledge in respect to the Alpha platform. If you would follow linux-kernel, you would know that only 1GB is currently supported on that specific platform. There are some people working on this problem.

    No, they don't say "Linux sucks", even if you would like to read that out of context. It is a comparison between high end operating systems, you should be proud that our Linux has entered that realm.
  • As someone already pointed out, a nice chunk of the article incorrectly states how much memory Linux can address.
  • If you can afford 2GB+ worth of RAM, probably ECC, then you can afford Solaris.
  • After reading some of the many comments on the D.H. Brown report on Linux, I found that no one bothered to actually read the report, just the poorly written news article describing it. Here is what the report says:

    1. Linux is great for: small file, print, and web servers; appliance-class systems; ISP's; computer nodes in Beowulf clusters.

    2. Kernel 2.0.36 has poor SMP abilities.

    3. Linux can only access *files* (not memory) up to 2 gigabytes in size (Tru64 UNIX can access files up to 14TB in size)

    4. Kernel 2.2 SMP should handle 8 processors, but there is no field evidence showing that Linux programs can properly handle multiple processors.

    5. There is no redundant high availability (HA) clustering for Linux (even NT offers HA clusters with Microsoft cluster service). Beowulf does not help here because it was not designed to be redundant.

    I could go on, but better that you read the executive summary posted for free at http://www.dhbrown.com, or buy the full report for $995.
  • I think the proof is here: http://uptime.hexon.cx [hexon.cx]. It's an open project to monitor the uptime of computer systems. Clients are available for various Unix flavors, Windows, BeOS and more. Over 200 boxes are in the list with their current uptime. The list clearly shows which platforms have the highest uptime, and thus are most stable.
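    For Linux boxes, the number such clients report is sitting right in /proc/uptime; a minimal sketch of reading it (my own example, not code from that project):

        #include <stdio.h>

        int main(void)
        {
            /* /proc/uptime holds "<seconds up> <seconds idle>" */
            FILE *f = fopen("/proc/uptime", "r");
            double up;

            if (!f || fscanf(f, "%lf", &up) != 1) {
                perror("/proc/uptime");
                return 1;
            }
            fclose(f);
            printf("uptime: %.1f days\n", up / 86400.0);
            return 0;
        }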
  • Duh... I don't think so.
    I don't know about the 36-bit thing, but surely adding 4 bits should give 16x the addr. range, not 2x...

    Breace
  • I have a web hosting company running Linux; total hits to our server are about 500,000 requests per day, and about 30,000 visitors. The server is an Intel Celeron 266 with 196MB RAM and lots of HD space. The server load average is about 0.30, which I don't think is very bad. Kernel 2.2.0. And I'm pushing it even more: new visitors range from about 500 to 2,000 per day. Click here [anvdesign.net] to check it out. Yes, we have log files :)
  • by Anonymous Coward
    What everyone is posting about here is just the summary. The full report, much larger, and probably including some hard data, is available if you want to buy it.
  • Regarding FreeBSD's SMP, according to http://www.freebsd.org/~fsmp/SMP/benches.html [freebsd.org]:
    "At this point in development of the SMP kernel we do poorly in many benchmarks. We have been concentrating on other issues such as stability and understanding of the low level hardware issues. As we start to work on areas that improve performance, benchmarks are useful for gauging our progress."


    Though this may be out of date, there are notes in the 3.0 release readme that indicate that SMP is not yet done in freebsd. Comparing the two doesn't seem fair to either.

    -Peter
  • I'd like to mention the rumors that I've been hearing, though not to contradict the report.

    Ext3fs is supposed to be ext2fs with the option to use a journal, or to act as a traditional ext2fs.

    Sounds cool to me.

    -Peter

  • The 2GB limit is correct on the Intel platform. Putting Linux on an Alpha or an UltraSPARC changes the picture. I've been told that the other 2^31 bytes of the address space are reserved for virtual memory.

    Actually, are the MIPS R4k and R10k addressed as 64-bit architectures? If so, then they also have through-the-roof amounts of addressable RAM.

    Anyway, the point is that since linux is not shackled to one architecture the review is dead wrong on this point.

    -Peter

  • Actually, Apache does technically use thread pooling, but more precisely it is "process pooling". The difference is that each process is given its own address space, execution state and OS resources, whereas a thread/task just gives separate execution state to each thread: they all share the same address space.

    On most OSes, Linux included, creating many processes is understandably a lot more heavyweight (and inefficient) than creating lightweight threads. This isn't really that noticeable under light/moderate loads, but when the server is under heavy load and workers are being spawned on a near per-request basis, you'll notice Apache's performance degrade significantly vs. Netscape Enterprise Server, Zeus, or even Java Web Server (which typically uses user-level threading).

    Here's a good paper by Doug Schmidt on web server threading models.. it's about 2 years old so Apache is probably a LOT better now, but it explains the issues & shows benchmarks clearly:

    http://siesta.cs.wustl.edu/~schmidt/INFOCOM-97.ps.gz
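    To make the process-vs-thread distinction concrete, here is a stripped-down sketch of the two dispatch styles being compared (my own illustration, not Apache's or anyone else's actual code; handle_request() is a stand-in for real request handling):

        #include <pthread.h>
        #include <sys/socket.h>
        #include <sys/wait.h>
        #include <unistd.h>

        extern void handle_request(int fd);    /* application-specific, assumed elsewhere */

        /* Apache 1.x style: a pool of pre-forked worker processes, each with
         * its own address space, taking connections as they arrive. */
        void serve_with_processes(int listen_fd, int nworkers)
        {
            for (int i = 0; i < nworkers; i++) {
                if (fork() == 0) {                   /* child becomes a worker */
                    for (;;) {
                        int fd = accept(listen_fd, 0, 0);
                        if (fd >= 0) { handle_request(fd); close(fd); }
                    }
                }
            }
            while (wait(0) > 0)                      /* parent just reaps */
                ;
        }

        /* Thread-per-connection style: all workers share one address space,
         * so spawning one is much cheaper than forking a whole process. */
        static void *thread_worker(void *arg)
        {
            int fd = (int)(long)arg;
            handle_request(fd);
            close(fd);
            return 0;
        }

        void serve_with_threads(int listen_fd)
        {
            for (;;) {
                int fd = accept(listen_fd, 0, 0);
                if (fd < 0) continue;
                pthread_t t;
                pthread_create(&t, 0, thread_worker, (void *)(long)fd);
                pthread_detach(t);
            }
        }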

  • I agree with your sentiment. While their treatment of NT in this study is erroneous in some ways, D.H. Brown has dissed NT significantly in the past, so I think that's why I'm not concentrating on that aspect of the study as much.
    Of course, in Linux vs. NT debates it certainly is disappointing to see them cast as near-equals, which obviously isn't true (though most debates don't always have to be about NT vs. Linux).

    The thing is, why haven't there been any real studies about Mean Time Between Failures of Linux vs. NT? Sure, it's a difficult subject to tackle in a controlled fashion, but enquiring minds want to know :)

    The evidence of NT's up/downtime is mostly anecdotal as well! [or it's just marketing spew]..

  • cool, glad to hear it.
  • a lot of systems that have extremely high uptimes (over 2 years) are internal business systems and probably wouldn't be on such a survey.

    for instance, some mainframe systems have had uptimes of 5+ years.
  • Read those comments again. 2 GB *is* the limitation of physical RAM.


  • The AS/400s we had over at a medium-sized financial services company that I worked for in Canada had 1 gig of memory and 512 megs of memory, respectively.

    They were upgraded a year later to 3 gigs of memory and 1 gig, respectively. Oh, and the second machine was JUST a developer server.

    I know of a major bank that has 4+ gigs of ram for their Internet & Telephone banking computers.

    Such is life in big business. $50 solutions for $5 problems.
  • Most of the evidence I've seen is anecdotal, and I too would like some hard numbers about Linux+Apache+mod_perl vs. NT+IIS+ASP .

    My anecdotal experience tells me that overall the Linux solution would be better, but I know that IIS does thread management a lot better than Apache [a thread pool vs. a process per request] and hence typically serves up pages quicker under lighter loads. Under heavier loads, NT will crash :)

  • As I think I've said before, I don't mind the unfavorable comparison to the high-end Unixes. As has been pointed out, Linux isn't trying to compete with the likes of Solaris or Tru64 Unix (except in the areas where these OSes are deployed where they are extremely overqualified). So, most of D. H. Brown's study doesn't really bother me.

    What kills me is the little proviso: "...as well as Windows NT".

    That's where the FUD really spreads. Linux's strengths compared to NT are dismissed as "anecdotal" or "unproven", while NT's strengths are taken at face value.

    The reliability thing is especially critical. In my view, this is one of the major things that elevates Linux over NT: its stability under heavy load. This is something that I've observed time and time again as an NT and Linux admin.

    And what really sucks is that everyone is willing to do a quick search for studies on Linux's reliability (turning up nothing), but no one is willing to do the studies. So, lazy people like these ding Linux with "unproven stability", while also dissing Linux on not having side-of-the-box features like "high-availability clustering", and assume that NT is more stable because it has these "side-of-the-box" features, even with its proven instability.

    The problem is that most admins prefer a single, stable box over an HA cluster where it makes sense. Sure, HA clusters are great for business-critical databases, but why should an HA cluster be needed for everything to get even basic reliability?

    And the worst part? No one seems to be willing to do the studies. So Linux loses in these asinine assessments every time.

    You know, I shouldn't really be pissed about this. We've come a long way already without the benefit of positive hype, and I don't doubt that Linux will prove itself in some enterprise setting and show all the naysayers. And even if it doesn't - even if it's the best-kept secret in the IT world - it'll keep going strong.

    But I do get irritated at people who publish irresponsible studies like this when my bosses at work tell me that they won't trust an "unreliable" solution like Linux and force me to deploy NT instead. If they forced me to deploy Solaris, AIX, or Tru64, I'd be a bit happier. But when NT is ranked alongside these systems, I get pissed, because it isn't even in the same league.
  • > Hmmm... I thought that at some point Intel made it possible to do 36-bit memory addressing on the i686 processors. This would allow access to 8 GB of memory. Of course, I could totally be off my rocker.

    Yes, but that involves gross hacks, thanks to yet-another-f*cked-up-design-from-youknowwho.

    NT mm engineers screamed "yuck" loudly, but will add support for that because they will be paid for the job.

    Linus (and the "official kernel team") won't mangle the Linux mm to support this. Third parties may, but anyone needing such amounts of memory should use clean 64-bit architectures anyway.

  • There are several efforts to list maximum and average uptimes. There's even a section on this in the High-Availability mini-HOWTO.

    FUD. FUD. FUD.

    Even assuming the article is 100% correct, by not giving an alternative which is superior, you're being hypocritical, and admitting by omission that Linux is the superior solution.



    --

  • Wouldn't it be nice to never fsck again? Or, at least, to be virtually guaranteed that, barring extreme funkiness, data will never be needlessly corrupted.

    In a journaling filesystem, changes to files are not activated until they have been cleanly completed. Thus, if half the change is still in the cache when the power supply dies, no part of the change actually occurred. Many journaling filesystems also keep a preset number of revisions in the history of a file, making it very easy to back yourself out of a mistake.

    There is a performance hit, 7 to 14 percent depending on what you're doing, if I remember the specs on adding JFS to WarpServer 5. But journaling filesystems are a must for many data warehousing applications where you simply can't afford the possibility of corruption.
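    The mechanism itself is simple enough to sketch in a few lines; this is a toy illustration of the write-ahead idea (not how JFS or any real filesystem is implemented): log the intended change and force it to disk before touching the data, so a crash mid-update can be replayed or discarded but never leaves a half-applied change.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Toy write-ahead journal record. */
        struct journal_record {
            long offset;            /* where in the data file to write */
            char data[512];         /* the new contents                */
            int  committed;         /* set only after the data is safe */
        };

        int journaled_write(FILE *journal, FILE *datafile,
                            long offset, const char *buf, size_t len)
        {
            struct journal_record rec = { offset, "", 0 };
            memcpy(rec.data, buf, len < sizeof rec.data ? len : sizeof rec.data);

            /* 1. Log the intent and push it to stable storage first. */
            fwrite(&rec, sizeof rec, 1, journal);
            fflush(journal);
            fsync(fileno(journal));

            /* 2. Only now apply the change to the data file itself. */
            fseek(datafile, offset, SEEK_SET);
            fwrite(buf, 1, len, datafile);
            fflush(datafile);
            fsync(fileno(datafile));

            /* 3. Mark the record committed; recovery replays committed
             *    records and simply discards incomplete ones. */
            rec.committed = 1;
            fseek(journal, -(long)sizeof rec, SEEK_CUR);
            fwrite(&rec, sizeof rec, 1, journal);
            fflush(journal);
            fsync(fileno(journal));
            return 0;
        }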

    All in all, this article doesn't look anti-Linux. It's not particularly useful, and doesn't go out of its way to encourage anybody to use Linux, but honestly, look at what it's actually saying.

    There's no hard data regarding the performance scaling of SMP Linux systems. Well, there isn't. Whaddaya want? Publishable results are more expensive than you think. If they had done their own research in-house, you would have jumped all over them for it, no matter what they said. It would have been a serious issue of credibility for them.

    There's no hard data on long-term reliability of Linux. There isn't. See above.

    Then there's the issue of "high availability" with linux. This was an "ask /." just the other day. There's tons of different ways to do it, but right now, it takes a dedicated geek to set it up. RedHat saw fit to create and release ExtremeLinux for Beowulf clustering, why not a High Availability Linux distribution as well? Since I'm not terribly fond of RedHat lately, it may as well come from Caldera, PHT, or SuSE. (Yeah, Debian could do it too, but since they've got no financial interests, they've got no financial interest in doing it).

    All the tools and parts are there. Somebody just has to expend the resources to build the things this report is missing.
  • Check out one of the links on that page:
    "Unix trounces NT" http://news.com/News/Item/0,4,29416,00.html?st.ne. ni.rel

    Judging from the two articles, it seems DH Brown is a big fan of old school unix. So yeah, if your company can afford AIX/Solaris/etc, they seem to be suggesting you should pay for the reliability and scalability. I don't mind linux not matching up to the big boys yet...in fact, I'd rather see that at this point, this way Sun can ship Linux and help improve it while maintaining a high-end product that still rakes in the bucks. That way linux is seen as a great entry-level product going up against NT, and the cadillac is there when you start making money.

    I'd rather see linux grow as a desktop OS, and hopefully the big companies will start to filter up some of the linux gifts that make it a better desktop OS (ie improvements in gnome/KDE, word processors and PIMs etc.) and maybe toss some money that way.
  • A few months ago, Linux was "relegated" to being useful only as a non-critical webserver - now it's usable for: "file and print servers, Web servers, low-cost number crunchers for scientific computing, and inexpensive, limited-function "thin" client computers."

    And so far as the other criticisms... Enterprise computing is still uncharted territory for Linux. It's going to take time before people start accepting it as being as stable as the other Unixes... But at least it's now being considered in the same league.

    SMP - I forgot where I read it - maybe on the FreeBSD site - but FreeBSD outperforms linux SMP by 17% - so obviously there's room for improvement, no?

    The other limitations may only be a hardware issue (memory support, etc...) but they're still an issue. Just because Linux is hampered by the hardware it runs on, it doesn't mean that it's not an issue affecting the adoption of Linux in the enterprise workspace.

    Overall, the article, in my eyes, more pointed the way towards where Linux needs to go in the future. Too bad it's on CNET where if it's not good, it's gotta be bad...

    -----------------
  • OS X Server has been out for what, 3 weeks? Before that, Apple didn't have a server platform. Apple also has an extremely busy site. Switching OSes on a site that gets that many hits is no trivial matter.
  • That's wonderful. I still don't see this being a real common thing.

    I thought this was comparing Linux and NT on intel hardware for common applications. The memory argument starts to head into the area of Intel, Sun, SGI/Cray which I think was beyond the intent.
  • Good point. There are likely a lot of /.ers that haven't bothered to go to the source (or got hit with the /. effect and couldn't). As a matter of public record, we are all arguing the executive summary. If you go to the site, you have to fill out an ID form to get the summary. You are then invited to purchase the full report, for $995.00 (not $9.95). Nobody I know has coughed up the k-buck, so we're all looking at the executive summary. Normally, I'd be peeved at people judging the book by its cover. However, the above shows the extenuating circumstances. I'd love to see the real report, but I wouldn't love it that much. Does anybody have access to the full report, or money to burn for same? I'd assume that a repost would be mondo illegal, but some intelligent points from it would be appreciated. Until then, we'll have to argue the stuff that we can read.
  • by Anonymous Coward on Saturday April 10, 1999 @01:41PM (#1940809)
    Windows NT Server Enterprise Edition can support user address spaces up to 3GB for specially made executables. Vanilla Windows NT allows user address spaces of 2GB. I'm not positive, but I believe NT can use a full 4GB of physical RAM. I've also heard that there is a really obscure version of NT that employs some sort of hoary paging scheme to use the x86's 36-bit addressing mode for physical RAM (in my opinion this feature is quite useless). Note that I hate NT.

    x86 Linux has a 960MB user address space limit in a default kernel. It is possible to recompile a kernel to allow 2GB of address space per process. Linux has a tradeoff between physical memory installed and user process address space, their sum cannot exceed 4GB. Hence allowing 3GB of address space for user processes limits you to 1GB of physical RAM making this kernel configuration mostly useless.

    My understanding is that Linux on 64 bit Alpha encounters difficulties around 2GB of physical memory due to PCI limitations. This doesn't sound that fundamental an issue, but it isn't a simple kernel recompile to fix it. Hopefully this will be ironed out soon.

    Brian Young
    bayoung@acm.org
  • by Kaz Kylheku ( 1484 ) on Saturday April 10, 1999 @12:29PM (#1940810) Homepage
    Linux has a 4GB address space. One gigabyte is dedicated to the kernel.

    WinNT cannot give 4GB to an application; that is another lie. You need a special configuration of NT Server just to have as much memory as what Linux makes available.

    Secondly, Linux is a 64-bit operating system on 64-bit machines. It's a pure lie to say that some commercial UNIX can have 128 gigabytes of memory and compare that to Linux on Intel. I mean, for crying out loud, doh!
  • by RelliK ( 4466 ) on Saturday April 10, 1999 @01:29PM (#1940811)
    Ok, let's see

    Missing from Linux are high-availability features that would let one Linux server step in and take over if another failed;

    I'm not an expert on the subject but I've seen the high-availability howto at www.linux.org/help, so I know it's possible. Can somebody else comment on the subject?

    full-fledged support for computers with multiple processors;

    uhhm, did they even test this??? Last I checked Linux beats the crap out of NT on SMP...

    and a "journaling" file system that is necessary to quickly reboot a crashed machine without having to laboriously reconstruct the computer's system files.

    OK, that's true, Linux doesn't have a journaled file system. On the plus side, it doesn't crash very often... NT does have a journaled file system, but for some reason it took longer for NT to check the file system after a crash than for Linux to run fsck after the maximal mount count.

    Currently Linux can't use more than 2 gigabytes of memory, and in some cases only 1 GB. Windows NT, on the other hand, can address 4 GB of memory

    This is either a deliberate lie or a lack of knowledge (or both). First, notice that he doesn't specify the hardware; it is assumed that there is only x86 in the world. Also notice the difference between "use" and "address". Linux *can* address 4 gigs of RAM on x86, just like NT. That 4-gig address space is split between kernel and user memory. By default the split is 2-2, but you can change it to 1-3 (1 for kernel, 3 for user) by recompiling the kernel. NT works in the *exact same way*, with only one difference: you can't recompile the kernel. In order to get a 1-3 split you need to buy "Enterprise Edition".
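    If you'd rather measure the split than argue about it, a crude probe (my own sketch; exact results depend on the kernel and its overcommit behaviour) is to keep reserving anonymous address space until the kernel says no:

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            const size_t chunk = 16UL * 1024 * 1024;   /* reserve 16MB at a time */
            unsigned long long total = 0;

            /* PROT_NONE + MAP_NORESERVE reserves address space without
             * committing RAM or swap, so on a 32-bit box this approximates
             * the user address-space limit, not installed memory. */
            while (total < (8ULL << 30) &&             /* sanity cap for 64-bit boxes */
                   mmap(0, chunk, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                        -1, 0) != MAP_FAILED)
                total += chunk;

            printf("reserved about %llu MB of user address space\n",
                   total / (1024 * 1024));
            return 0;
        }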

    Now, Linux also runs on 64 bit platforms, such as Alpha, SPARC and PPC, and it can take full advantage of 64 bit memory architecture. On these platforms Linux can address 2^64 bytes of memory -- I don't even know how much that is...

    NT also runs on Alpha. But (surprise, surprise!) its memory model is 32-bit. That means that even on a 64-bit platform NT cannot address more than 4 gigs of RAM!


    This whole article is pure BS. Notice that they pretty much just say "Linux sucks" without thorough comparison of any kind. Does anybody know where to send email for rebuttal?

  • by IntlHarvester ( 11985 ) on Saturday April 10, 1999 @01:26PM (#1940812) Journal

    Data point - I'm aware of a couple of NT boxes, and one Solaris x86 box, with 2GB. So it's more like 99.5%.

    While that may be top end now, look forward a few years, and it will be much more common to see this amount of memory on x86.

    I'm not really aware of the issues involved in this, but I'm sure that if it turns out to be a problem with Linux, they can slipstream a fix into 2.3.666 or whatever. Commercial operating systems would probably do it at a major version upgrade, which NT won't see for a while after 2000 finally ships.
    --
  • by IntlHarvester ( 11985 ) on Saturday April 10, 1999 @01:21PM (#1940813) Journal

    It's true that Win2000 junks the Domain security model, but that has nothing to do with high availability or journaling.

    (A WinNT Domain is a common list of user/groups shared by a number of computers for login and ACL purposes.)
    --
  • by Anonymous Shepherd ( 17338 ) on Saturday April 10, 1999 @02:02PM (#1940814) Homepage
    It seems news.com provided a review of a review...
    It's pretty bad that their report, news.com's, is uneven, even though it highlights both strengths and weaknesses. It doesn't explain their own reporting, much less Brown's.

    Someone mentioned the pdf; is it available?
    You have to pay for the real report... and there's a form I just filled out for an executive summary...

    I can accept their claim that for 'enterprise computing' that other unices beat it, but not that 'Windows NT holds an advantage'. Price/performance, stability, and interoperability, from hearsay all over the web, seems to be Linux's strength against NT deployments.

    Linux definitely seems to lack robust SMP, or as they say, 'non-trivial SMP scalability', except I'm not so sure that NT qualifies either. Isn't NT limited to 2- or 4-processor Intel solutions, which are themselves not quite so hot for enterprise-level computing, as compared to bigger Sun or Alpha solutions? Maybe someone can correct me and tell me about a distribution of NT that runs on 32-processor Intel or Alpha machines at a reasonable cost and at reasonable performance and uptime?

    As for journaling, high availability clustering and such, I guess that much is true or under development... But I still don't believe that they think NT satisfies their requirements for an enterprise level computing solution!

    Anyone care to correct me?

    AS
  • by Stu Charlton ( 1311 ) on Saturday April 10, 1999 @01:11PM (#1940815) Homepage
    As was discussed when this story was up a few days ago, this study isn't FUD. It makes an honest attempt at comparing commercial Unices and Linux. D.H. Brown has no particular love for NT, either.

    The fact of the matter is that Linux's SMP support isn't on par with Solaris's (4 CPUs vs. 64), Linux's high-availability clusters aren't on par with *any* commercial UNIX's (yet), and Linux's filesystems aren't journaled (yet).

    Some day (Kernel 2.4/3.0) all these features will probably be there, but let's not start touting vapourware over other solutions. Open source can only combat FUD if the code IS THERE. Right now, it isn't.

    "Use the right tool for the right job" - Linux [on Intel especially] isn't it for sites that need extreme scalability & high availability. (A Sun Ultra 10k, AS/400 parallel cluster or S/390 mainframe is better suited to those environments.)

    Ditto for sites that need to run a transaction processing monitor (like BEA Tuxedo) or a high end application server (like Apple's WebObjects). Though, this is changing... I think BEA is thinking of a Linux port... and WebObjects on Mac OS X Server is pretty sweet.

    [ Though not 100% open source; but nothing open source comes even close to Tuxedo or WebObjects in terms of performance, elegance, reusability & developer tools. Perhaps the GNUstep project will adopt the WebObjects framework as another pet project. ]

  • by Lord Greyhawk ( 11722 ) on Saturday April 10, 1999 @12:37PM (#1940816)
    Executive summary pdf link:
    http://www.dhbrown.com/dhbrown/downldbl/linux.pdf

    This contains FUD at higher level than that found in ZDNet.

    Pay attention to how SMP and Linux is "covered" on pages 7-9
    begin quote:
    By boosting the number of locks to somewhere between 10 and 100, the Linux 2.2.5 kernel used by OpenLinux 2.2 should improve its SMP scalability somewhat. But while Linux 2.2 systems can boot on an SMP system with up to eight processors, useful SMP deployment at current levels of granularity has not yet been proven. Little industry-standard or even proprietary benchmark evidence has emerged that demonstrates the performance improvements of database or Web server applications running on SMP systems under any Linux distribution. Although Linux has been tested on a variety of SMP systems, booting on eight-processor systems is far different from demonstrating improved performance on mixed throughput workloads or multi-threaded database applications.
    :end quote

    Rather than doing RESEARCH and STUDY, they merely report the number of CPUs used in previously published NT and commercial Unix benchmarks. (They do not print the actual benchmark results here.) The number of CPUs used is a virtually useless comparative benchmark. Since they selected two benchmarks for which there are no previously published Linux results, they report nothing for Linux. This is used to portray Linux as hopelessly inferior, without actually having to do any work. Check out how they put Linux at 0 CPUs on the graphs. I thought only Microsoft would do something so obviously corrupt and shameless.

    Method: Claim Linux is inferior. Do no benchmarking yourself, but make the lack of data for Linux sound ominously bad. Put in some fancy graphs of useless values selected only for their ability to make Linux look worthless at first glance.

    It is amazing people will pay DHBrown for a report of this quality.
  • by Anonymous Coward on Saturday April 10, 1999 @12:09PM (#1940817)
    Here's the article [news.com].
