
WSJ Says Linux Lags

TroyD sent us a link to a WSJ Article on Linux that Says Linux is Good, but that it lags behind its rivals. Troy also sent a choice quote from the article: "...Linux currently lacks some of the features demanded by corporations. [...] Among them are the ability to run simultaneously on many processors in a single computer and to keep a log of what the computer has done." Cool. I can save a lot of disk space: rm -rf /var/log.
  • by Anonymous Coward

    http://www.dhbrown.com/dhbrown/linux.html

    D.H. Brown has ZERO credibility. This is just dreck written by salespeople.

  • by Anonymous Coward
    I think the problem with this article is that, based on the introduction, and the tone of the article in general, the statement about Linux not scaling to many processors seems to have an implied "...when compared to Windows NT" attached to it. I don't know if this is what the author intended, but in any case this is not true.

    Linux scales better than NT, at least for 2-4 processors (my limited experience). Of course there are commercial Unixes that scale to hundreds of processors, which Linux doesn't do - yet - because it isn't a big issue for the vast majority of Linux applications - yet. If you're going to run a 50,000-user database server, you won't run Linux - but you definitely won't run NT either - the choice will be AIX or a Sun Enterprise server or something like that.
  • by Anonymous Coward
    There is some info here [laney.edu] and here [cpoint.net] if anyone's curious.
  • by Anonymous Coward
    After all, that's how MS puts out its code. An infinite number of monkeys pounding out source :)

    Nah... An infinite number of monkeys pounding on typewriters could turn out Shakespeare.

    Microsoft? Sixteen monkeys sharing a fountain pen and a case of beer.

    Take away the pen and you've got ZDNet.

    Replace the beer with Jolt and you've got Slashdot. ;)

    -D
    dcross@cryogen.com

  • by Anonymous Coward
    ...is the author of the article.
    I ran a search on him at the archive on wsj.com and came up with these headlines:

    "Linux System Performs But Has Some Problems"
    "Linux System Makes Waves But Has Limits"
    "Linux Operating System Gets Big Boost From Support of H-P, Silicon Graphics"

    I could not read any of them since you need a login (did try cypherpunk of course :-), but judging from the article I could read, this guy does not have a clue. And thou shalt not write about things you do not know. I would love to email him and explain some of Linux's features, and I'm sure that some of you would fancy that as well, but I cannot dig up his email address. Please help! Dig dig dig! and post his email as a headline! A password to the articles would also be nice. /Patrix
  • by Anonymous Coward
    The study compared NT and Linux. So they chose the playing field. Neither system is even slightly prepared to serve at the very peak of enterprise-level work.

    The WSJ article claimed to be looking for systems suitable for "the world's toughest jobs". It should have disqualified both Linux and NT. Since it did not, we can dismiss their assessment as flawed at best. To go on and suggest that NT was "better" than Linux by using a study that did not purport to study the stated assumption suggests FUD.

    From what I hear, the WSJ is a Microsoft-first shop. If they can't beat a developer into using NT, then they'll settle for Sun. Using Linux is evil and career-limiting. Soon after said career is limited, it will be ripped out and replaced.
  • by Anonymous Coward on Tuesday April 06, 1999 @10:16AM (#1947272)
    The underlying study is actually a very responsible view of how Linux stacks up against the highest-end commercial OSs. It says that the main shortcomings of Linux are that it lacks proven SMP ability (we're talking 6 or 8 processors here), file system journaling (which is what the clueless WSJ meant when it talked about "keeping a log of what the system has done"), and very-large file support (like, on the TB scale). The big problem with SMP, apparently, is that it takes years of working hand-in-glove with the hardware mfgr to get a properly tuned SMP system. Probably correct.

    While the study ranked NT at about the same level as Linux, it expressly did not consider stability. The study said that stability evidence was almost entirely anecdotal, and there were no real MTBF studies to review.
  • The WSJ article was just plain stupid...

    The version I read in the Globe and Mail [globeandmail.ca] was pretty terrible: it had a worrisome headline ("Linux not quite up to snuff"), six generally accurate and positive paragraphs, but its author obviously ran out of brains before column-inches, misquoting the study as has already been demonstrated.

    I think this was a case of "balance": if something's good, we have to say something bad about it. Unfortunately, they said the wrong things, because they didn't understand what the study was telling them.

    The Wall Street Journal ("WSJ") is a "journal" (diary) about Wall Street. It is very popular with people who trade stocks and invest -- indeed, it is more popular in some circles than the National Enquirer. Unfortunately, WSJ isn't printed in English, and doesn't offer home delivery.

  • If the person were transliterating, we could try to sound it out and perhaps determine what was actually said. Unfortunately, what we've got here is an incompetent translation.

    It's kind of like calling "Halt, stranger, or die!" a greeting. It's not entirely wrong to do so, but there's a great deal of important detail that's been ignored.

  • My biggest beef with NT is the filesystem. How can you have a system where, when one important file such as NTFS.SYS is corrupted, you have to reinstall the entire OS because the filesystem cannot be written to from DOS or anywhere else (and the repair option fails much of the time)?

    It's ridiculous that anyone is willing to put their entire company on that OS...
  • "NO ONE paid for this study."

    Heh, it would be funny if no one bought the full study as well. All that work, and no $995 checks to make up for it.

    Anyway, no one paid for the study beforehand, but they are in it for the money, obviously.
  • The article makes some interesting assertions and provides some good constructive criticism, but unfortunately the criticism was dumbed down to the point of creating misinformation about what Linux CAN do properly.

    They mentioned logging capabilities, and to that I assume that they are referring to a journaling file system, or audit trails, both of which truly are missing on Linux. The trouble with the assertion was that it was too broad, and I believe this stems from an attempt to write about a technical subject for a non-technical audience. The average WSJ reader doesn't know or care what an audit trail is, or what a journaling filesystem is and why you'd want one. The author's solution was to simplify to the point of PHB understandability. Unfortunately this led to an article that gave the impression that Linux has NO logging abilities, something that would petrify any reasonably responsible executive.

    The SMP allegation is a bit trickier for me to understand. I would presume the author was referring to big-iron 64- and 128-processor type scalability; however, this article also implies that Linux lacks even 2- or 4-way scalability, and that NT has greater scalability.

    I'm not sure how best to respond. Since I'm a subscriber to the print edition, I wrote a paper letter to the WSJ outlining my concerns with the article and the possibility that there were some misunderstandings due to the technical nature of the subject. Hopefully things like this will eventually disappear.

    The most important thing is to keep focus. Write code. If you can't write code, test code. If you can't test code, write documentation. Just find something, even the smallest thing, and do it. Anything we do to work on improving Linux and the related applications will solve many more problems than explaining to reporters the errors of their ways.
  • Yeah, Linux will do per-process accounting. Try "man acct" and see what it says. The author of the WSJ article and even you can't seem to RTFM.
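    For the curious, "turning on" that accounting from a program is just a thin wrapper around the acct(2) syscall. A minimal sketch in C, assuming the conventional log path (you need root, and the accounting file must already exist):

        /* Minimal sketch: enable BSD-style process accounting via acct(2);
         * see "man acct". Must run as root, and the accounting file must
         * already exist. The path below is a common convention, nothing
         * mandated. */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            const char *log = "/var/log/pacct";   /* assumed location */

            if (acct(log) < 0) {                  /* start accounting */
                perror("acct");
                return 1;
            }
            printf("kernel now appends a record to %s as each process exits\n",
                   log);
            /* acct(NULL) would turn accounting back off */
            return 0;
        }

    Reading the records back is what tools like lastcomm and sa (from the acct package) are for.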
  • Well, with SGI/Cray out front. 32 processors? Try closer to 256 and beyond. NT doesn't scale well beyond 4 processors, and even at 4 you can see the scaling issues in NT. Worse, the Intel x86 architecture leaves a lot to be desired, especially when you start scaling to tens of processors. Just look at the kind of contention you see on a PCI bus on a single-processor system running a moderately memory- and graphics-intensive process. Even if you scale a Linux/x86 box to XX processors, you may not be able to use most of that power because the chips are always waiting for memory/IO/communication with the video or audio subsystem.
  • As to logging: Most free Unixes do a pretty good job of logging, although most commercial Unixes do better. The difference isn't substantial, but it is the kind of stuff you can only do with intimate knowledge of the hardware. As compared to NT, every Unix flavor I've seen blows the pants off NT w.r.t. logging.

    AFAIK Linux doesn't have general system-level statistics logging à la Performance CoPilot. This makes it a little more difficult to determine exactly where the bottleneck is on a loaded Linux system. (Is my system slow because I'm out of System Time, or am I always waiting for disk I/O, or maybe the PCI bus is jammed up?)

    As to your aside: most big systems spend a very small fraction of their time on all of the resource management tasks, and these tasks are necessary when you have to determine what to add to a system. It is extremely difficult to determine where the bottleneck is in a system when you don't have statistical information on every subsystem over a period of time, especially when the bottleneck isn't something obvious like memory or CPU time, but something more fundamental like contention on your SCSI buses or memory bandwidth issues.
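    On the statistics point: even without a Performance CoPilot equivalent, the raw counters are sitting in /proc. A rough sketch of sampling the aggregate CPU line from /proc/stat (on 2.x kernels the fields are user, nice, system, idle, in jiffies; a real monitor would sample twice and diff):

        /* Rough sketch: read the aggregate CPU counters from /proc/stat.
         * On 2.x kernels the "cpu" line is: user nice system idle (jiffies).
         * A real monitor samples twice and diffs to see where time goes. */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/stat", "r");
            unsigned long user, nice, sys, idle, total;

            if (!f || fscanf(f, "cpu %lu %lu %lu %lu",
                             &user, &nice, &sys, &idle) != 4) {
                fprintf(stderr, "could not parse /proc/stat\n");
                return 1;
            }
            fclose(f);

            total = user + nice + sys + idle;
            printf("user %lu%%  system %lu%%  idle %lu%%\n",
                   100 * (user + nice) / total,
                   100 * sys / total,
                   100 * idle / total);
            return 0;
        }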
  • Posted by NJViking:

    According to Netcraft:

    www.msnbc.com is running Microsoft-IIS/4.0 on NT3 or Windows 95

    Well there you have it. The silly website is running IIS v4.0!

    NJV

  • Posted by LordPraetor:

    I would certainly run a 50,000 user database server on Linux... I just wouldn't use the standard login apps. Depending on what you are doing, my Dell PowerEdge 2300 running v2.2.3 and MySQL could handle a db of that size easily. I wish I could say the same of my Dell PowerEdge 2300 running NT4...
  • Posted by LordPraetor:

    Like a graphical interface? (X)

    Nuff said
  • Posted by !ErrorBookmarkNotDefined:

    This letter goes directly to a source used in the article. This letter is directly on point. It clarifies one of the central questions about the MSNBC article: "Where did this information come from?"
    And it gets a score of 2, just a hair above some AC posts?


    -----------------------------
    Computers are useless. They can only give answers.
  • Posted by patg:

    Has anyone refuted them? Is there any way to publicly show them to be in error?

    This falls into the disinformation category. This is about the most uninformed article yet.
  • Posted by Nino the Mind Boggler:

    "This article is automatically bad just because it's posted on MSNBC."

    C'mon folks. We have to do better than that. I don't know the details on what Linux is and is not capable of, so I need some assistance sorting out the FUD from the valid criticism. What is Linux capable of as far as SMP and logging? What are Linus, AC, et al working on in those areas?
  • I think many of the problems you note here actually have little or nothing to do with Linux. If you are trying to compare Big Iron with any Intel-Linux combination, you're going to find Big Iron more scalable. But I think most if not all of the time this is due to a lack of scalability of the x86 architecture and the components, system boards, etc. that comprise it.

    UltraLinux is known to run on a 16-processor UltraSPARC box (an Enterprise 450 IIRC) and that same version of our beloved OS will theoretically run on a 64-processor E10K, though AFAIK nobody has actually done so - they're rather hard to come by. Such a machine certainly constitutes Big Iron, though, and I'd love to see the results, whatever they show, of Linux vs. Solaris on such a box, or Linux-E10K vs almost any mainframe solution.

    Still, since all hardware supported by NT is also supported by Linux, to rank NT ahead is ridiculous. It would be nice to know exactly how DH Brown arrived at its rankings for "general business computing." Size of monetary contributions to the DH Brown slush fund? Cost of the OS? Survey of PHB's? Engineers? Users? Or did they run real, objective tests in the field or in a lab? And even if the latter is true, what tests were run and how? The article does not even attempt to inform us what the DH Brown report was attempting to assess, much less disclose any real information. There are cases where Linux may lose, especially on Intel; that's fine. But any report that ranks NT ahead of Linux has a pretty bad odor about it, regardless of whether we are considering Big Iron.

  • I noticed Alan Cox mentioning on the kernel mailing list that Linux 2.3 was going to add many of the features mentioned in the above post, including support for more than 65k users. Sorry I don't have the exact URL.
  • MSNBC have run quite a lot of articles in frank praise of Linux and critical of Microsoft; they're nothing like Slate who really do seem to be an openly biased FUD-organ.

    This article isn't theirs; it's a reprint from the Wall Street Journal, who are in turn quoting a report by a "consulting group" that no-one's ever heard of and that clearly don't know a damn thing.

    --
  • The other, and often overlooked, issue is Linux's ability to run on >> 2-4 procs. For many larger businesses, their core systems run on many-processor machines that are designed to be extremely fault-tolerant and stable (uptimes > 10 or 20 years are common). Beowulf-style clusters aren't really an option. The article (which was posted yesterday) is more about the appropriateness of Linux as a replacement for S/390 and AS/400 type servers.

  • >The first problem: "Run simultaneously on many processors" has been answered by SMP.

    According to the SMP FAQ, Linux's SMP support has been tested on 4 processor systems and theoretically should support up to 16 processors. While this is a reasonable number, it probably falls short of being "many."
  • I agree! This article should be the top-scoring article in this thread!
  • I'm reading the exec summary of the report right now, and I have a question for anyone out there with experience in various Unixen and NT.

    First, the premise of the report is that Linux is not enterprise-scale material, right? Okay, the intro paragraphs from the summary end with this conclusion - after mentioning the OSes.

    And I quote:
    "They [the two Linux distros] both fall short of the conventional production-grade implementations of proven, non-trivial SMP scalability; high-availability clustering capabilities; journaling file systems; logical volume managers; large files; and many other less significant but useful functions."

    Uh, and NT Server 4.0 Enterprise Ed. has all this? Can someone tell me if this is true, yes/no?

    Another question. This summary doesn't seem to make any real judgements about the OSes in terms of how well they work, or what you need to get all these wonderful goodies in terms of hardware resources, training, and administration. I find this rather a large oversight. I mean, MS might be able to offer everything, but how well does it hold up, and what kind of resources do you have to throw at it to get to this scale; what about TCO, ROI, etc.? I agree that enterprise scale needs big iron, and Linux ain't there yet, but I also believe that NT ain't there neither. I think I have a problem with this approach. Also, the clueless journalists who "summarized" the report, or summarized the summary most likely, really should have kept their fingers off the keyboards and on their dicks and just provided a link to the free PDF file.

  • A report by D.H. Brown Associates is used as a source for John Kirch's NT Server 4.0 vs. Unix [unix-vs-nt.org] page. They don't seem to be too biased. On the other hand, I couldn't figure out how Linux came in under NT. Even if they were talking about journaling, NT doesn't do that either, and Linux DOES do SMP. They either got their facts screwed up, or they had some reason that they didn't present in the article.

  • As near as I can tell from the executive summary, Linux lost to NT mainly because there wasn't enough information available about its performance. The requisite studies and benchmarks didn't exist for Linux, so it lost in those categories by default. The executive summary claimed that NT has a journaling FS. Is this even true? The only other thing that NT seemed to win out on is clustering for web and database serving.

    They apparently went strictly on functionality rather than price too. While they do tell you that Linux can be used to build a Beowulf supercomputer, they say that NT can do basically the same thing using public domain software packages. What they forgot to tell you is that for the price of all the NT licenses you'll need for that, you could buy a real supercomputer.

    I think the report tried to be objective, but I also think they cut NT a lot more slack than they did Linux.

  • They used OpenLinux 2.2 (which uses the 2.2.5 kernel) for the review, as well as RedHat 5.2 with the 2.0.36 kernel.

  • I believe the report actually said that Linux didn't do SMP well, and that it didn't have a journaling FS. Which, oddly enough, it said that NT did have.

  • by Danse ( 1026 )

    Didn't look at the date, but the report does claim that they used Open Linux 2.2 with the 2.2.5 kernel. Perhaps they are confused?

  • Well, at least in the amount of space that is required and the number of crashes per day.

    Misfit
  • How much was the Microsoft employee paid for this and how long will it take for the person who approved this article to get used to their new job title?
  • That's not what they said. They *SAID* linux does not do logging, and everybody knows *EXACTLY* what was meant by it.

    They *MADE SURE* everybody knew what they meant by it.

    Read the next few lines of the WSJ article where they tried to explain their comments...

  • Yeah, numbnuts, it's a subscriber-only document. The available one doesn't have a discussion of Linux.
  • And it tells just about zero regarding their methodologies.

    Yeah, there's more to a study than the results. You must manage something. Pity.

  • Linux is a good OS. Probably the best available for the Intel architecture.

    However, it does not come close to offerings from Sun or IBM for SMP scalability, security, and high availability [i.e. fail-over clusters].

    Wanna fix this? Join the Linux High Availability project. Write a journaled filesystem. Etc. Because for NOW, this isn't FUD; these are the hard facts.
  • Go to www.infoworld.com and read their take on the D.H. Brown study. They focused on commercial Unices. NT was included for balance.

  • I'm very glad to hear this. I think this was my point.. we need to counteract FUD by "just doing it" - creating the features the market thinks it needs.

    Of course, this is for Linux 2.3... and it will be a long... long.. time before it is "production quality" and released in Linux 2.4/3.0 (12-18 months?). So my observations [about Linux not competing with high-end Unices] were very valid for the time being.
  • by Stu Charlton ( 1311 ) on Tuesday April 06, 1999 @11:11AM (#1947310) Homepage
    - You can't run a 64-processor SMP box on Linux.
    - You can't get a government B1 security rating on Linux (You can on "Trusted Solaris" or on AIX)
    - Inclusive with the above, we need a journaled filesystem
    - You can't get highly-available failover clusters with Linux. [though the linux-HA project is working on it]
    - You can't get single-system-image clusters for scalability with Linux (Beowulf uses a low-level messaging API that essentially ties your app to Linux)
    - You can't have terabyte files for large databases [that means no data warehouses]
    - You can't have > 100,000 users on a Linux box for very large networks [Solaris & AIX can]

    etc.

    By now you're all probably hopping mad at me, but please folks: take a deep breath. Is this really FUD? Or is it merely pointing out some small nitpicks? My, my... people are so quick to criticize and yet so hyper-sensitive to their own medicine.

    Let's get real: we're only talking some minimal feature-lack, and not very "widely used" features at that. Wanna fix it? Contribute code. This is how Linux makes FUD irrelevant - not through whining about the WSJ's misleading prose.

    The underlying study by D.H. Brown is rooted in fact, and it means one thing: we now have specific target areas in which Linux could be improved, provided someone with the time and need will contribute.
  • They may be talking about a journalling filesystem.

    It's obvious that the article is the work of a technological nincompoop who is transliterating information obtained from someone else.
  • Yes - NT can do this, but the system log fills up awfully quickly on an NT server if you do it...

    To enable it, enter User Manager (no - really - stupid eh?), and change the Policies->Audit options.
  • by Matts ( 1628 ) on Tuesday April 06, 1999 @11:09AM (#1947313) Homepage
    First, here's [dhbrown.com] the original "Overview" of the report (the full report is stupidly expensive). Basically, what DH Brown does is compare shipping systems on a feature-for-feature basis. If we go on this basis, RH5.2 lacks many commercial OS features, such as journalling, high-end SMP, transaction services, Corba/COM integration, etc. that these more expensive OSes offer. The msnbc overview glossed over the report quite a bit - the report actually stated that Linux is good for a lot of things, such as web services, even for high-end systems provided you have a very close fit - that's what Linux is good for.

    There are issues with Linux, like shipping security software out of the US, that commercial OSes can get around with licences from the govt. That's a big problem for Linux - you can't just download, or buy for 2 bucks, an SSL-enabled web and news server. You can't even get it for $3000. Not that it's hard to set up, mind - I've done it myself - configuring Apache for SSLeay was quite easy, but that's not what DH Brown measures - and it's not something that can easily be measured (unfortunately, for the free s/w crowd).

    There are some serious shortcomings in the report though, such as looking at 2.0, not 2.2 (which is why Linux appears to fall down in comparisons of SMP, large file support, and max memory support - 2GB in 2.0). The section on SMP testing simply has a big blank space for the performance of Linux - which makes it look like it comes last at first sight. I wouldn't mind betting it's better than NT with a 2.2 kernel.

    Unfortunately there's also the issue that the report just discusses the features that go into NT (and the others) that provide high reliability, such as HA clustering (which is shite on NT), resource management (also shite on NT) etc. They don't actually take into account how stable the system is in every day use. If they did, NT would come last.

    I'm not sure about how Linux is worse off than commercial Unixes vis-à-vis Internet services. Can someone clear up what AIX and Tru64 offer over Linux in terms of IP protocols/tools, TCP/IP extensions, bundled web browsers/servers, bundled email servers, and e-commerce tools? Perhaps it's just that very last one, which comes down to issues about SSL again... Sigh.

    Other than those things, I'm not quite sure how they have Linux so far down the scale. I'm inclined to believe they just got it plain wrong. What am I missing?

    Matt.
  • I can't help but wonder how much they paid for that ad dressed up to look like content...
  • Gumpy,

    Unless Greg had agreed to let you publish his email address, it was most irresponsible of you to publish it in a public forum. Now his Inbox could get flooded with flames.

    The correct way would have been to ask people to email you, and then you should have forwarded the relevant emails.

    If, of course, Greg had indeed asked you to publish his email address, please accept my apologies in advance.

  • Does NT actually do this? I think this is the crux of the issue. Linux can take the criticism that other systems have journaling file systems and it doesn't. What we don't like is being labeled inferior to NT. Does NT logging actually allow you to see who edited files, who used programs, etc.? I've never seen it under NT, but I'll admit I got my machine at work working reasonably well and tend to stay away from its guts.

    Kashani

  • is bad :-) This clearly wasn't comparing Linux to the likes of Solaris or *BSD in terms of logging and SMP - more like OS/360 and other "Big Iron" OSes.

    Although the NT thing is still a mystery (didn't they take into account memory usage or stability at all?)
  • Mr. Torvalds has said that he is anxious for Linux to continue to grow ...

    Here's how you can tell the guy is lying. Has anyone ever seen Linus Torvalds anxious? And Linus's command of the English idiom exceeds that of most native speakers, so he would never say "anxious" when he meant "excited". No way did he talk to Linus. He's just making it up as he goes along.
    -russ

  • Those are all valid. But so are these:

    - You can't run a 64-processor SMP box on NT.
    - You can't get a government B1 security rating on NT.
    - You can't get high-availability failover on NT (although they're working on it)
    - You can't get single-system-image clusters on NT. Heck, you can't get *any* clusters on NT.
    - You can't have > 100,000 users on an NT box for very large networks.

    Had they said, "Linux is good, but is still lacking features and lags behind Solaris and Tru64 Unix", I'm sure we all would have nodded and agreed. But NT??

    Personally, I'm getting sick and tired of hearing people talk about Linux as inappropriate for an enterprise, and then talking about NT as an "enterprise-level OS". Sure, I'm all for criticizing Linux where it falls short. But let's have a little objectivity, OK?

    (Note to Stu: No, I'm not flaming you. You're right, of course. But this "NT" thing really has me burned.)
  • Pure and simple. But the problem is, it's really going to affect Linux's perception in a lot of very influential minds. Does the gray lady have an agenda here? I remember when Steve Jobs was starting up NeXT, and the WSJ bashed the endeavour really badly. It essentially shot down NeXT's enterprise market for a long, long time, until he dropped the hardware. Some of the facts here could be contradicted by basic fact-checking of the kind normally done by the editorial team in a paper like the Journal - that is, the author writes the article, submits it, and the editors do fact-checking. It looks like this article wasn't fact-checked. I wonder why. -p
    --
  • NTFS does do some journaling, but I believe it's of a limited level. It is not the same as Veritas or JFS, etc.

    For instance, suppose an NT (or standard Unix) partition of, say, 500 gigs or larger were to go down hard. When the machine came back up, running a full fsck on it would take several hours... You'd be down for the better part of a day or two.

    But with Veritas, it can recover quite quickly without having to do a full integrity check on the filesystem, and you're back up and running in only a handful of minutes.

    As far as your other comment...

    NTFS also allows you to extend a logical partition with additional space from another physical partition.


  • NTFS has some journaling, but I think it may cover only metadata. Windows 2000 is going to include functionality from Veritas, which may result in a full journaling filesystem. I'm not clear on that part.

    When the article discusses SMP, they're talking about larger scale than say 4 processors. More like 32 or so. Obviously IBM leads the way with this, with Sun close behind.
  • by sheldon ( 2322 ) on Tuesday April 06, 1999 @10:47AM (#1947323)
    When they're talking about logging, they are most likely talking about Audit logs.

    Who opened a file, who ran a program, who wrote to a file, when this occurred, etc.

    This is a pretty critical piece for many businesses in terms of security policy, etc.

    Having a journaling filesystem available such as Veritas is also important, which is what some others assumed was being talked about.

    I'm rather amazed at the number of comments calling this FUD when nobody seems to be quite sure what is being talked about. Of course this lack of understanding appears to have been encouraged by the initial poster.
    • Linux doesn't run on multiple processors
      Linux does run on 2 or 4 processors. That's basically the limit based on presently available motherboard technology. What Linux doesn't do is scale like say AIX or IRIX does, where you can run on dozens, hundreds or thousands of processors. I think this will be partially addressed in the coming months as the big iron vendors start adopting Linux as part of their road map. They will have to modify the kernel to work on a switch based architecture rather than a bus based architecture, which will take some time.
    • Linux doesn't have logging ability
      Again, this is partially true. The contents of /var/log are good for some things, but there really needs to be a database to collate the information and let you browse through it, as opposed to the discrete files that exist now (a toy sketch of that collation idea follows at the end of this comment).
    • Linux may be there in 4 years or so
      I think if the Linux community decides they want to address these concerns then it will take much less time. Linux moves fast on things the community wants.
    • Windows NT ranked higher than Linux
      I would have to think this is true, though it didn't have anything to do with the scalability or logging concerns. NT has application support which at the moment Linux lacks. I think the ranking conclusion was misleadingly tied in with the above statements. It may or may not have been done on purpose.

    Anyway, somebody should post a response (a well-written response, not "you're all morons and will be the first up against the wall when the revolution comes!") that does a fair dissection of the article.
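    Since "collate /var/log into something browsable" keeps coming up, here is a toy sketch of the first step: counting which programs generate syslog traffic, assuming the usual "Mon DD HH:MM:SS host tag: message" format of /var/log/messages. A real tool would load the parsed records into a database, as suggested above.

        /* Toy collator: count messages per program tag in a syslog-format
         * file ("Apr  6 10:16:32 host prog[pid]: text"). A real tool would
         * put the parsed records into a browsable database instead. */
        #include <stdio.h>
        #include <string.h>

        #define MAXTAGS 1024

        struct entry { char tag[64]; long count; };

        int main(int argc, char **argv)
        {
            const char *path = argc > 1 ? argv[1] : "/var/log/messages";
            static struct entry tab[MAXTAGS];
            char line[1024], tag[64];
            int ntab = 0, i;
            FILE *f = fopen(path, "r");

            if (!f) { perror(path); return 1; }
            while (fgets(line, sizeof line, f)) {
                /* skip timestamp (3 fields) and hostname (1 field),
                 * then take the tag up to ':', '[' or whitespace */
                if (sscanf(line, "%*s %*s %*s %*s %63[^:[ \n]", tag) != 1)
                    continue;
                for (i = 0; i < ntab; i++)
                    if (strcmp(tab[i].tag, tag) == 0) break;
                if (i == ntab) {
                    if (ntab == MAXTAGS) continue;   /* table full */
                    strcpy(tab[ntab++].tag, tag);
                }
                tab[i].count++;
            }
            fclose(f);
            for (i = 0; i < ntab; i++)
                printf("%8ld  %s\n", tab[i].count, tab[i].tag);
            return 0;
        }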
  • many = four.
    Just use 'fight training rules'. When one person encounters enemies, you count like this: one, two, lots, many, run away!

    Therefore, many=four.

    (ok.. so it's silly, but this discussion needed some humor).
  • If it's not spectacular, it's not worth writing about. So either "Linux on its way to world domination" or "Linux greatest hoax of the century" is their way to go.

    I wouldn't trust anybody who has spent the last 5 years with Windows, tried to install Linux once, failed at the first attempt and now tries to tell everyone his truth.

    I remember having quite a hard time when I installed NT for the first time (years ago). "Easy", "user friendly", "intuitive", they are all subjective impressions.
  • No.
  • In November '98 when the report came out (AFAICT)?

    Wow.

    They're wasting their time, they should be in fortunetelling or stocks...
  • a) Linux has lovely logs. Read 'em daily. Much better than the tripe you read on MSNBC. Lots of fun tools around too for gathering stats from your logs. Actually, many of the logs are maintained by separate programs. We should really say that _linux_ has a good log for kernel activity, and _wuftpd_ has a good log, and _apache_ has a good log. But all the data is there about everything your processes do. Most importantly, about connections and transfers via your services.

    b) Linux runs beautifully SMP (2.1.x and above). I worked on a dual system in school last year, and my friend has one. It's very clean. And it's amusing to watch linux switch tasks back and forth between processors when it's not doing anything, looking for better load balancing. Alan Cox says Linux runs equally beautifully on up to 16 or 32 processors, and I trust him.

    c) Linux does _not_ have a journalling FS that tracks everything that happens on the disk, guaranteeing your recovery of lost data, say after a hardware crash. NT doesn't have one either. Some big commercial UNICES do (AIX, what else?). I believe Linus said it's a priority for Linux 2.3.x, so maybe _maybe_ initial implementation by the end of this year.
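    To make point (a) concrete: those per-program logs mostly funnel through the syslog(3) API, and syslogd routes them to files according to /etc/syslog.conf. A minimal sketch of the producing side (the daemon name and messages here are made up):

        /* Minimal sketch of how a daemon hands lines to syslogd, which
         * is what actually maintains most of the files under /var/log. */
        #include <syslog.h>

        int main(void)
        {
            openlog("mydaemon", LOG_PID, LOG_DAEMON);  /* tag + facility */
            syslog(LOG_INFO, "service started");
            syslog(LOG_WARNING, "disk almost full: %d%% used", 97);
            closelog();
            return 0;
        }

    Where those lines land is then one routing rule in /etc/syslog.conf (e.g. "daemon.*  /var/log/daemon.log").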
  • No, you fight fire with water. And you fight FUD with the truth.
  • by bert ( 4321 )
    It may not always be as detailed as what you describe of Digital Unix, but as a matter of fact, one thing I frequently use Linux for is the 'hardware info goldmine' that /proc is. Combined with boot-time and kernel module info, you can often find out a lot of nasty things.

    It does of course depend on the driver a bit. But when a win95, nt or novell box gives you hardware suspicions, a linux rescue disk can make your life much easier!
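    A trivial illustration of that goldmine: every /proc entry is plain text you can read like a file. A sketch that dumps a few of the usual suspects (exactly which entries exist varies by kernel version and loaded drivers):

        /* Sketch: the /proc "hardware goldmine" is just readable text.
         * Which entries exist varies by kernel version and drivers. */
        #include <stdio.h>

        static void dump(const char *path)
        {
            char buf[256];
            FILE *f = fopen(path, "r");

            if (!f) { printf("%s: not available here\n", path); return; }
            printf("==== %s ====\n", path);
            while (fgets(buf, sizeof buf, f))
                fputs(buf, stdout);
            fclose(f);
        }

        int main(void)
        {
            dump("/proc/cpuinfo");     /* CPU model, flags, bogomips   */
            dump("/proc/interrupts");  /* which devices own which IRQs */
            dump("/proc/pci");         /* PCI devices on older kernels */
            return 0;
        }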
  • In NT4 if you right-click on any file or directory and select Properties, then the Security tab and the Auditing button, you get an auditing dialog box. It appears that you can specify any user(s) or group(s) of users and any action like read, write, execute, delete, etc. Presumably this gives you an extremely high level of surveillance over who's doing what to which files.

    A year or two ago I wanted to monitor access to a file on MP-RAS (NCR Unix). I needed to know who was reading the file and when. I was told by everyone I asked that it couldn't be done on Unix but it would be easy on a mainframe. (That advice didn't help much since I wasn't on a mainframe)

    So, if I wanted to put surveillance on any old file on my Linux box to get a log of who read that file, could I do it? If someone just reads a file without modifying it (using more, for example) does that get logged somewhere?
  • Roughly: a journaling filesystem logs all transactions made to the hard disk. This makes it possible to ensure that things are in a well-defined state if the power gets unplugged, or some other disaster happens, at any point during the write.

    Journaling is frequently provided by high-end database programs, which grab an entire partition at a time and write to it however they see fit.

    Providing it at the filesystem level helps ensure reliability of all data, not just data stored in high-end databases.

    I recall someone was working on a journaling filesystem for Linux - a French graduate student, if memory serves - but I can't find the webpage anymore. Last I checked was a few months ago and it was still pre-beta.

    Jamie McCarthy
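    For anyone wondering what "logging transactions" buys you mechanically, here is a toy illustration of the write-ahead idea described above - emphatically not any real filesystem's on-disk format. The intent record is forced to disk before the data is touched, so a crash at any point leaves either a replayable journal entry or a completed write:

        /* Toy write-ahead journal: a sketch of the idea, not a real
         * filesystem format. Crash before COMMIT => replay the BEGIN
         * record on reboot; crash after => the write is already durable. */
        #define _XOPEN_SOURCE 500      /* for pwrite */
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>
        #include <fcntl.h>

        int journaled_write(int jfd, int dfd, off_t off,
                            const char *buf, size_t len)
        {
            char rec[128];
            int n = snprintf(rec, sizeof rec, "BEGIN off=%ld len=%lu\n",
                             (long)off, (unsigned long)len);

            if (write(jfd, rec, n) != n || fsync(jfd) < 0)
                return -1;                      /* intent is durable    */
            if (pwrite(dfd, buf, len, off) < 0 || fsync(dfd) < 0)
                return -1;                      /* data is durable      */
            n = snprintf(rec, sizeof rec, "COMMIT\n");
            if (write(jfd, rec, n) != n)
                return -1;
            return fsync(jfd);                  /* transaction complete */
        }

        int main(void)
        {
            int j = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0600);
            int d = open("data.bin", O_WRONLY | O_CREAT, 0600);

            if (j < 0 || d < 0) return 1;
            return journaled_write(j, d, 0, "hello", 5) < 0;
        }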

  • by BB ( 5050 )
    Yes, the article is wrong. But so what? I like the fact that misinformation exists about Linux, especially in a widely read newsrag. These kinds of articles will just help create a divide between the ignorant and those that can think for themselves. It won't change the facts at all. Most corporations will never help improve Linux - they'll just jump on whatever bandwagon the WSJ tells them to. Very few corporations have a reason to help improve Linux and free software. Who **NEEDS** them? I'd rather see them spend money on NT and have a good laugh than anxiously await mass acceptance by corporations.
  • I think this is what he was talking about - not system logs. However, there is an option in 2.2 for "BSD style process accounting". I have no use for this myself, but it sounds like the sort of thing he was looking for.

    Of course, he was not using kernel 2.2. If he wasn't going to judge linux based on the latest kernels, why review it at all? No one uses linux "out of the box" if their needs would be better met by compiling a custom latest kernel. It's a whole different mentality from the commercial world, and it didn't seem like he understands this.

    That said, who in the fuck would pay $1000 for this nonsense? If they weren't willing to even upgrade kernels, one wonders how much work went into testing... it smacks of "we're too lazy to do any testing, so we'll compile a bunch of info you already know, and charge you $1000 for it". What a joke. I need to start my own FUDly Gartner Group clone.

  • Actually, it lists Beowulf (not by name, but by description) as one of the "four areas" in which Linux excels. I think that when the author refers to multiple processors in the same computer, he's talking SMP.
    Christopher A. Bohn
  • Why don't you read the actual report before forming any judgements? I know for a fact that the authors of the study have been working VERY hard on this report. There is a *very* long track record of OS comparisons by Tony Iams that most of you will ignore, but are, and have been, taken VERY seriously by most of the top executives and developers at MS, Sun, IBM, and Compaq. Trust me. There is NO bias. Yes, this particular article appeared on MSNBC, but it is a link to the WSJ story. Claiming any sort of FUD or bias when most of you haven't even bothered to read the summary or full report (btw, NT did NOT place "first" in this study, so get off that point) is pure ignorance on your part.
  • The author of the report is a long time programmer (from what I know, he was programming WAY before any of you and holds a degree from Carnegie Mellon) and probably knows more about any OS than most of you guys. So, yes, if he is the one on that panel, he will be speaking the same language.
  • http://www.dhbrown.com/dhbrown/linux.html

    Now, instead of making the FUD statements, etc., why don't you take a look for yourself? Several things, from what I can tell, were taken way out of context in the WSJ article. Go take a look and then talk.
  • As I said, read the report before making judgements on it based on an article written by the WSJ. Do you really think they (the WSJ) know about Linux in the deepest technical sense? Ha!
  • As someone who works with them, I can tell you for a fact that NO ONE paid for this study. They've been doing them for years. That's one of the reasons you have to PAY for it.
    Don't start slandering a company you know nothing about. It can get you in trouble. Why don't you do a search for a report the author did comparing AS/400 to NT a couple of years back and you will see there is no funding done by M$.
  • Well, genius, that IS the point of running a business. Is it not? And that's what they do. Stop trying to make it something it's not.
  • Apparently he didn't. I am amazed at the stupidity by some people basing their ignorant flames on a WSJ article that completely misquoted and/or misunderstood the report. Obviously, this genius shouldn't be IN the position to dictate what services his company purchases since he has no clue what the report says. Gee, that's smart. Base your purchasing decisions on a newspaper article. I doubt DH Brown would want your business anyway....
  • (Before I go on... Yes, I actually do make a living in the IT industry selling my "expertise" to industry vendors. I keep my identity private for a reason.)

    Actually, I *DID* read the report, hot shot. And I feel that, given their methodology, the report was fair and valid. No, they didn't do in-house testing. Why should they? What is to be gained by that? (They also probably cannot afford to buy an AlphaServer, Sun Enterprise server, SGI Origin box, and an RS/6000, but that's another issue...) PC Week and InfoWorld labs do it for everyone to read. I don't believe you will find any kind of "score card summary" from anyone else. And considering the work Mr. Iams has done in the past (which I doubt you have any knowledge of), I consider this to be an excellent feature-for-feature comparison. You can argue till you're blue in the face about whether or not you agree, but I simply believe that this is a credible study and report. What it appears to me is that, like many others on here, you can't accept the fact that Linux isn't the be-all-end-all of OSes. I certainly do not discount its power. But let's face the facts: it still does NOT compare to AIX and many other Unices that have been developed over the past 30-whatever years. I certainly expect it to be an "enterprise-class" OS in the foreseeable future, but you sure as hell don't see anyone running a Linux box to handle, say, an HMO's data center. Do you? I'd call that "enterprise class."

    Now, you criticize DHBA for using industry-standard benchmarks (read: industry-*recognized* benchmarks that have been developed over a long period of time) to gauge performance. Well, what do you expect them to do? Spend money and resources they most likely do not have to come up with their own performance benchmarks? I'd like to see _you_ do that. Even if they did, you'd jump on them because they were using something that no one else in the industry recognizes as a standard! (FYI, it usually takes about 3 years of intense debate and testing by a consortium of vendors and programmers to certify a benchmark such as SPECweb96.)

    Also, I don't see why you think this firm is putting down Linux and trying to defend some sort of status quo. The study simply points out what the OS currently lacks and where development efforts need to be focused. They point out the plusses AND minuses of EACH OS. As someone else stated, this study can serve as a form of roadmap for Linux.

    Anyway, I am done. Enough time wasted on this topic...
  • Linux scales to 16 processors, IIRC. NT can scale to a theoretical 16 as well, although it's pretty much limited to 4 under Intel and 8 on Alpha.

    Solaris is designed to run on up to 64 processors (an E10000 Starfire), each of up to 400MHz. It's only really IRIX and Cray stuff that goes over 64 processors, although you could argue that Beowulf can go that high, but that's clustering, not SMP.

    Journaling would be good, as it would dramatically improve the reliability of linux under system crashes (rare under linux) or other hardware failures (isn't a lot you can do if the power supply dies). Bear in mind, however, that Solaris has only just put this in by default; before you had to pay for things like Solstice disk suite to get a journaling filesystem, so linux doesn't lag that far behind.
    --

  • Anything written ambiguously and/or poorly is FUD whether it was meant to be or not. FUD: Fear, Uncertainty, and Doubt. Ambiguous writing causes fear, uncertainty, or doubt in the minds of those who read it seeking objective information.
  • In which case, the complaint would be justified.

    Solaris has auditing and ACL's for 'Trusted Solaris', though I have not (yet) used it. This is how one obtains B1 security for Solaris.

    VMS has done auditing correctly for a long time. It has made my job much, much easier.

    Click here for Security Event Classes [digital.com] that can be audited/alarmed. (It wouldn't hurt to read the whole Security auditing section. The ANALYZE/AUDIT tool is very nice.)

    It is also good to have hardware-level events logged. A couple of years ago I had a VAX crash. I simply did ANALYZE/ERROR/SINCE=TODAY and found out I had a SIMM that was having ECC/parity errors. Since it logged the bank of memory with the failure I knew exactly which SIMM. I called the DEC service guy, he came out, and we switched it out during lunch. No one noticed.

    Yes, Linux could use these things.

  • The research has a couple of flaws - some the fault of DH Brown, others just due to bad timing.

    The features are poorly tested and weighted badly - it's all very well saying something has a feature, but if it is shite then it is of little use.

    NT has many applications and a variety are included (when you pay big bucks) - unfortunately they are usually unstable due to the nature of win32 and the instability of the NT kernel.

    The research weighed in favour of what NT had and heavily against what it hadn't, and made it worse by not testing or benchmarking them.

    Linux should have come in at least equal or even ahead of NT - adding SP4 and the extras is no different to adding the 2.2 kernel (my experience leads me to believe the latter is easier and quicker).

    The other problem is bad timing - the research pre-dated many important events, such as SAP and other major Enterprise Application vendors not only supporting Linux but supporting it with SMP and other enterprise features.

    The research also pre-dated the 2.2 kernel release, which is a shame.
  • Not *exactly* journaling, but one project is here [tuwien.ac.at].
  • ... in the "limitations" article here. [slashdot.org]

    The consensus was that the article was talking about "many" processors meaning more than two or four... does Linux run on 128 processors?

    In terms of "logging" people felt that this was referring to a journaling filesystem. Granted, I don't know what this means. :-)

    I'll admit, though, that the way it's written, it certainly does look a bit like FUD.

  • Linus also didn't plan on supporting non-PC hardware or non-x86 chipsets.

    There's nothing stopping something like this being a compile option or a patch set, etc. Though I don't know anyone who's run into this 'limitation' yet either.
  • Journaled Filesystem. Linux needs it to be competitive with the big dogs. If the feature were available it would remove a huge barrier to Linux's success. The other OSs mentioned all have this feature (HP-UX?).
  • I guess Beowulf and syslogd are just random collections of letters which don't mean anything.

    Well, why can't they be? After all, that's how MS puts out its code. An infinite number of monkeys pounding out source :)

    The rest of the article isn't even worth flaming, it's so wrong.

    ---------------------------------------
    The art of flying is throwing yourself at the ground...
    ... and missing.
  • Since we all know what classic FUD is, this report shows what diet FUD would look like. Does this mean that one of the two Celeron processors on my box (which I painfully assembled after 3 days of soldering) is not working SMP? Exactly what kernel did these people evaluate? Linux 1.0x?

    I'm really surprised a report like this could be posted on a non-biased (ahem) media outlet like MSNBC. Or has the MS part of MSNBC finally eaten up all the good reporting? This is seriously sick. Along the lines of Linux not supporting SCSI, Ethernet, and kitchen-sink networking.


    --
  • If I'm not mistaken, every time an article is reposted on a popular network, it is read and corrected by an editor before being posted. (If that is the case, why did MSNBC let this article onto their servers?)


    --
  • Wow. We're being evaluated according to high-end criteria! This is *good* news. So we're not doing so well against supercomputers! Hooray!

    It sounds like we beat windows a long time ago.
  • He's been saying this for a while. Why doesn't he just put the working code out there in a CVS repository? What's all this "waiting for a preliminary release"?

    I'm scratching my head as I walk through this bazaar...
  • by jms ( 11418 ) on Tuesday April 06, 1999 @11:59AM (#1947371)
    Actually, mainframe CPUs themselves are not that much more overwhelmingly powerful than desktop CPUs. The mainframe benefits mainly from I/O channels, block-mode devices (including terminals), and better scheduling algorithms.

    I/O channels are CPUs dedicated to supervising I/O operations, so that if you need to read in 100 blocks from disk, you build a list of the blocks along with their addresses, and start the channel. The transfer is done, and you receive an interrupt when it is complete ... as opposed to receiving an interrupt when each block transfer completes.

    This really pays off when you include block mode terminals, like 3270s. A 3270 contains a screen buffer. When you are working in a text editor like VM XEDIT, everything you type is stored in the terminal until you press return, or a function key. At that point, the terminal transmits a list of all the screen fields that have been changed. If you were running VI on a unix system, you would be peppering the computer with console interrupts with each keystroke. More if you are running X. This is how mainframes can efficiently support 1000+ online terminal users.

    Mainframe scheduling algorithms are specifically designed to separate the workload into interactive and non-interactive users. If the scheduler decides that you are an interactive user, you get small timeslices and more of them. When you start your big program and it goes CPU-bound, the scheduler notices that you are using your full timeslice, and quickly moves you into a different queue, so that, for instance, after a while, your program will receive a timeslice that is 16 times as long, but only receive the timeslice 1/16 as often as an interactive user. So your background process lurches along, but you don't notice, because the instant it starts doing I/O to the terminal, it becomes an interactive process again.
    These sorts of tricks are what keep mainframes from appearing to be "bogged down" even when their resources are massively overcommitted.

    These algorithms have been fine-tuned for about 30 years, and are specifically designed to best utilize block-mode I/O devices and large numbers of interactive users attached to boring 3270 terminals. It's a VERY different workload than you'd find on a Unix system, and the two workloads don't compare well.

    In fact, one of the biggest problems with mainframes is running TCP/IP efficiently, because TCP/IP *does* pepper the system with interrupts.

    - jms
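    The interactive/batch feedback trick jms describes is easy to sketch. Below, a job that burns its whole quantum sinks into queues with longer but rarer timeslices, and one terminal I/O promotes it straight back; the numbers are illustrative, not any real mainframe's tuning:

        /* Sketch of feedback scheduling as described above: CPU-bound
         * jobs sink to longer, rarer slices; terminal I/O promotes a
         * job back to the interactive queue immediately. */
        #include <stdio.h>

        #define NQUEUES 3

        struct job {
            const char *name;
            int queue;            /* 0 = interactive .. 2 = batch    */
            int used_full_slice;  /* burned its whole last quantum?  */
            int did_terminal_io;  /* touched the terminal last time? */
        };

        /* slice grows 4x per level: 1, 4, 16 ticks (the "16 times as
         * long, 1/16 as often" figure from the comment above) */
        static int slice_ticks(int q) { return 1 << (2 * q); }

        static void requeue(struct job *j)
        {
            if (j->did_terminal_io)
                j->queue = 0;                            /* promote */
            else if (j->used_full_slice && j->queue < NQUEUES - 1)
                j->queue++;                              /* demote  */
        }

        int main(void)
        {
            struct job editor  = { "editor",  0, 0, 1 };
            struct job payroll = { "payroll", 0, 1, 0 };
            int round;

            for (round = 0; round < 3; round++) {
                requeue(&editor);
                requeue(&payroll);
                printf("round %d: %s queue %d (%2d ticks), %s queue %d (%2d ticks)\n",
                       round, editor.name, editor.queue, slice_ticks(editor.queue),
                       payroll.name, payroll.queue, slice_ticks(payroll.queue));
            }
            return 0;
        }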



  • NT can act as a router, but is not set up as one by default. Setup is similar to Unix, with a ROUTE command. (Or MS has a no-cost routing add-on with a GUI.)

    NT does broadcast all the time, which seems to be a characteristic of the NetBIOS-over-TCP/IP protocol. Is OS/2 or Samba any better in this regard? Need to find someone with a network monitor to check ...




    --
  • by raistlinne ( 13725 ) <`lansdoct' `at' `cs.alfred.edu'> on Tuesday April 06, 1999 @10:24AM (#1947380) Homepage
    It's true that Linux doesn't scale like Solaris on the Big Iron to 128+ processors. On the other hand, neither does NT, and NT was ranked in front of Linux. I know that VA research demoed an 8 CPU Xeon system at Linux Expo, and I've heard about someone running linux on a 12 CPU Sun system. Can NT even do that? Why on earth would NT be grouped with the Big Iron OSes like Solaris, Irix, Aix, etc?

    As for the journaling file system, I think that that's on its way, though I don't know for sure.
  • As far as journaling file systems, it's my understanding that that's on its way, but it definitely isn't here yet.

    As far as SMP systems, ask VA research how their 8-CPU Xeon system runs. Care to comment, Chris?

    As for terabyte files, try an Alpha, or any other 64-bit platform. I'm fairly sure that my Alpha could do terabyte files, if I only had the hard drive for it... :-) I know that I don't have the year 2038 problem, nor do I have the 2(4?) gig RAM limit on my Alpha.

    Now, can NT do 6-8 CPUs worth a damn? I'm fairly certain that NT can't do 64+ CPUs worth anything. And does it have a journaling file system worth mentioning? I've never really dealt with journaling file systems. Does anyone know (btw, worth a damn/worth mentioning means on the same sort of caliber as Solaris/Irix/Aix/etc. can do it)?

  • -----Original Message-----
    From: dhbrown.com
    Sent: Tuesday, April 06, 1999 1:02 PM
    To: tcooper
    Subject: Re: Feedback



    Tom,

    The report carefully defines its terms and conclusions,
    summarized briefly in a free summary on our website at
    www.dhbrown.com.

    The news report you cited is indeed somewhat inaccurate
    (it is a paraphrase rather than a quote). Linux
    does "run simultaneously on many processors" if
    many equals, say 4 to 14. Our report however does
    also look at systems that have proven performance in
    both production and industry-standard benchmarks up
    to 64-way SMP. Linux has not yet reached that level.
    In fact, we were mildly shocked that, despite reading
    various kernel lists and talking to various Linux
    vendors, there is currently no really good publicly
    verifiable benchmark evidence of Linux's scalability
    even on 2-way or 4-way workloads (although I would be
    surprised if we didn't see some in 6 months). It boots
    on a 14-way as David Miller demonstrates, but the
    scalability claims are still a little in the air.
    I wouldn't make a claim that Linux doesn't scale.
    I would make the claim that Linux advocates have yet
    to reasonably demonstrate that it does.

    "Keeping a log" is a similar over-simplification. I believe
    the reference was to "event management" facilities just
    now becoming available in UNIX where all the system
    logs can be accessed from a single console in a consistent
    manner. Managing various logs in UNIX has long been
    more painful than it needs to be.

    Thanks for your respectful feedback.

    Regards,
    Greg Weiss
    Research Analyst, Systems Software
    D.H. Brown Associates





    tcooper on 04/06/99 11:41:00 AM

    To: DHBA Systemsw/DHBA
    cc: tom_cooper@bigfoot.com
    Subject: Feedback




    According to http://www.msnbc.com/news/256197.asp Your study claims
    that
    "Linux currently lacks some of the features demanded by corporations
    that intend to run their entire business on computers. Among them are
    the ability to run simultaneously on many processors in a single
    computer and to keep a log of what the computer has done."

    What sources can you cite for this assertion? Linux is multi-processor
    scalable, and does provide logs that are at least as detailed as
    anything that you can retrieve from an NT box.

    Respectfully,
    Tom Cooper
  • Well, Linus has said that he doesn't plan on making Linux scale above 16 processors, because after that you're hurting people who only have one.
  • I would be reasonably comfortable in saying that slashdot is a biased source of information too
    Slashdot does not pretend to be a journalistic site that provides impartial reporting. The opinions you see on slashdot are very obviously those of individuals. MSNBC purports to provide unbiased information. That is the distinction. If a Time/Warner media outlet is biased towards Windows NT, we might be able to effect a change in that, but if an MS-owned media outlet is biased, there is little likelihood of change.
  • by The Infamous TommyD ( 21616 ) on Tuesday April 06, 1999 @11:28AM (#1947416)
    sheldon is right. I'm one of the few people who actually do research on audit logs (at CERIAS [purdue.edu], previously COAST [purdue.edu]).

    Since most people who administer commercial Unix boxen don't enable them, many people don't even realize that some systems have rather extensive logging mechanisms. The best that is out there is the Sun Basic Security Module (BSM) audit facility. It'll generate lots of logs, and sure, it takes a fair bit of resources, not to mention disk space, but it allows you to run fairly sophisticated host-based intrusion detection systems and very good post-mortems!

    Because of Linux's open nature, it would be very useful to have a verbose audit trail mechanism. This would allow security researchers like myself to base new systems on Linux more easily. (and yes, it would most likely end up being GPL'd!)

    A journaling file system would be super neato as well. From a security standpoint, one could get a much better idea of what an attacker did to the system files even without running tripwire.

    In closing, these are somewhat advanced features and there is no reason why they can not be added to Linux. I believe most of the commercial Unices had them added to an existing system as well. Well, Nuff said.

  • by Cowards Anonymous ( 24060 ) on Tuesday April 06, 1999 @10:08AM (#1947424) Homepage
    Was that Linux was being compared to Big Iron and found lacking. Linux logging is no worse (and often better) than most commercial Unices, but the only place I've seen absurd levels of multi-processor and system logging are in the "Real Computer" world.

    I think the writer's opinion seems somewhat biased (surprise, surprise) but he brings out some reasonable questions: just how far can a single Linux box scale, and for what tasks?

    Now we should get out there and improve all the little things that need improvement to help Linux, and *nixes in general, reach these entrenched Heavy Hardware markets.

    Complete aside: I believe the reason mainframes need massive CPU power has nothing to do with capacity and everything to do with the tremendous overhead of the monitoring, accounting, tracking, logging, and process/resource management features of most Real OSes (e.g. OS/360).
  • The following are from an email from Greg Weiss (grweiss@dhbrown.com) :

    The news report you cited is indeed somewhat inaccurate (it is a paraphrase rather than a quote). Linux does "run simultaneously on many processors" if many equals, say 4 to 14. (me: but they are also looking at up to 64 SMP, and linux hasn't reached that level)

    In fact, we were mildly shocked that, despite reading various kernel lists and talking to various Linux vendors, there is currently no really good publicly verifiable benchmark evidence of Linux's scalability even on 2-way or 4-way workloads (although I would be surprised if we didn't see some in 6 months)

    I wouldn't make a claim that Linux doesn't scale. I would make the claim that Linux advocates have yet to reasonably demonstrate that it does.

    "Keeping a log" is a similar over-simplification. I believe the reference was to "event management" facilities


    So, if you have Linux running on an SMP system, PLEASE mail Greg Weiss (grweiss@dhbrown.com) [mailto] and tell him!
  • [I've based this post on the PDF document someone, I've forgotten who, kindly posted on this board - not on the whole report. I don't have $995 to spare, funnily enough.]

    Considering the amount of hype Linux has received in the past few months, it is only to be expected that a report such as this would be forthcoming. I think it is important to bear in mind that while the report does not truly compare like for like, it is probably necessary that a report such as this _is_ produced before the general populace start expecting too much of the Linux community, and when they are disappointed, turn away from you never to return.

    The report does bring to light a number of reasons why I and other sysadmins I know have generally steered clear of Linux in favour of *BSD and commercial UNIX systems - and, when the occasion has demanded it, Windows NT. I know this may seem like blasphemy to many readers, but corporate necessity wins over any prejudices or principles.

    Pricing: I don't think anyone will argue about this. Linux is, in fact, cheaper than all the others. ;)

    Scalability: Linux is not as scalable as operating systems such as Solaris and HP-UX because it was never designed to be so. It was originally designed for an x86 platform, and has only relatively recently emerged as a contender in the mid-range server market. Thus it is to be expected that it is perhaps not quite as scalable as its commercially available counterparts. I doubt that anyone would seriously care to dispute this.

    Reliability, availability, serviceability: I believe the same holds to be true. Linux was originally designed as a home hacker's system, not as a mission-critical server platform. While great strides have been made in this area as Intel and other x86 systems have become bigger and better - and thus thrust the PC into the low-end server arena - there is still a long way to go. Beowulf (not investigated in the report) is to my knowledge the only Linux clustering solution currently available. SMP resource management is still rather limited.

    Here the report admits to a lack of hard evidence about system stability, at least in terms of mean time between system failures. To quote the report verbatim, "anecdotal evidence abounds." (I've heard of Linux systems whose uptime has exceeded a year, though in my rather limited experience with Linux I have yet to witness this. I've seen uptimes of about six months with FreeBSD, and a maximum of about three weeks under Windows NT. The HP9000 I'm logged into only has a current uptime of four days, but since it's a development machine that doesn't mean a lot.)

    Internet functions: the only gripe here is about availability of good commercial e-commerce applications for Linux - something of which the readership here is only too aware, I hope!

    Distributed Enterprise Services: likewise.

    System Management: The report has quite a few good things to say about linuxconf, and does list a number of shortcomings in the system management tools of alternative OSes, so I don't see a problem here.

    PC Client Support: Samba gets a mention (which is good), but again the gripe here is lack of good commercial software for Linux.

    So, in short, the report doesn't really slag off Linux all that badly. The general tone of the report, if you ask me, seems to be that Linux is getting there but for any serious large-scale server applications, it is an idea whose time has not yet come. And I for one am inclined to agree. It doesn't yet have the scalability and resilience of those operating systems designed for high-end servers (notwithstanding the operating system itself may be more stable -- but what good is a stable OS if your data is lost on the day when it _does_ crash?) and it doesn't yet have the commercial software to make a good system a great system.

    So, in short, I don't believe the report is saying that Linux is a bad system. It's also not telling us anything new. It _is_ a viable system, for the low-end server market; and it's a damn sight cheaper than anything else out there. The WSJ editorialized it to death. The report itself is quite reasonable.

    "If it's a bad idea, trash it. If it's a good idea, steal it and release the source code."
