WSJ Says Linux Lags
TroyD sent us a link to a WSJ article on Linux that says Linux is good, but that it lags behind its rivals.
Troy also sent a choice quote from the article:
"...Linux currently lacks some of the features
demanded by corporations. [...] Among them are the
ability to run
simultaneously on many processors in a single computer and
to keep a log of what the computer has done."
Cool. I can save a lot of diskspace: rm -rf /var/log.
The actual report is at (Score:1)
http://www.dhbrown.com/dhbrown/linux.html
D.H. Brown has ZERO credibility. This is just dreck written by salespeople.
Linux does not lag behind NT.. (Score:1)
Linux scales better than NT, at least for 2-4 processors (in my limited experience). Of course there are commercial Unixes that scale to hundreds of processors, which Linux doesn't do - yet - because it isn't a big issue for the vast majority of Linux applications - yet. If you're going to run a 50,000-user database server, you won't run Linux - but you definitely won't run NT either - the choice will be AIX or a Sun Enterprise server or something like that.
Info on journaling filesystems (Score:1)
monkeys at MS (Score:1)
Nah... An infinite number of monkeys pounding on typewriters could turn out Shakespeare.
Microsoft? Sixteen monkeys sharing a fountain pen and a case of beer.
Take away the pen and you've got ZDNet.
Replace the beer with Jolt and you've got Slashdot. ;)
-D
dcross@cryogen.com
Lee Gomes, Wall Street Journal... (Score:1)
I ran a search on him at the archive on wsj.com and came up with these headlines:
"Linux System Performs But Has Some Problems"
"Linux System Makes Waves But Has Limits"
"Linux Operating System Gets Big Boost From Support of H-P, Silicon Graphics"
I could not read any of them since you need a login (I did try cypherpunk, of course).
Yea, and NT does all this. (Score:1)
The WSJ article claimed to be looking for systems suitable for "the world's toughest jobs". It should have disqualified both Linux and NT. Since it did not, we can dismiss their assessment as flawed at best. To go on and suggest that NT was "better" than Linux by using a study that did not purport to study the stated assumption suggests FUD.
From what I hear, the WSJ is a Microsoft first shop. If they can't beat a developer into using NT, then they'll settle for Sun. Using Linux is evil and career limiting. Soon after said career is limited, it will be ripped out and replaced.
Look at the underlying D.H. Brown study (Score:5)
While the study ranked NT at about the same level as Linux, it expressly did not consider stability. The study said that stability evidence was almost entirely anecdotal, and there were no real MTBF studies to review.
See, newspapers suck. (Score:1)
The version I read in the Globe and Mail [globeandmail.ca] was pretty terrible: it had a worrisome headline ("Linux not quite up to snuff") and six generally accurate and positive paragraphs, but its author obviously ran out of brains before column-inches, misquoting the study as has already been demonstrated.
I think this was a case of "balance": if something's good, we have to say something bad about it. Unfortunately, they said the wrong things, because they didn't understand what the study was telling them.
The Wall Street Journal ("WSJ") is a "journal" (diary) about Wall Street. It is very popular with people who trade stocks and invest -- indeed, it is more popular in some circles than the National Enquirer. Unfortunately, WSJ isn't printed in English, and doesn't offer home delivery.
Transliterating? (Score:1)
If the person were transliterating, we could try to sound it out and perhaps determine what was actually said. Unfortunately, what we've got here is an incompetent translation.
It's kind of like calling "Halt, stranger, or die!" a greeting. It's not entirely wrong to do so, but there's a great deal of important detail that's been ignored.
Mostly true, but why is NT in there? (Score:2)
It's ridiculous how anyone is willing to put their entire company on that OS...
NOT a paid promotional company. (Score:2)
Heh, it would be funny if no one bought the full study as well. All that work, and no $995 checks to make up for it.
Anyway, no one paid for the study beforehand, but they are in it for the money, obviously.
The trouble of non-technical technical articles (Score:1)
They mentioned logging capabilities, and I assume they are referring to a journaling file system, or audit trails, both of which truly are missing on Linux. The trouble with the assertion was that it was too broad, and I believe this stems from an attempt to write about a technical subject for a non-technical audience. The average WSJ reader doesn't know or care what an audit trail is, or what a journaling filesystem is and why you'd want one. The author's solution was to simplify to the point of PHB understandability. Unfortunately this led to an article that gave the impression that Linux has NO logging abilities, something that would petrify any reasonably responsible executive.
The SMP allegation is a bit trickier for me to understand. I would presume the author was referring to big-iron 64- and 128-processor type scalability; however, the article also implies that Linux lacks even 2- or 4-way scalability, and that NT scales better.
I'm not sure how best to respond. Since I'm a subscriber to the print edition, I wrote a paper letter to the WSJ outlining my concerns with the article and the possibility that there were some misunderstandings due to the technical nature of the subject. Hopefully things like this will eventually disappear.
The most important thing is to keep focus. Write code. If you can't write code, test code. If you can't test code, write documentation. Just find something, even the smallest thing, and do it. Anything we do to improve Linux and the related applications will solve many more problems than explaining to reporters the errors of their ways.
Audit trails (Score:1)
Mostly true, but why is NT in there? (Score:1)
My impression of this... (Score:2)
AFAIK Linux doesn't have general system-level statistics logging a la Performance Co-Pilot. This makes it a little more difficult to determine exactly where the bottleneck is on a loaded Linux system. (Is my system slow because I'm out of system time, or am I always waiting for disk I/O, or maybe the PCI bus is jammed up?)
As to your aside: most big systems spend a very small fraction of their time on all of the resource management tasks, and these tasks are necessary when you have to determine what to add to a system. It is extremely difficult to determine where the bottleneck is in a system when you don't have statistical information on every subsystem over a period of time, especially when the bottleneck isn't something obvious like memory or CPU time, but something more fundamental like contention on your SCSI buses or memory bandwidth issues.
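The bookkeeping such a statistics logger does is simple enough to sketch. Below, the two snapshot dicts are hypothetical numbers standing in for two reads of kernel counters (the kind of thing /proc exposes), ten seconds apart; a tool like Performance Co-Pilot is essentially this plus scheduled sampling and long-term storage:

```python
# Convert two snapshots of monotonically increasing counters into
# per-second rates over the sampling interval. The counter names and
# values here are made up for illustration, not real /proc fields.
def rates(before, after, interval_s):
    return {k: (after[k] - before[k]) / interval_s for k in before}

t0 = {"disk_reads": 120_000, "disk_writes": 45_000, "intr": 9_800_000}
t1 = {"disk_reads": 126_000, "disk_writes": 45_900, "intr": 9_812_000}

r = rates(t0, t1, 10.0)
print(r)  # disk_reads: 600.0/s, disk_writes: 90.0/s, intr: 1200.0/s
```

Keep a series of these rate snapshots around and the "where is the bottleneck" question becomes a matter of looking at the history instead of guessing.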
Let's hear it for NT (Chirp, chirp) No, really... (Score:1)
According to Netcraft:
www.msnbc.com is running Microsoft-IIS/4.0 on NT3 or Windows 95
Well there you have it. The silly website is running IIS v4.0!
NJV
Linux does not lag behind NT.. (Score:1)
I would certainly run a 50,000 user database server on Linux... I just wouldn't use the standard login apps. Depending on what you are doing, my Dell PowerEdge 2300 running v2.2.3 and MySQL could handle a db of that size easily. I wish I could say the same of my Dell PowerEdge 2300 running NT4...
Basically everything BeOS has, Linux doesn't. (Score:1)
Like a graphical interface? (X)
Nuff said
Score of only 2? Eh? (Score:1)
This letter goes directly to a source used in the article. This letter is directly on point. It clarifies one of the central questions about the MSNBC article: "Where did this information come from?"
And it gets a score of 2, just a hair above some AC posts?
-----------------------------
Computers are useless. They can only give answers.
refutation (Score:2)
Has anyone refuted them? Is there any way to publicly show them to be in error?
This falls into the disinformation category. This is about the most uninformed article yet.
OK, here's the gut reaction from /. (Score:2)
"This article is automatically bad just because it's posted on MSNBC."
C'mon folks. We have to do better than that. I don't know the details on what Linux is and is not capable of, so I need some assistance sorting out the FUD from the valid criticism. What is Linux capable of as far as SMP and logging? What are Linus, AC, et al working on in those areas?
Re: My impression of this... (Score:2)
UltraLinux is known to run on a 16-processor UltraSPARC box (an Enterprise 450 IIRC) and that same version of our beloved OS will theoretically run on a 64-processor E10K, though AFAIK nobody has actually done so - they're rather hard to come by. Such a machine certainly constitutes Big Iron, though, and I'd love to see the results, whatever they show, of Linux vs. Solaris on such a box, or Linux-E10K vs almost any mainframe solution.
Still, since all hardware supported by NT is also supported by Linux, to rank NT ahead is ridiculous. It would be nice to know exactly how DH Brown arrived at its rankings for "general business computing." Size of monetary contributions to the DH Brown slush fund? Cost of the OS? Survey of PHB's? Engineers? Users? Or did they run real, objective tests in the field or in a lab? And even if the latter is true, what tests were run and how? The article does not even attempt to inform us what the DH Brown report was attempting to assess, much less disclose any real information. There are cases where Linux may lose, especially on Intel; that's fine. But any report that ranks NT ahead of Linux has a pretty bad odor about it, regardless of whether we are considering Big Iron.
Well, it's true. (Score:1)
MSNBC are surprisingly good; it isn't them. (Score:1)
This article isn't theirs; it's a reprint from the Wall Street Journal, who are in turn quoting a report by a "consulting group" that no-one's ever heard of and that clearly don't know a damn thing.
--
what the heck? (Score:1)
Please Clarify, Mr. Iams (Score:1)
According to the SMP FAQ, Linux's SMP support has been tested on 4 processor systems and theoretically should support up to 16 processors. While this is a reasonable number, it probably falls short of being "many."
Score of only 2? Eh? (Score:1)
The report - and a question (Score:1)
First, the premise of the report is that Linux is not enterprise-scale material, right? Okay, the intro paragraphs from the summary end with this conclusion - after mentioning the OSes.
And I quote:
"They [the two Linux distros] both fall short of the conventional production-grade implementations of proven, non-trivial SMP scalability; high-availability clustering capabilities; journaling file systems; logical volume managers; large files; and many other less significant but useful functions." End quote.
Uh, and NT Server 4.0 Enterprise Ed. has all this? Can someone tell me if this is true, yes/no?
Another question. This summary doesn't seem to make any real judgements about the OSes in terms of how well they work, or what you need to get all these wonderful goodies in terms of hardware resources, training, and administration. I find this rather a large oversight. I mean, MS might be able to offer everything, but how well does it hold up, and what kind of resources do you have to throw at it to get to this scale? What about TCO, ROI, etc.? I agree that enterprise scale needs big iron, and Linux ain't there yet, but I also believe that NT ain't there neither. I think I have a problem with this approach. Also, the clueless journalists who "summarized" the report - or, more likely, summarized the summary - really should have kept their fingers off the keyboards and on their dicks and just provided a link to the free PDF file.
consulting group... (Score:1)
A report by D.H. Brown Associates is used as a source for John Kirch's NT Server 4.0 vs. Unix [unix-vs-nt.org] page. They don't seem to be too biased. On the other hand, I couldn't figure out how Linux came in under NT. Even if they were talking about journaling, NT doesn't do that either, and Linux DOES do SMP. They either got their facts screwed up, or they had some reason that they didn't present in the article.
My impression of this... (Score:1)
As near as I can tell from the executive summary, Linux lost to NT mainly because there wasn't enough information available about its performance. The requisite studies and benchmarks didn't exist for Linux, so it lost in those categories by default. The executive summary claimed that NT has a journaling FS. Is this even true? The only other thing that NT seemed to win out on is clustering for web and database serving.
They apparently went strictly on functionality rather than price, too. While they do tell you that Linux can be used to build a Beowulf supercomputer, they say that NT can do basically the same thing using public-domain software packages. What they forgot to tell you is that for the price of all the NT licenses you'd need for that, you could buy a real supercomputer.
I think the report tried to be objective, but I also think they cut NT a lot more slack than they did Linux.
they did use 2.2.x (Score:1)
They used OpenLinux 2.2 which uses the 2.2.5 kernel for the review, as well as RedHat 5.2 with the 2.0.36 kernel.
Read the report before making judgements. (Score:1)
I believe the report actually said that Linux didn't do SMP well, and that it didn't have a journaling FS. Which, oddly enough, it said that NT did have.
Hmmm... (Score:1)
Didn't look at the date, but the report does claim that they used Open Linux 2.2 with the 2.2.5 kernel. Perhaps they are confused?
Linux Does lag behind NT (Score:1)
Misfit
Only one comment... (Score:1)
AIX wins.. (Score:1)
...and everybody knows *EXACTLY* what was meant by it. They *MADE SURE* everybody knew what they meant by it. Read the next few lines of the WSJ article, where they tried to explain their comments...
And do you have a Username/Password? (Score:1)
Shit for brains, I read the Executive Summary. (Score:1)
And it tells just about zero regarding their methodologies.
Yeah, there's more to a study than the results. You must manage something. Pity.
Linux does not compete. (Score:1)
However, it does not come close to offerings from Sun or IBM for SMP scalability, security, and high-availability [i.e. fail-over clusters].
Wanna fix this? Join the Linux-HA (High Availability) project. Write a journaled filesystem. Etc. Because for NOW, this isn't FUD; these are the hard facts.
the STUDY did not solely compare NT (Score:2)
Great! (Score:2)
Of course, this is for Linux 2.3... and it will be a long... long... time before it is "production quality" and released in Linux 2.4/3.0 (12-18 months?). So my observations [about Linux not competing with high-end Unices] were very valid for the time being.
Well, it's true. (Score:4)
- You can't get a government B1 security rating on Linux (You can on "Trusted Solaris" or on AIX)
- Inclusive with the above, we need a journaled filesystem
- You can't get highly-available failover clusters with Linux. [though the linux-HA project is working on it]
- You can't get single-system-image clusters for scalability with Linux (Beowulf uses a low-level messaging API that essentially ties your app to Linux)
- You can't have terabyte files for large databases [that means no data warehouses]
- You can't have > 100,000 users on a Linux box for very large networks [Solaris & AIX can]
etc.
By now you're all probably hopping mad at me, but please folks: take a deep breath. Is this really FUD? Or is it merely pointing out some small nitpicks? My, my... people are so quick to criticize and yet so hyper-sensitive to their own medicine.
Let's get real: we're only talking some minimal feature-lack, and not very "widely used" features at that. Wanna fix it? Contribute code. This is how Linux makes FUD irrelevant - not through whining about the WSJ's misleading prose.
The underlying study by D.H. Brown is rooted in fact, and it means one thing: we now have specific target areas where Linux could be improved, provided someone with the time and need will contribute.
``Keep a log of what computer has done'' (Score:1)
It's obvious that the article is the work of a technological nincompoop who is transliterating information obtained from someone else.
Audit trails - Questions (Score:1)
To enable it, enter User Manager (no - really - stupid eh?), and change the Policies->Audit options.
My take on this. (Score:3)
There are issues with Linux, like shipping security software out of the US, that commercial OSes can get around with licences from the govt. That's a big problem for Linux - you can't just download, or buy for 2 bucks, an SSL-enabled web and news server. You can't even get it for $3000. Not that it's hard to set up, mind - I've done it myself - configuring Apache for SSLeay was quite easy, but that's not what DH Brown measures - and it's not something that can easily be measured (unfortunately, for the free s/w crowd).
There are some serious shortcomings in the report, though. Such as looking at 2.0, not 2.2 (hence why Linux appears to fall down in comparisons of SMP, large-file support, and max memory support - 2GB in 2.0). The section on SMP testing simply has a big blank space for the performance of Linux - which makes it look like it comes last at first sight. I wouldn't mind betting it's better than NT with a 2.2 kernel.
Unfortunately there's also the issue that the report just discusses the features that go into NT (and the others) that provide high reliability, such as HA clustering (which is shite on NT), resource management (also shite on NT), etc. They don't actually take into account how stable the system is in everyday use. If they did, NT would come last.
I'm not sure about how Linux is worse off than commercial Unixes vis-a-vis internet services. Can someone clear up what AIX and Tru64 offer over Linux in terms of IP protocols/tools, TCP/IP extensions, bundled web browsers/servers, bundled email servers, and e-commerce tools? Perhaps it's just that very last one, which comes down to issues about SSL again... Sigh.
Other than those things, I'm not quite sure how they have Linux so far down the scale. I'm inclined to believe they just got it plain wrong. What am I missing?
Matt.
Only one comment... (Score:1)
Clarification from D.H. Brown (Score:1)
Gumpy,
Unless Greg had agreed to let you publish his email address, it was most irresponsible of you to publish it in a public forum. Now his Inbox could get flooded with flames.
The correct way would have been to ask people to email you, and then you should have forwarded the relevant emails.
If, of course, Greg, had indeed asked you to publish his email address, please accept my apologies in advance.
Audit trails - Questions (Score:1)
Kashani
Taking quotes out of context... (Score:1)
is bad
Although the NT thing is still a mystery (didn't they take into account memory usage or stability at all?)
Linus Torvalds ... "anxious"?? (Score:1)
Here's how you can tell the guy is lying. Has anyone ever seen Linus Torvalds anxious? And Linus's command of the English idiom exceeds that of most native speakers, so he would never say "anxious" when he meant "excited". No way did he talk to Linus. He's just making it up as he goes along.
-russ
Well, it's true. (Score:1)
- You can't run a 64-processor SMP box on NT.
- You can't get a government B1 security rating on NT.
- You can't get high-availability failover on NT (although they're working on it)
- You can't get single-system-image clusters on NT. Heck, you can't get *any* clusters on NT.
- You can't have > 100,000 users on an NT box for very large networks.
Had they said, "Linux is good, but is still lacking features and lags behind Solaris and Tru64 Unix", I'm sure we all would have nodded and agreed. But NT??
Personally, I'm getting sick and tired of hearing people talk about Linux as inappropriate for an enterprise, and then talking about NT as an "enterprise-level OS". Sure, I'm all for criticizing Linux where it falls short. But let's have a little objectivity, OK?
(Note to Stu: No, I'm not flaming you. You're right, of course. But this "NT" thing really has me burned.)
Ouch. This is disinformation. (Score:2)
--
Journaled Filesystems (Score:2)
For instance, suppose an NT (or standard Unix) partition of, say, 500 gigs or larger were to go down hard. When the machine came back up, running a full fsck on it would take several hours... You'd be down for the better part of a day or two.
But with Veritas, it can recover quite quickly without having to do a full integrity check on the filesystem, and you're back up and running in only a handful of minutes.
As far as your other comment...
NTFS also allows you to extend a logical partition with additional space from another physical partition.
Mostly true, but why is NT in there? (Score:2)
When the article discusses SMP, they're talking about larger scale than say 4 processors. More like 32 or so. Obviously IBM leads the way with this, with Sun close behind.
Audit trails (Score:3)
Who opened a file, who ran a program, who wrote to a file, when this occurred, etc.
This is a pretty critical piece for many businesses in terms of security policy, etc.
Having a journaling filesystem available such as Veritas is also important, which is what some others assumed was being talked about.
I'm rather amazed at the number of comments calling this FUD when nobody seems to be quite sure what is being talked about. Of course this lack of understanding appears to have been encouraged by the initial poster.
The WSJ article was partially accurate (Score:2)
Linux does run on 2 or 4 processors. That's basically the limit based on presently available motherboard technology. What Linux doesn't do is scale like say AIX or IRIX does, where you can run on dozens, hundreds or thousands of processors. I think this will be partially addressed in the coming months as the big iron vendors start adopting Linux as part of their road map. They will have to modify the kernel to work on a switch based architecture rather than a bus based architecture, which will take some time.
Again this is partially true. The contents of
I think if the Linux community decides they want to address these concerns then it will take much less time. Linux moves fast on things the community wants.
I would have to think this is true, though it didn't have anything to do with the scalability or logging concerns. NT has application support which at the moment Linux lacks. I think the ranking conclusion was misleadingly tied in with the above statements. It may or may not have been done on purpose.
Anyway, somebody should post a response (a well-written response, not "you're all morons and will be the first up against the wall when the revolution comes!") that does a fair dissection of the article.
Please Clarify, Mr. Iams (Score:1)
Just use 'fight training rules'. When one person encounters enemies, you count like this: one, two, lots, many, run away!
Therefore, many=four.
(ok.. so it's silly, but this discussion needed some humor).
See, newspapers suck. Journalism sucks in general! (Score:1)
I wouldn't trust anybody who has spent the last 5 years with Windows, tried to install Linux once, failed at the first attempt and now tries to tell everyone his truth.
I remember having quite a hard time when I installed NT for the first time (years ago). "Easy", "user friendly", "intuitive", they are all subjective impressions.
FUD with FUD (Score:1)
Did they really? (Score:1)
Wow.
They're wasting their time, they should be in fortunetelling or stocks...
Fair 'nuff (Score:1)
b) Linux runs SMP beautifully (2.1.x and above). I worked on a dual system in school last year, and my friend has one. It's very clean. And it's amusing to watch Linux switch tasks back and forth between processors when it's not doing anything, looking for better load balancing. Alan Cox says Linux runs equally beautifully on up to 16 or 32 processors, and I trust him.
c) Linux does _not_ have a journalling FS that tracks everything that happens on the disk, guaranteeing recovery of lost data, say after a hardware crash. NT doesn't have one either. Some big commercial UNICES do (AIX, what else?). I believe Linus said it's a priority for Linux 2.3.x, so maybe _maybe_ an initial implementation by the end of this year.
Fight fire with fire? (Score:1)
/proc (Score:1)
It does of course depend on the driver a bit. But when a win95, nt or novell box gives you hardware suspicions, a linux rescue disk can make your life much easier!
Audit trails - Questions (Score:1)
A year or two ago I wanted to monitor access to a file on MP-RAS (NCR Unix). I needed to know who was reading the file and when. I was told by everyone I asked that it couldn't be done on Unix but it would be easy on a mainframe. (That advice didn't help much since I wasn't on a mainframe)
So, if I wanted to put surveillance on any old file on my Linux box to get a log of who read that file, could I do it? If someone just reads a file without modifying it (using more, for example) does that get logged somewhere?
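For what it's worth, the closest thing stock Unix offered for this was the file's access time: poll st_atime and note when it changes. That tells you *when* the file was read, not *who* read it, and it fails silently on noatime mounts - real auditing needs kernel support, which is exactly the gap being discussed. A rough sketch (the os.utime call below simulates a reader touching the file, since whether an actual read bumps atime depends on mount options):

```python
import os
import tempfile

# Crude read-detection by polling the access timestamp.
def was_read_since(path, last_atime):
    return os.stat(path).st_atime > last_atime

fd, path = tempfile.mkstemp()
os.write(fd, b"secret\n")
os.close(fd)

baseline = os.stat(path).st_atime
# Simulate a later read by pushing atime forward, as a reader would:
os.utime(path, (baseline + 60, os.stat(path).st_mtime))
print(was_read_since(path, baseline))  # True
```

So: detectable in principle, but nothing like the per-user audit trail a mainframe (or VMS) gives you.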
A journaling filesystem (Score:2)
Journaling is frequently provided by high-end database programs, which grab an entire partition at a time and write to it however they see fit.
Providing it at the filesystem level helps ensure reliability of all data, not just data stored in high-end databases.
I recall someone was working on a journaling filesystem for Linux - a French graduate student, if memory serves - but I can't find the webpage anymore. Last I checked was a few months ago and it was still pre-beta.
Jamie McCarthy
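The write-ahead idea behind a journaling filesystem can be sketched in a few lines. Everything below is a toy model (a dict standing in for the disk, a list standing in for the on-disk journal), not any real filesystem's format:

```python
# Write-ahead logging in miniature: record the intent in an append-only
# journal, do the real write, then mark the entry committed. After a
# crash, recovery just replays the small journal instead of checking
# the whole disk (which is what makes it so much faster than fsck).
disk = {}
journal = []  # entries: [op, key, value, committed?]

def journaled_write(key, value):
    entry = ["write", key, value, False]
    journal.append(entry)   # 1. log the intent first
    disk[key] = value       # 2. perform the real write
    entry[3] = True         # 3. mark the entry committed

def recover():
    """Replay any entry that was logged but never marked committed."""
    for op, key, value, committed in journal:
        if op == "write" and not committed:
            disk[key] = value

journaled_write("inode:7", "hello")
# Simulate a crash between steps 1 and 2 of a second write:
journal.append(["write", "inode:8", "world", False])
recover()
print(disk)  # {'inode:7': 'hello', 'inode:8': 'world'}
```

Real implementations journal metadata (and sometimes data) blocks with checksums and ordering guarantees, but the recover-by-replay shape is the same.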
So what? (Score:1)
what about 2.2's process accounting? (Score:1)
Of course, he was not using kernel 2.2. If he wasn't going to judge Linux based on the latest kernels, why review it at all? No one uses Linux "out of the box" if their needs would be better met by compiling a custom, latest kernel. It's a whole different mentality from the commercial world, and it didn't seem like he understood this.
That said, who in the fuck would pay $1000 for this nonsense? If they weren't willing to even upgrade kernels, one wonders how much work went into the testing... it smacks of "we're too lazy to do any testing, so we'll compile a bunch of info you already know and charge you $1000 for it". What a joke. I need to start my own fudly Gartner Group clone.
Good lord... (Score:1)
Christopher A. Bohn
Read the report before making judgements. (Score:1)
DH Brown (Score:1)
HERE IS WHERE YOU CAN READ THE REPORT, YOU BOZOS!! (Score:1)
Now, instead of making the FUD statements, etc., why don't you take a look for yourself? Several things, from what I can tell, were taken way out of context in the WSJ article. Go take a look and then talk.
Read the report before making judgements. (Score:1)
NOT a paid promotional company. (Score:1)
Don't start slandering a company you know nothing about. It can get you in trouble. Why don't you do a search for a report the author did comparing AS/400 to NT a couple of years back and you will see there is no funding done by M$.
NOT a paid promotional company. (Score:1)
A reply to D.H. Brown Associates (Score:1)
A reply to D.H. Brown Associates (Score:1)
selling my "expertise" to industry vendors. I keep my identity private for
a reason).
Actually, I *DID* read the report, hot shot. And I feel given their
methodology, the report was fair and valid. No, they didn't do in-house
testing. Why should they? What is to be gained by that? (They also probably
cannot afford to buy an AlphaServer, Sun Enterprise server, SGI Origin box,
and an RS/6000, but that's another issue...) PC Week and Info World labs do
it for everyone to read. I don't believe you will find any kind of "score
card summary" from anyone else. And considering the work Mr. Iams has done
in the past (which I doubt you have any knowledge of) I consider this to be
an excellent feature-for-feature comparison. You can argue till you're blue in the face about whether or not you agree. But I simply believe that this
is a credible study and report. What it appears to me is that, like many
others on here, you can't accept the fact that Linux isn't the be-all-end-all of OSes. I certainly do not discount its power. But let's
face the facts that it still does NOT compare to AIX and many other Unices
that have been developed over the past 30-whatever years. I certainly
expect it to be an "enterprise-class" OS in the foreseeable future, but you
sure as hell don't see anyone running a Linux box to handle, say, an HMO's
data center. Do you? I'd call that "enterprise class."
Now, you criticize DHBA for using industry standard (read: industry
*recognized* benchmarks that have been developed over a long period of
time...) benchmarks to gauge performance. Well, what do you expect them to
do? Spend money and resources they most likely do not have to come up with their own performance benchmarks? I'd like to see _you_ do that. Even if
they did, you'd jump on them because they were using something that no one
else in the industry recognizes as a standard! (FYI, it usually takes about
3 years of intense debate and testing by a consortium of vendors and
programmers to certify a benchmark such a SPECweb96).
Also, I don't see why you think this firm is putting down Linux and trying
to defend some sort of status quo. The study simply points out what the OS
currently lacks and where development efforts need to be focused. They
point out the plusses AND minuses of EACH OS. As someone else stated, this
study can serve as a form of roadmap for Linux.
Anyway, I am done. Enough time wasted on this topic...
What about NT? (Score:2)
Solaris is designed to run on up to 64 processors (an E10000 Starfire), each of up to 400MHz. It's only really IRIX and Cray stuff that goes over 64 processors, although you could argue that Beowulf can go that high, but that's clustering, not SMP.
Journaling would be good, as it would dramatically improve the reliability of Linux after system crashes (rare under Linux) or other hardware failures (there isn't a lot you can do if the power supply dies). Bear in mind, however, that Solaris has only just put this in by default; before, you had to pay for things like Solstice DiskSuite to get a journaling filesystem, so Linux doesn't lag that far behind.
--
It's FUD because it's written so ambiguously (Score:1)
I think 'logging' is marketese for 'Auditing' (Score:1)
In which case, the complaint would be justified.
Solaris has auditing and ACLs for 'Trusted Solaris', though I have not (yet) used it. This is how one obtains B1 security for Solaris.
VMS has done auditing correctly for a long time. It has made my job much, much easier.
Click here for Security Event Classes [digital.com] that can be audited/alarmed. (It wouldn't hurt to read the whole security auditing section. The ANALYZE/AUDIT tool is very nice.)
It is also good to have hardware-level events logged. A couple of years ago I had a VAX crash. I simply did ANALYZE/ERROR/SINCE=TODAY and found out I had a SIMM that was having ECC/parity errors. Since it logged the bank of memory with the failure I knew exactly which SIMM. I called the DEC service guy, he came out, and we switched it out during lunch. No one noticed.
Yes, Linux could use these things.
Feature Dearth & bad timing (Score:1)
The features are poorly tested and weighted badly - it's all very well saying something has a feature, but if it is shite then it is of little use.
NT has many applications and a variety are included (when you pay big bucks) - unfortunately they are usually unstable due to the nature of Win32 and the instability of the NT kernel.
The research weighed heavily in favour of what NT had and against what it hadn't, and made it worse by not testing or benchmarking those features.
Linux should have come in at least equal or even ahead of NT - adding SP4 and the extras is no different from adding the 2.2 kernel (my experience leads me to believe the latter is easier and quicker).
The other problem is bad timing - the research preceded many important events, such as SAP and other major enterprise application vendors not only supporting Linux but supporting it with SMP and other enterprise features.
The research also preceded the 2.2 kernel release, which is a shame.
One project here. (Score:1)
This was discussed yesterday... (Score:2)
The consensus was that the article was talking about "many" processors meaning more than two or four... does Linux run on 128 processors?
In terms of "logging" people felt that this was referring to a journaling filesystem. Granted, I don't know what this means. :-)
I'll admit, though, that the way it's written, it certainly does look a bit like FUD.
My impression of this... (Score:1)
There's nothing stopping something like this being a compile option or a patch set, etc. Though I don't know anyone who's run into this 'limitation' yet either.
Two words! (Score:1)
monkies at MS (Score:1)
Well, why can't they be? After all, that's how MS puts out its code: an infinite number of monkeys pounding out source.
The rest of the article isn't even worth flaming, it's so wrong.
---------------------------------------
The art of flying is throwing yourself at the ground...
Diet FUD (Score:1)
I'm really surprised a report like this could be posted on a non-biased (ahem) media outlet like MSNBC. Or has the MS part of MSNBC finally eaten up all the good reporting? This is seriously sick - along the lines of claiming Linux doesn't support SCSI, Ethernet, and kitchen-sink networking.
--
Diet FUD -- WSJ/MSNBC (Score:1)
--
WOW, being compared in the High End Realm!!! (Score:1)
criteria! This is *good* news. So we're not doing so well against supercomputers! Hooray! It sounds like we beat Windows a long time ago.
Wanna play with FJS? (Score:1)
Why doesn't he just put the working code out there in a CVS repository? What's all this "waiting for a preliminary release"?
I'm scratching my head as I walk through this bazaar...
My impression of this... (Score:4)
I/O channels are CPUs dedicated to supervising I/O operations: if you need to read in 100 blocks from disk, you build a list of the blocks along with their addresses and start the channel. The transfer is done without the main CPU's involvement, and you receive an interrupt when it is complete.
This really pays off when you include block-mode terminals, like 3270s. A 3270 contains a screen buffer. When you are working in a text editor like VM XEDIT, everything you type is stored in the terminal until you press return or a function key. At that point, the terminal transmits a list of all the screen fields that have been changed. If you were running vi on a Unix system, you would be peppering the computer with console interrupts at each keystroke - more if you are running X. This is how mainframes can efficiently support 1000+ online terminal users.
Mainframe scheduling algorithms are specifically designed to separate the workload into interactive and non-interactive users. If the scheduler decides that you are an interactive user, you get small timeslices and more of them. When you start your big program and it goes CPU-bound, the scheduler notices that you are using your full timeslice and quickly moves you into a different queue, so that, for instance, after a while your program receives a timeslice that is 16 times as long, but only receives it 1/16 as often as an interactive user. So your background process lurches along, but you don't notice, because the instant it starts doing I/O to the terminal, it becomes an interactive process again.
These sorts of tricks are what keep mainframes from appearing to be "bogged down" even when their resources are massively overcommitted.
These algorithms have been fine-tuned for about 30 years, and are specifically designed to best utilize block-mode I/O devices and large numbers of interactive users attached to boring 3270 terminals. It's a VERY different workload than you'd find on a Unix system, and the two workloads don't compare well.
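The feedback-queue behavior described above is easy to sketch. This is a toy illustration in Python (the class and method names are invented; real mainframe schedulers are far more elaborate):

```python
# Toy multilevel-feedback scheduler: a process that burns its whole
# timeslice is demoted to a queue with longer but rarer slices; one that
# does terminal I/O is promoted straight back to the interactive queue.
class Process:
    def __init__(self, name):
        self.name = name
        self.level = 0          # 0 = interactive, higher = more batch-like

class Scheduler:
    MAX_LEVEL = 4               # at level 4, slices are 2**4 = 16x as long

    def ran_full_slice(self, p):
        # CPU-bound behavior: demote (longer slice, scheduled less often).
        p.level = min(p.level + 1, self.MAX_LEVEL)

    def did_terminal_io(self, p):
        # Interactive behavior: instantly back to the interactive queue.
        p.level = 0

    def timeslice(self, p, base=10):
        # Each demotion doubles the slice length (16x at the bottom queue).
        return base * (2 ** p.level)
```

The payoff is the one described above: interactive users see short, frequent slices, while a CPU-bound job settles into long, infrequent ones without anyone having to classify it by hand.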
In fact, one of the biggest problems with mainframes is running TCP/IP efficiently, because TCP/IP *does* pepper the system with interrupts.
- jms
NT Router? (Score:2)
NT can act as a router, but does not do so by default. Setup is similar to Unix, with a ROUTE command. (Or MS has a no-cost routing add-on with a GUI.)
NT does broadcast all the time, which seems to be a characteristic of the NetBIOS-over-TCP/IP protocol. Is OS/2 or Samba any better in this regard? I need to find someone with a network monitor to check.
--
What about NT? (Score:3)
As for the journaling file system, I think that's on its way, though I don't know for sure.
Mostly true, but why is NT in there? (Score:3)
As far as SMP systems, ask VA research how their 8-CPU Xeon system runs. Care to comment, Chris?
As for terabyte files, try an Alpha, or any other 64-bit platform. I'm fairly sure that my Alpha could do terabyte files, if I only had the hard drive for it...
Now, can NT do 6-8 CPUs worth a damn? I'm fairly certain that NT can't do 64+ CPUs worth anything. And does it have a journaling file system worth mentioning? I've never really dealt with journaling file systems. Does anyone know (btw, worth a damn/worth mentioning means on the same sort of caliber as Solaris/Irix/Aix/etc. can do it)?
I wrote to DH Brown, and here's their reply (Score:2)
-----Original Message-----
From: dhbrown.com
Sent: Tuesday, April 06, 1999 1:02 PM
To: tcooper
Subject: Re: Feedback
Tom,
The report carefully defines its terms and conclusions,
summarized briefly in a free summary on our website at
www.dhbrown.com.
The news report you cited is indeed somewhat inaccurate
(it is a paraphrase rather than a quote). Linux
does "run simultaneously on many processors" if
many equals, say 4 to 14. Our report however does
also look at systems that have proven performance in
both production and industry-standard benchmarks up
to 64-way SMP. Linux has not yet reached that level.
In fact, we were mildly shocked that, despite reading
various kernel lists and talking to various Linux
vendors, there is currently no really good publicly
verifyable benchmark evidence of Linux's scalability
even on 2-way or 4-way workloads (although I would be
surprised if we didn't see some in 6 months). It boots
on a 14-way as David Miller demonstrates, but the
scalability claims are still a little in the air.
I wouldn't make a claim that Linux doesn't scale.
I would make the claim that Linux advocates have yet
to reasonably demonstrate that it does.
"Keeping a log" is a similar over-simplification. I believe
the reference was to "event management" facilities just
now becoming available in UNIX where all the system
logs can be accessed from a single console in a consistent
manner. Managing various logs in UNIX has long been
more painful than it needs to be.
Thanks for your respectful feedback.
Regards,
Greg Weiss
Research Analyst, Systems Software
D.H. Brown Associates
tcooper on 04/06/99 11:41:00 AM
To: DHBA Systemsw/DHBA
cc: tom_cooper@bigfoot.com
Subject: Feedback
According to http://www.msnbc.com/news/256197.asp Your study claims
that
"Linux currently lacks some of the features demanded by corporations
that intend to run their entire business on computers. Among them are
the ability to run simultaneously on many processors in a single
computer and to keep a log of what the computer has done."
What sources can you cite for this assertion? Linux is multi-processor
scalable, and does provide logs that are at least as detailed as
anything that you can retrieve from an NT box.
Respectfully,
Tom Cooper
My impression of this... (Score:2)
MSNBC, Your One Source for Biased Communications. (Score:3)
Slashdot does not pretend to be a journalistic site that provides impartial reporting. The opinions you see on slashdot are very obviously those of individuals. MSNBC purports to provide unbiased information. That is the distinction. If a Time/Warner media outlet is biased towards Windows NT, we might be able to effect a change in that, but if an MS-owned media outlet is biased, there is little likelihood of change.
Audit trails (Score:3)
Since most people who administer commercial Unix boxen don't enable them, many people don't even realize that some systems have rather extensive logging mechanisms. The best out there is the Sun Basic Security Module (BSM) audit facility. It generates lots of logs, and sure, it takes a fair bit of resources, not to mention disk space, but it allows you to run fairly sophisticated host-based intrusion detection systems and very good post-mortems!
Because of Linux's open nature, it would be very useful to have a verbose audit trail mechanism. This would allow security researchers like myself to base new systems on Linux more easily. (and yes, it would most likely end up being GPL'd!)
A journaling file system would be super neato as well. From a security standpoint, one could get a much better idea of what an attacker did to the system files even without running tripwire.
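To make the idea concrete, here is a minimal sketch of what a structured audit trail and a post-mortem query over it might look like. It is loosely in the spirit of facilities like BSM, but the record fields and function names are invented for illustration, not BSM's actual format:

```python
# Sketch of an append-only audit trail: one JSON record per line, plus a
# simple post-mortem query. Field names here are illustrative only.
import json, time

def audit_record(event, subject, outcome, **details):
    """Build one structured audit record as a JSON line."""
    return json.dumps({
        "time": time.time(),     # when the event occurred
        "event": event,          # e.g. "open", "exec", "login"
        "subject": subject,      # who did it (uid/username)
        "outcome": outcome,      # "success" or "failure"
        "details": details,      # event-specific attributes
    })

def failed_logins(log_lines):
    """Post-mortem query: pick out failed login attempts from a trail."""
    records = (json.loads(line) for line in log_lines)
    return [r for r in records
            if r["event"] == "login" and r["outcome"] == "failure"]
```

The point of the structured format is exactly the post-mortem use case above: once every event carries a subject and an outcome, intrusion detection and forensics become queries rather than grep archaeology.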
In closing, these are somewhat advanced features and there is no reason why they can not be added to Linux. I believe most of the commercial Unices had them added to an existing system as well. Well, Nuff said.
My impression of this... (Score:4)
I think the writer's opinion seems somewhat biased (surprise, surprise) but he brings out some reasonable questions: just how far can a single Linux box scale, and for what tasks?
Now we should get out there and improve all the little things that need improvement to help Linux, and *nixes in general, reach these entrenched Heavy Hardware markets.
Complete aside: I believe the reason mainframes need massive CPU power has nothing to do with capacity and everything to do with the tremendous overhead of the monitoring, accounting, tracking, logging, and process/resource management features of most Real OSes (e.g. OS/360).
Clarification from D.H. Brown (Score:2)
The news report you cited is indeed somewhat inaccurate (it is a paraphrase rather than a quote). Linux does "run simultaneously on many processors" if many equals, say 4 to 14. (me: but they are also looking at up to 64 SMP, and linux hasn't reached that level)
In fact, we were mildly shocked that, despite reading various kernel lists and talking to various Linux vendors, there is currently no really good publicly verifyable benchmark evidence of Linux's scalability even on 2-way or 4-way workloads (although I would be surprised if we didn't see some in 6 months)
I wouldn't make a claim that Linux doesn't scale. I would make the claim that Linux advocates have yet to reasonably demonstrate that it does.
"Keeping a log" is a similar over-simplification. I believe the reference was to "event management" facilities
So, if you have Linux running on an SMP system, PLEASE mail Greg Weiss (grweiss@dhbrown.com) [mailto] and tell him!
The report is only to be expected (Score:2)
Considering the amount of hype Linux has received in the past few months, it is only to be expected that a report such as this would be forthcoming. I think it is important to bear in mind that while the report does not truly compare like for like, it is probably necessary that a report such as this _is_ produced before the general populace start expecting too much of the Linux community, and when they are disappointed, turn away from you never to return.
The report does bring to light a number of reasons why I and the other sysadmins I know have generally steered clear of Linux in favour of *BSD and commercial UNIX systems - and, when the occasion has demanded it, Windows NT. I know this may seem like blasphemy to many readers, but corporate necessity wins over any prejudices or principles.
Pricing: I don't think anyone will argue about this. Linux is, in fact, cheaper than all the others.
Scalability: Linux is not as scalable as operating systems such as Solaris and HP/UX because it was never designed to be so. It was originally designed for an x86 platform, and has only relatively recently emerged as a contender in the mid-range server market. Thus it is to be expected that it is perhaps not quite as scalable as its commercially-available counterparts. I doubt that anyone would seriously care to dispute this.
Reliability, availability, serviceability: I believe the same holds to be true. Linux was originally designed as a home hacker's system, not as a mission-critical server platform. While great strides have been made in this area as Intel and other x86 systems have become bigger and better - and thus thrust the PC into the low-end server arena - there is still a long way to go. Beowulf (not investigated in the report) is to my knowledge the only Linux clustering solution currently available. SMP resource management is still rather limited.
Here the report admits to a lack of hard evidence about system stability, at least in terms of mean time between system failures. To quote the report verbatim, "anecdotal evidence abounds." (I've heard of Linux systems whose uptime has exceeded a year, though in my rather limited experience with Linux I have yet to witness this. I've seen uptimes of about six months with FreeBSD, and a maximum of about three weeks under Windows NT. The HP9000 I'm logged into only has a current uptime of four days, but since it's a development machine that doesn't mean a lot.)
Internet functions: the only gripe here is about availability of good commercial e-commerce applications for Linux - something of which the readership here is only too aware, I hope!
Distributed Enterprise Services: likewise.
System Management: The report has quite a few good things to say about linuxconf, and does list a number of shortcomings in the system management tools of alternative OSes, so I don't see a problem here.
PC Client Support: Samba gets a mention (which is good), but again the gripe here is lack of good commercial software for Linux.
So, in short, the report doesn't really slag off Linux all that badly. The general tone of the report, if you ask me, seems to be that Linux is getting there, but for any serious large-scale server applications, it is an idea whose time has not yet come. And I for one am inclined to agree. It doesn't yet have the scalability and resilience of those operating systems designed for high-end servers (notwithstanding that the operating system itself may be more stable -- but what good is a stable OS if your data is lost on the day when it _does_ crash?) and it doesn't yet have the commercial software to make a good system a great system.
So, in short, I don't believe the report is saying that Linux is a bad system. It's also not telling us anything new. It _is_ a viable system, for the low-end server market; and it's a damn sight cheaper than anything else out there. The WSJ editorialized it to death. The report itself is quite reasonable.
"If it's a bad idea, trash it. If it's a good idea, steal it and release the source code."