

D.H. Brown Associates Attacks Linux
Scott Stevenson writes "A News.com article describes a study which dings Linux for poor SMP support, access to only 1 or 2GB of memory, and lack of clustering. Of course, it doesn't say anything about NT's uptime issues on a per-machine basis." The article also says that hard data isn't available on Linux being reasonably crash-proof.
2 GB of memory? (Score:1)
So Linux can only address 2 gigabytes of memory? I haven't seen the source code for the memory management, but I don't see any reason why Linux shouldn't be able to address 4 gigabytes of RAM, just like WinNT allegedly can. As for the 4-gigabyte limit, that's NOT Linux's fault: 32-bit Intel processors can only directly address 2^32 bytes (4 gigabytes) of physical RAM. Of course, through segmentation they can address up to 64 terabytes of virtual memory.
Therefore, who knows what memory wonders Linux could be capable of on other processors?
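A quick way to see the ceiling being argued about here is just to check pointer width; a minimal C sketch, nothing Linux-specific (the numbers are whatever your compiler gives you):

#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned bits = (unsigned)(sizeof(void *) * 8);   /* pointer width in bits */
    double gb = ldexp(1.0, (int)bits) / (1024.0 * 1024.0 * 1024.0);

    printf("pointer width: %u bits\n", bits);
    printf("flat address space: 2^%u bytes (about %.0f GB)\n", bits, gb);
    return 0;
}

On a 32-bit x86 build this reports a 4 GB flat address space; build the same thing on an Alpha or UltraSPARC and the ceiling is astronomically higher.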
Apple is a bunch of Liars (Score:1)
As for using Sun servers -- Apple has no truly high-availability solution of its own, which is crucial for a website getting as many hits as theirs does. Furthermore, they have what amounts to years of work behind their Sun solution -- it isn't trivial to ditch an existing solution and start using a new one. And what web server are they going to use for it? Apple may have a strong base OS, but they don't have a corresponding web server. Apache for MacOS X hasn't had the testing time that's necessary before deployment.
Remember when Microsoft jumped from using BSD for their web servers to NT overnight? It was a nightmare for them because of the amount of bad publicity while they worked out the details of the system. (Tuning, handling a distributed environment with all new scripts, etc.) I'd fear for Apple if they pulled a lame move like that...
UltraPenguin supports 2TB memory and 64 processors (Score:1)
SMP "analysis" (Score:1)
2 GB of memory? (Score:1)
Correct me if I'm wrong, but isn't it true that Windows NT is limited to 2GB of RAM and 2GB of virtual memory? This was a question I used to ask during technical interviews for NT administrators.
Todd
Every 45 seconds, another arrest for Linux. 695000 last year. It's time for a change.
Not all that bad (Score:1)
All this means is that Linux is not ready to step up and take on the work of big-iron IBM/HP/Sun machines. Neither is NT (but people attempt it anyway). Even if Linux were technically able to take on the big iron, there is no service organization in place to back it up.
1. Who out there is offering Linux downtime (uptime) guarantees, i.e. that the system won't be down more than X hours/year? A: Nobody.
2. Can you contractually get a Linux service technician on your site within an hour of having problems, 24 hours a day, 7 days a week? A: No. You may find individual consultants willing to take this on, but what if they're not available when needed?
This doesn't mean that Linux has no place in the enterprise. There's a lot it can do that doesn't require the level of technology and service needed for "big iron". BTW, were they testing 2.0 or 2.2 SMP?
The Straw Man Fallacy (Score:1)
They argue that Linux isn't suitable as a replacement for massively scalable systems.
But few, if any, are claiming that it is.
Linux is primarily catching on as a workgroup, or light-to-medium duty web server, and is very well suited for such a role; better suited, in fact, than Windows NT.
By changing the subject, and supplying a convenient straw man to knock down, D.H. Brown avoids having to confront Linux's strengths.
If people decide they have a need for a massively-scalable Linux, it will come to pass. For the time being, to argue against Linux on this basis is intellectually dishonest at best, and out-and-out FUD at worst.
Cut the BS. Admit that Linux has weaknesses. (Score:1)
Linux does SMP approximately as well as Windows NT, as of the 2.2 kernels. That's nothing to write home about, compared to Irix or Solaris.
Linux distributions do not yet conform to a base standard filesystem or level of library functionality (LSB standards). That's bad for interoperability. One hopes it will be fixed.
All of these can be seen as weaknesses. Or, if you are so inclined, they can be seen as goals.
About the only thing the above are NOT is evidence of Linux being mature. Note that the BSDs are no different (AFAIK) in this respect, save for the standards issue (only one distribution of each).
Whether DH Brown was being misled or just didn't dig deep enough, they should not have said that NT is a contender. But the fact is that Linux isn't either, if you want high-end scalability *today*.
Huh? (Score:1)
Mixed report (Score:1)
The reported limit of 2GB for Linux is absolutely correct, as is the lack of a journaling filesystem.
The SMP issues are a lot more hazy than that. It is true that there are few benchmarks at the moment, but nearly everyone who actually tries it has found Linux to be better than NT.
As for stability, Linux wins hands down (benchmarks or not). All you need to do to prove that is run one of each system and see how many times you reboot. Personally, I've found Linux to be more stable than NT or Solaris (in my last job, we replaced Solaris with Linux-SPARC and cut reboots to a third).
Fortunately, the shortcomings are being addressed. The ext3 filesystem (already under development) will have support for journaling and ACLs. The memory issue is under discussion now, to be tackled in 2.3. Several HA projects are underway. In reality, Linux supports it now; it just doesn't have a hand-holding setup script for it.
Lies, Damn Lies, and Statistics (Score:1)
1) Linux doesn't do SMP well.
What version of Linux are they testing? 2.0? 2.2? When it comes to SMP, there's a VAST difference.
2) Linux only supports up to 2GB of RAM.
In its current form, yes, Linux only supports 2GB of RAM. However, I'm willing to bet my left maple nut that 4GB is on the TODO list for 2.3.
Beyond that, the article was mostly rhetoric and jargon designed to confuse and mislead people.
Arrgh! (Score:1)
That's why I think it's irresponsible for people like D. H. Brown to make statements about Linux vs. NT's reliability strictly based on feature lists. The conclusion of the report is "Linux is less reliable than NT", and that's all most suits are going to see. I'm sure I'm going to see this "study" quoted as evidence that a single NT box is more reliable than a single Linux box, something that is patently absurd.
I'm not even convinced that a HA cluster of NT boxes is going to be more reliable than a single Linux box; I've seen too many examples of HA-enabled systems on NT (such as domain control and WINS name services) having lower reliability due to global failures. (For example, a corrupt WINS database becomes no less corrupt when it's replicated.)
I would much rather have them talk about real high-end Unix's HA features as well as its high stability, NT's HA features and its low stability, and Linux's reputed high stability (anecdotal evidence is evidence, even if it's not that strong) and its lack of HA features. That, at least, would be honest, and would give people like me a leg to stand on when fighting the suits on NT deployment.
2 GB of memory? (Score:1)
In my experience 2 gigs under NT is a waste of money. This is especially true when you're running a file server. It may be able to address it, but it sure as hell doesn't do useful caching with it.
Re: Pure BS!!! (Score:1)
At CNET [mailto] or at DHBA [mailto], take your pick.
--
Captain Bitchery has a go (Score:1)
(sorry, this is a BIG bee in my bonnet)
We have a Tolken Ringpiece network with OS pooh Lan Server, soon to be replaced by NT.
At our site, where the NT "upgrade" to the network has been implemented, nothing works.
No one can log on to the network.
Never mind, NT is the "next big thing".
My point? Don't spend all day in the pub and go on slashdot.....
But seriously, what planet are these people on, touting NT's "reliability"?
The ATM at one of the bank branches here runs NT and it crashed a couple of weekends ago. I took pictures.
Anti-FUD... (Score:1)
The *only* comparison they make is the amount of memory Linux and NT can use. Every other point in the article is "linux sucks", but no specific superior alternative is named.
about 3.5 GB in 2.0.37pre (Score:1)
is supported (and I believe work is being done to support the kludgy stuff that lets Xeons do up to 64GB in as sane a way as possible).
Oh, and I believe Linux has been successfully booted on a 64-CPU UltraSPARC. That's certainly more than NT will do for a very long time.
I know, using them efficiently is a very different matter, but what I do know is that people are using multiple 6x PPro Linux systems happily in heavy production use. It might not be optimal with > 4 CPUs (yet), but well, NT isn't either.
Welcome To The Animal Farm (Score:1)
No. We can accept criticism. But the people who are criticizing need to know what they're talking about.
Though it's only a summary, with the associated lack of hard data, there are several holes and errors in there.
They're addressing issues that either are already being worked upon, or are issues of the hardware platform being used, and not the software.
Shoehorn Solaris onto SMP x86 and we'll see how well IT handles high end computing tasks.
They're comparing several OSes on myriad, completely different hardware platforms. That's comparing apples and oranges, the last time I checked.
Chas - The one, the only.
THANK GOD!!!
Linus/DH Brown/Microsoft debate (Score:1)
You want benchmarks? Then why don't you whining babies download some benchmark toolkits (I believe you can get your hands on the code for SPECweb96, etc.) and have someone maintain a credible repository for the results. Once you guys can demonstrate real-world performance, it will be easier to match up against the bigger guns. Then you can brag a little more.
I am sure Tony Iams' panel discussion with Linus is going to be great. Tony is incredibly knowledgeable when it comes to operating systems (believe me, he tests them all extensively over and over again right in his office). I think they still have an AS/400 vs. NT report where he SLAMMED NT's capabilities. Funny thing is, M$ takes his analysis VERY seriously. I know this report was NOT an intentional ding on Linux. I know some of you have a hard time with real criticism, but that's the truth. I'd take it as more of a challenge to improve upon what Linux is now, and use it as a roadmap.
Why don't we make Linux better? (Score:1)
The facts... correction (Score:1)
The reason for this is that all physical memory is constantly mapped by the kernel and therefore can instantly be accessed without modifying the page table. Mapping it only on demand would allow more physical memory, but the consensus on the linux-kernel mailing list was that this would suck performance-wise.
It may be possible in the future to use additional (unmapped) physical memory as a RAM disk (which can be used as high-priority swap).
All this is not an issue for 64-bit platforms, as the virtual address space for these is really vast. Above 4GB, though, the problem arises that PCI cards without 64-bit addressing won't be able to DMA into/from memory above 4G. The solution is to use memory from below 4G (just as with ISA DMA from below 16M), but there is no kernel support for that yet.
intel? (Score:1)
2 GB? I thought it was 8GB (Score:1)
2 GB? I thought it was 8GB (Score:1)
Andrew
--
2 GB of memory? (Score:1)
2 GB of memory? (Attributed to wrong OS) (Score:1)
Anyone who's done a significant amount of Win32 programming knows about Windows' "4GB limitation". It seems that our dear researchers have, yet again, listened to marketroids as opposed to doing Gasp! "actual research".
The Win32 memory architecture has a 4-gigabyte "address space", which would seem to hint that an application can use 4 gigabytes of memory. Well, it could, except that the upper 2 gigabytes are "reserved for system use" and the lower (I think) 4 megabytes are typically not used, as applications are typically loaded at the 4-megabyte mark in their own personal address space. The load address is passed to WinMain() (the Windows application entry point, as opposed to main() on Unix) as the "application instance". It's normally 0x00400000, or four megabytes.
But no, as an application, you can't access more than two gigabytes. That's all that's available in user space. Windows memory management does some significant voodoo in the background, but if the system has a true four-gigabyte virtual playground, why can't an arbitrary application see it?
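If anyone wants to check this rather than take the marketroids' word for it, the Win32 API will report the user-mode address range itself; a minimal sketch using the standard GetSystemInfo() call:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);    /* fills in, among other things, the user-mode address range */

    printf("lowest application address:  %p\n", si.lpMinimumApplicationAddress);
    printf("highest application address: %p\n", si.lpMaximumApplicationAddress);
    return 0;
}

On a stock NT 4.0 box the upper figure should come out just shy of 2 GB, which is the whole point: 4 GB of address space, roughly 2 GB of it yours.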
Funny how they never said why Linux can only access 1GB "under certain conditions". Maybe when you only have one gig installed with no swap? :) Seriously, though. What about Linux on an Alpha or UltraSPARC or MIPS R4000 or higher? Still a 2 gig limit? I don't think so.
I'm really beginning to hate the media and the web in general. Too many clueless bastards out there know how to type.
The following sentence is true.
The previous sentence is false.
No, it's a bunch of lies (Score:1)
I would assume that 64 bit processors have a similar arrangement... ie 2^63 for RAM and 2^63 for hardware addresses.
TUXEDO is coming for Linux (Score:1)
I was at the BEA users conference in February, and at the advanced topics in TUXEDO session, the PM for that product announced that they were releasing TUXEDO for Linux sometime this summer. In fact, they are considering going one step further: giving away a single-user SDK license along with allowing you to download TUXEDO off of their website. This will let developers work on TUXEDO on their home machines, although load testing on such a setup is out.
They started looking at this after buying Tengah and seeing that allowing the download of WebLogic generated large amounts of interest. I for one was quite excited to hear the news; I can't wait to grab it!
This brings up important issues (Score:1)
Venting some steam about this Brown thing (Score:1)
Well, gee, if the purpose of this study wasn't to gather hard data, what was it? For a thousand bucks, I'd expect a little actual research. My box has been up for 157 days, primarily because I am too lazy to install the new kernel. I wonder if any NT box has ever reached that.
LVM (Score:1)
I think there are better approaches to the same problem:
I like Sun's approach to LVM. In AIX, I have to deal with LVM whether I want to or not, and it really makes disk and system management significantly more complex.
BTW, the CMU group that did AFS has come out with CODA, a successor that also offers some neat new features like disconnected operation. It's worth looking into. They are concentrating on Linux and NT clients and servers.
(NOT) Lack of journaling filesystem an excellent point (Score:1)
The risk of data loss with JFS is not hypothetical: despite all the journalling, I have lost data on JFS volumes and whole JFS volumes even without hardware failures or sysadmin mistakes. On the whole, JFS doesn't look any more reliable to me than ext2, but it sure is a lot slower. Why do you want to pay a big overhead for each file system operation if you can simply run a simple, efficient fsck at boot time on the very rare occasion that the system wasn't shut down cleanly?
This is a particularly relevant question for AIX systems, because they are very stable. Left alone, AIX servers will run for months and years doing whatever they are doing. When they do go down, it's for hardware or software upgrades, which require some extended downtime anyway. Making the file system slow in order to save on an fsck under those circumstances is a bad tradeoff. And AIX machines boot horribly slowly anyway because of the way their SCSI subsystem is implemented: an AIX desktop workstation takes 8 minutes to boot, and large servers can take literally hours.
Data warehousing applications usually use databases anyway, and those go to the raw disk for best performance (DB2 on AIX does). Those databases do their own transactional updates.
Thinking that you can make individual nodes robust by twiddling with the file system is outdated mainframe thinking. The only robust, reliable way to safeguard your data is to use a distributed, redundant storage architecture. That way, you are protected against hardware and software failures. And you can concentrate on making the individual nodes fast and simple.
Venting some steam about this Brown thing (Score:1)
You can say NT is threaded, reentrant, modular, and has all the cool features you want, but when it comes down to it, it is not stable. And that is all that matters. Features are one thing that sells to business people; however, it is stability that makes a system great.
Linus/DH Brown/Microsoft debate (Score:1)
From what I've heard from a friend of mine who has a dual-CPU machine, the Linux 2.0.x kernels were on par with NT 4.0 for scalability. With the 2.2.x kernels, Linux scales noticeably better. I don't know anyone who personally has a quad-processor Linux box, but I've seen personally that NT 4.0 doesn't seem to get big improvements in performance going between dual and quad processors, and the anecdotal evidence I've seen is that Linux 2.0.x also scales better on four processors than NT 4.0, and that Linux 2.2.x is an even bigger improvement over 2.0.x on four processors than on two.
DH Brown complains that there aren't a lot of published comparative benchmarks for Linux. But they are supposed to be a research organization. Why didn't they test it themselves? Why does news.com echo such poorly supported criticism without question?
Instead they seem to increase the confusion by mixing the comparison between high-end *nixes that at this point are with little doubt more scalable than Linux (although Linux is gaining), with some of the feature comparisons (memory address size and journaled file system) of NT to imply that NT is also more scalable than Linux.
As many people have probably already noted, a journaling file system is already under development for Linux (I believe it is being written by Stephen Tweedie). From my experience with NT, I'd have to say that its 'journaling' file system is certainly not on par with AIX's JFS or Veritas on Solaris either. I've lost data on NTFS due to corruption. I've never lost data under ext2 except when I've had the whole hard drive fail. I've also seen NT take as long or longer doing 'file system checks' which theoretically shouldn't be necessary under a journaled file system. I certainly have never seen that happen with AIX's JFS or Veritas on Solaris.
As for the memory limitations, they just aren't a big deal for most applications. Linux is certainly on par with NT Server 4.0 here, and in all reality with a kernel recompile is on par with NT 4.0 Enterprise Edition. Frankly more than 2G of RAM (or 3G with a recompile) is about all that is realistically possible on x86 hardware. NT can't even take real advantage of 64-bit hardware yet, which is an area where Linux beats it hands down. Sure, the commercial *nixes beat Linux, but even NT 4.0 Enterprise Edition is an order of magnitude more expensive than even the most expensive Linux distribution without even comparing client licenses.
I wish I had an SMP machine so I could run and/or write the benchmarks myself.
Would I like to see what few limitations are currently in Linux lifted? Sure. Do they really matter to most people? No. Do I believe that the limitations in Linux will be lifted before the limitations in NT are fixed? Yes.
The tone of the news.com article is unnecessarily negative. An article which concludes that Linux nearly matches costly solutions for a fraction of the price could easily have been written from the same (few) data points that were presented in the article.
Linus/DH Brown/Microsoft debate (Score:1)
Actually, I believe you have to pay to be a member of TPC or SPEC to get the benchmarks. In order for people to take your published results seriously you certainly have to pay a big-6 type accounting/auditing firm to validate them. And in order for them to be taken seriously you also need to have the money to buy high end hardware, or be high enough profile to get a vendor to loan you the hardware to do the testing.
And it isn't Linux enthusiasts that are complaining about lack of benchmarks, it was DH Brown if cnet.com can be believed. You say that it takes a "LOT" of money for a small company to do the benchmarks, and you call us whiners because we as individuals can't come up with that kind of money? DH Brown is in a lot better position than we are to pay for numbers.
Since we haven't seen how DH Brown's report is worded other than what we've seen reported on in other sources (because I for one can't shell out $1000 for such a report), it may be unfair to be critical of DH Brown, but you can certainly point at news.com for purposely biased coverage of said report. If DH Brown has the integrity you seem to believe they have, then they should be a little miffed if news.com is misrepresenting their findings if it wasn't their intent to ding Linux.
uptimes (Score:1)
With that many boxes, there are probably plenty that aren't configured very well. Whereas there are far fewer BSD boxes, and there's a good chance that those sysadmins know how to set up their software better.
Well, either way, Open Source is beating the pants off everyone else.
P.S. I like your nickname. DragonBallZ reference right?
uptimes (Score:1)
Linux, though, has the record for longest uptime ever (730 days, 14:16 minutes and counting)
So much for commercial Unices being better (though no one has a BSDI box).
this is not fud (Score:1)
This would render their own OSes needless. We need companies like VA Research, Red Hat or SuSE to give out _hard_ numbers and produce some of the missing tools.
Who would like to put SAP's R/3 on a system which is not respected as high end? (Yes, it happened -- NT -- I know.)
By the way, are there any usable (non-ZDNet) benchmarks for Linux+Apache (mod_php+mod_perl) versus NT+IIS+ASP+VB?
I see your point (Score:1)
This shows one weakness of linux in this field. For instance go to spec.org and do a search [spec.org] for operating systems.
Results:
linux: 0
windows: 257
out of 2314 records.
not hard numbers (Score:1)
That's another point: _will_ it crash? Are there hard facts?
It should be simple: combine SQL and IIS for dynamic web pages and fire up a hell of a lot of clients till it smokes. Why haven't I seen someone doing these tests?
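For what it's worth, the brute-force version of that test is easy to sketch. Something like the toy load generator below (the host, port, and request counts are made up for illustration; real numbers would need a proper harness like SPECweb rather than this):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define CLIENTS  50
#define REQUESTS 200

static int hit(const char *ip, int port, const char *path)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) { close(fd); return -1; }

    char req[256], buf[4096];
    int len = snprintf(req, sizeof req, "GET %s HTTP/1.0\r\n\r\n", path);
    write(fd, req, len);
    while (read(fd, buf, sizeof buf) > 0)   /* drain the response */
        ;
    close(fd);
    return 0;
}

int main(void)
{
    for (int c = 0; c < CLIENTS; c++) {
        if (fork() == 0) {                  /* child: issue its share of requests */
            int ok = 0;
            for (int r = 0; r < REQUESTS; r++)
                if (hit("127.0.0.1", 80, "/") == 0)
                    ok++;
            printf("client %d: %d/%d responses\n", c, ok, REQUESTS);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                  /* parent: reap all children */
        ;
    return 0;
}

Point it at each server, crank CLIENTS up, and see which box falls over first.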
Is it true that under heavy loads the NT-"console" will become slow as hell and nearly freeze? Someone pointed out that this has something to do with the kernel scheduling...
Back to the numbers, I actually have seen some, but they wouldn't be considered as hard facts from everyone due to their origin: http://perl.apache.org/bench.txt [apache.org].
2 GB of memory? (Score:1)
Now, Windows NT "Enterprise Edition" can address up to 3 GB of memory per process, with 1GB reserved for the OS, on Intel platforms. On Alpha platforms you can use what Microsoft calls Very Large Memory, or VLM, to get pointers into memory above the 4GB mark. Such memory is not paged memory, and you must actually have more than 4GB of PHYSICAL RAM to use VLM.
Hope this helps.
i think he wasn't asleep (Score:1)
if someone *has* to know exactly what the microserv said on Bill Gates' old partner's web site, a little work can find it.
ergo, he's insightful not sleepy
Linux vs. NT5... er Windows 2000 (Score:1)
Um, from my understanding of NT5/Windows 2000, they're getting rid of the PDC and BDC -- I'm not 100% sure on that, but that's what I seem to be hearing.
Anybody got any comments?
Not to defend D.H. Brown, but -- (Score:1)
BTW, how many CPUs are in Dave Miller's UltraPenguin box [linux.org.uk]?
If they want to compare Linux's scalability with Solaris, then that's fine with me, because I don't see Linux running on a 10000 yet, but to say that NT has better scalability is laughable at least.
uptimes (Score:1)
What me worry? (Score:1)
Venting some steam about this Brown thing (Score:1)
2 GB of memory? (Score:1)
Anyone running the kind of application that requires that kind of memory is going to be running non-intel hardware anyway, like you are.
2 GB of memory? (Score:1)
You've got to admit tho that even for active servers on Intel hardware, 2+GB is rare.
2 GB of memory? (Score:1)
This is kind of like the old Beowulf arguments. I just don't see a lot of people doing that, so why use it as a comparison of operating systems?
FUD or criticism? (Score:1)
There is a long road ahead for Linux. To walk this road it needs all the [constructive] criticism there is. Would you rather see only rosy "Linux is just a wonder!" articles everywhere? Besides, many of the things mentioned in the News.com article are true.
Flaming up whenever someone says Linux is not perfect does not do much good. Looking at ways to change such opinions is a much better approach.
It would be nice, however, to look at the full report.
Intel-based Linux will stay "in the realm of toys" (Score:1)
Anti-FUD... (Score:1)
The author (D.H. whoever) addressed commercially ready packages, not your self-made solution (fake, mon, some shell scripts). From that perspective, Linux lacks support for such features.
Also, the author did not compare Linux to NT, but to other operating systems, such as Solaris. I think it was Linus himself who said not long ago that Linux' SMP capabilities will be comparable to Solaris in 3 years or so. It currently scales to double (maybe quad) boxes for IO bound processes. But that is all.
You lack fundamental knowledge in respect to the Alpha platform. If you would follow linux-kernel, you would know that only 1GB is currently supported on that specific platform. There are some people working on this problem.
No, they don't say "Linux sucks", even if you would like to read that out of context. It is a comparison between high end operating systems, you should be proud that our Linux has entered that realm.
No, it's a bunch of lies (Score:1)
DH Brown doesn't like NT much either (Score:1)
Read their report before attacking D.H.Brown (Score:1)
1. Linux is great for: small file, print, and web servers; appliance-class systems; ISPs; compute nodes in Beowulf clusters.
2. Kernel 2.0.36 has poor SMP abilities.
3. Linux can only access *files* (not memory) up to 2 gigabytes in size (Tru64 UNIX can access files up to 14TB in size).
4. Kernel 2.2 SMP should handle 8 processors, but there is no field evidence showing that Linux programs can properly handle multiple processors.
5. There is no redundant high availability (HA) clustering for Linux (even NT offers HA clusters with Microsoft cluster service). Beowulf does not help here because it was not designed to be redundant.
I could go on, but better that you read the executive summary posted for free at http://www.dhbrown.com, or buy the full report for $995.
uptimes (Score:1)
2^36 == 8GB?? (Score:1)
I don't know about the 36-bit thing, but surely adding 4 bits should give 16x the address range, not 2x...
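Working it out (plain arithmetic, nothing OS-specific): 2^36 = 2^32 * 2^4 = 4 GB * 16 = 64 GB. So a 36-bit physical address space is 64 GB, and the 8 GB figure has to come from somewhere other than the raw address bits.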
Breace
Linux, Yup, It can handle a lot... (Score:1)
Not to defend D.H. Brown, but -- (Score:2)
Why does it have to be bad? (Score:2)
http://www.freebsd.org/~fsmp/SMP/benches.html
At this point in development of the SMP kernel we do poorly in many benchmarks. We have been concentrating on other issues such as stability and understanding of the low-level hardware issues. As we start to work on areas that improve performance, benchmarks are useful for gauging our progress.
Though this may be out of date, there are notes in the 3.0 release readme that indicate that SMP is not yet done in FreeBSD. Comparing the two doesn't seem fair to either.
-Peter
Lack of journaling filesystem an excellent point. (Score:2)
Ext3fs is supposed to be ext2fs with the option to use a journal, or to act as a traditional ext2fs.
Sounds cool to me.
-Peter
No, it's a bunch of lies (Score:2)
Actually, are the MIPS R4k and R10k addressed as 64-bit architectures? If so, then those also have through-the-roof amounts of addressable RAM.
Anyway, the point is that since linux is not shackled to one architecture the review is dead wrong on this point.
-Peter
process pooling vs thread pooling (Score:2)
On most OSes, Linux included, creating many processes is understandably a lot more heavyweight (and less efficient) than creating lightweight threads. This isn't really that noticeable under light or moderate loads, but when the server is under heavy load and processes are being spawned on a near per-request basis, you'll notice Apache's performance degrade significantly versus Netscape Enterprise Server, Zeus, or even Java Web Server (which typically uses user-level threading).
Here's a good paper by Doug Schmidt on web server threading models. It's about two years old, so Apache is probably a LOT better now, but it explains the issues and shows benchmarks clearly:
http://siesta.cs.wustl.edu/~schmidt/INFOCOM-97.
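To make the distinction concrete, here is a minimal pre-spawned worker pool sketch in C with POSIX threads. The queue, worker count, and handle_request() stub are invented for illustration; it's the shape of the idea, not Apache's or anyone else's actual code. The per-request cost is a mutex and a condition variable, not a fork() or even a pthread_create():

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define WORKERS 8
#define QUEUE   64

static int queue[QUEUE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

static void handle_request(int fd)          /* stand-in for real request handling */
{
    printf("worker %lu handling request %d\n", (unsigned long)pthread_self(), fd);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&nonempty, &lock);
        int fd = queue[head];
        head = (head + 1) % QUEUE;
        count--;
        pthread_mutex_unlock(&lock);
        handle_request(fd);                 /* no fork(), no per-request thread creation */
    }
    return NULL;
}

static void enqueue(int fd)
{
    pthread_mutex_lock(&lock);
    if (count < QUEUE) {
        queue[tail] = fd;
        tail = (tail + 1) % QUEUE;
        count++;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t tid[WORKERS];
    for (int i = 0; i < WORKERS; i++)       /* pay the thread-creation cost once, up front */
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int fd = 0; fd < 20; fd++)         /* pretend these are accepted sockets */
        enqueue(fd);
    sleep(1);                               /* crude: give workers a moment before exiting */
    return 0;
}

A process-per-request server pays the full fork()/exit() cost for every connection instead, which is exactly where the heavy-load gap shows up.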
i agree (Score:2)
Of course, in Linux vs. NT debates it certainly is disappointing to see them cast as near-equals, which obviously isn't true. (Though most debates don't have to be about NT vs. Linux.)
The thing is, why haven't there been any real studies about Mean Time Between Failures of Linux vs. NT? Sure, it's a difficult subject to tackle in a controlled fashion, but enquiring minds want to know
The evidence of NT's up/downtime is mostly anecdotal as well! [or it's just marketing spew]..
process pooling vs thread pooling (Score:2)
uptimes (Score:2)
for instance, some mainframe systems have had uptimes of 5+ years.
Wrong. Learn how to read. (Score:2)
not at all, for transaction intensive business (Score:2)
They were upgraded a year later to 3 gigs of memory and 1 gig, respectively. Oh, and the second machine was JUST a developer server.
I know of a major bank that has 4+ gigs of ram for their Internet & Telephone banking computers.
Such is life in big business. $50 solutions for $5 problems.
not hard numbers (Score:2)
My anecdotal experience tells me that overall the Linux solution would be better, but I know that IIS does thread management a lot better than Apache [thread pool vs. process-per-request] and hence typically serves up pages quicker under lighter loads. Under heavier loads, NT will crash.
Yes, it is (Score:2)
What kills me is the little proviso: "...as well as Windows NT".
That's where the FUD really spreads. Linux's strengths compared to NT are dismissed as "anecdotal" or "unproven", while NT's strengths are taken at face value.
The reliability thing is especially critical. In my view, this is one of the major things that elevates Linux over NT: its stability under heavy load. This is something that I've observed time and time again as an NT and Linux admin.
And what really sucks is that everyone is willing to do a quick search for studies on Linux's reliability (turning up nothing), but no one is willing to do the studies. So, lazy people like these ding Linux with "unproven stability", while also dissing Linux on not having side-of-the-box features like "high-availability clustering", and assume that NT is more stable because it has these "side-of-the-box" features, even with its proven instability.
The problem is that most admins prefer a single, stable box over an HA cluster where it makes sense. Sure, HA clusters are great for business-critical databases, but why should an HA cluster be needed for everything to get even basic reliability?
And the worst part? No one seems to be willing to do the studies. So Linux loses in these asinine assessments every time.
You know, I shouldn't really be pissed about this. We've come a long way already without the benefit of positive hype, and I don't doubt that Linux will prove itself in some enterprise setting and show all the naysayers. And even if it doesn't - even if it's the best-kept secret in the IT world - it'll keep going strong.
But I do get irritated at people who publish irresponsible studies like this when my bosses at work tell me that they won't trust an "unreliable" solution like Linux and force me to deploy NT instead. If they forced me to deploy Solaris, AIX, or Tru64, I'd be a bit happier. But when NT is ranked alongside these systems, I get pissed, because it isn't even in the same league.
2 GB? I thought it was 8GB (Score:2)
> it possible to do 36 bit memory addressing on the
> i686 processors. This would allow access to 8 GB
> of memory. Of course I could totaly be off my
> rocker.
Yes, but that involves gross hacks, thanks to yet-another-f*cked-up-design-from-youknowwho.
NT mm engineers screamed "yuck" loudly, but will add support for that because they will be paid for the job.
Linus (and the "official kernel team") won't mangle the Linux mm to support this. Third parties may, but anyone needing such amounts of memory should use clean 64-bit architectures anyway.
uptimes (Score:2)
FUD, FUD, FUD.
Even assuming the article is 100% correct, by not naming an alternative which is superior, you're being hypocritical, and admitting by omission that Linux is the superior solution.
--
Lack of journaling filesystem an excellent point. (Score:2)
Wouldn't it be nice to never fsck again? Or, at least, to be virtually guaranteed that, barring extreme funkiness, data will never be needlessly corrupted.
In a journaling filesystem, changes to files are not activated until they have been cleanly completed. Thus, if half the change is still in the cache when the power supply dies, no part of the change actually occurred. Many journaling filesystems also keep a preset number of revisions in the history of a file, making it very easy to back yourself out of a mistake.
There is a performance hit (7 to 14 percent depending on what you're doing, if I remember the specs on adding JFS to WarpServer 5). But journaling filesystems are a must for many data warehousing applications where you simply can't afford the possibility of corruption.
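The ordering rule is the whole trick, and it's easy to sketch in userland terms. A toy write-ahead sketch in C using plain POSIX I/O; real filesystems journal metadata blocks rather than application records, and the file names here are made up:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int journaled_update(const char *journal, const char *data,
                            const char *record, size_t len)
{
    /* 1. Append the intended change to the journal and force it to disk. */
    int jfd = open(journal, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (jfd < 0)
        return -1;
    if (write(jfd, record, len) != (ssize_t)len || fsync(jfd) != 0) {
        close(jfd);
        return -1;              /* journal entry incomplete: the change never "happened" */
    }
    close(jfd);

    /* 2. Only now touch the real data. Crash here, and replaying the
          journal at mount time simply redoes the update. */
    int dfd = open(data, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (dfd < 0)
        return -1;
    int ok = (write(dfd, record, len) == (ssize_t)len) && (fsync(dfd) == 0);
    close(dfd);
    return ok ? 0 : -1;
}

int main(void)
{
    const char rec[] = "balance += 100\n";
    return journaled_update("journal.log", "data.db", rec, sizeof rec - 1) ? 1 : 0;
}

If the box dies between step 1 and step 2, replaying the journal redoes the update; if it dies during step 1, the update simply never happened. Either way there's no laborious full-disk reconstruction at boot.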
All in all, this article doesn't look anti-Linux. It's not particularly useful, and doesn't go out of its way to encourage anybody to use Linux, but honestly, look at what it's actually saying.
There's no hard data regarding the performance scaling of SMP linux systems. Well, there isn't. Whaddaya want? publishable results are more expensive than you think. If they had done their own research in-house you would have jumped all over them for it, no matter what they said. It would have been a serious issue of credibility for them.
There's no hard data on long-term reliability of linux. There isn't. See above.
Then there's the issue of "high availability" with linux. This was an "ask
All the tools and parts are there. Somebody just has to expend the resources to build the things this report is missing.
DH Brown doesn't like NT much either (Score:2)
"Unix trounces NT" http://news.com/News/Item/0,4,29416,00.html?st.ne
Judging from the two articles, it seems DH Brown is a big fan of old school unix. So yeah, if your company can afford AIX/Solaris/etc, they seem to be suggesting you should pay for the reliability and scalability. I don't mind linux not matching up to the big boys yet...in fact, I'd rather see that at this point, this way Sun can ship Linux and help improve it while maintaining a high-end product that still rakes in the bucks. That way linux is seen as a great entry-level product going up against NT, and the cadillac is there when you start making money.
I'd rather see linux grow as a desktop OS, and hopefully the big companies will start to filter up some of the linux gifts that make it a better desktop OS (ie improvements in gnome/KDE, word processors and PIMs etc.) and maybe toss some money that way.
Why does it have to be bad? (Score:2)
And so far as the other criticisms... Enterprise computing is still uncharted territory for Linux. It's going to take time before people start accepting it as being as stable as the other Unixes... But at least it's now being considered in the same league.
SMP - I forgot where I read it - maybe on the FreeBSD site - but FreeBSD outperforms linux SMP by 17% - so obviously there's room for improvement, no?
The other limitations may only be a hardware issue (memory support, etc...) but they're still an issue. Just because Linux is hampered by the hardware it runs on, it doesn't mean that it's not an issue affecting the adoption of Linux in the enterprise workspace.
Overall, the article, in my eyes, more pointed the way towards where Linux needs to go in the future. Too bad it's on CNET where if it's not good, it's gotta be bad...
-----------------
But (Score:2)
UltraPenguin supports 2TB memory and 64 processors (Score:2)
I thought this was comparing Linux and NT on intel hardware for common applications. The memory argument starts to head into the area of Intel, Sun, SGI/Cray which I think was beyond the intent.
Not to defend D.H. Brown, but -- (Score:2)
The facts... (Score:3)
x86 Linux supports roughly 960MB of physical memory with a default kernel. It is possible to recompile the kernel to support about 2GB. The underlying tradeoff is between kernel-mapped physical memory and per-process user address space: their sum cannot exceed 4GB. Hence keeping 3GB of address space for user processes limits you to about 1GB of physical RAM, so you can't have both a big user address space and lots of physical memory at the same time.
My understanding is that Linux on 64 bit Alpha encounters difficulties around 2GB of physical memory due to PCI limitations. This doesn't sound that fundamental an issue, but it isn't a simple kernel recompile to fix it. Hopefully this will be ironed out soon.
Brian Young
bayoung@acm.org
2 GB of memory? LIES! (Score:3)
WinNT cannot give 4GB to an application; that is another lie. You need a special configuration of NT Server just to have as much memory as what Linux makes available.
Secondly, Linux is a 64 bit operating system on 64 bit machines. It's a pure lie to say that some commercial UNIX can have 128 gigabytes of memory and compare that to Linux on Intel. I mean, for crying out loud, doh!
Pure BS!!! (Score:3)
Missing from Linux are high-availability features that would let one Linux server step in and take over if another failed;
I'm not an expert on the subject but I've seen the high-availability howto at www.linux.org/help, so I know it's possible. Can somebody else comment on the subject?
full-fledged support for computers with multiple processors;
uhhm, did they even test this??? Last I checked Linux beats the crap out of NT on SMP...
and a "journaling" file system that is necessary to quickly reboot a crashed machine without having to laboriously reconstruct the computer's system files.
OK, that's true. Linux doesn't have a journaled file system. On the plus side, it doesn't crash very often... NT does have a journaled file system, but for some reason it took longer for NT to test the file system after a crash than for Linux to run fsck after the maximal mount count.
Currently Linux can't use more than 2 gigabytes of memory, and in some cases only 1 GB. Windows NT, on the other hand, can address 4 GB of memory
This is either a deliberate lie or a lack of knowledge (or both). First, notice that he doesn't specify the hardware; it is assumed that there is only x86 in the world. Also notice the difference between "use" and "address". Linux *can* address 4 gigs of address space on x86, just like NT. The address space is split between kernel and user memory; by default Linux gives 1 gig to the kernel and 3 to user processes, and you can change the split by recompiling the kernel. NT works in much the same way, with one difference: you can't recompile the kernel. Its default split is 2-2, and in order to get a 1-3 split (1 for kernel, 3 for user) you need to buy "Enterprise Edition".
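One crude way to see the user-side half of this on either OS is to just keep reserving address space until the allocator gives up. A throwaway sketch: it deliberately never touches the memory, so it's probing address space rather than RAM, and the exact stopping point also depends on overcommit and pagefile settings:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 64UL * 1024 * 1024;   /* 64 MB per reservation */
    size_t total = 0;

    while (malloc(chunk) != NULL)              /* leak on purpose; the process exits anyway */
        total += chunk;

    printf("address space reserved before malloc failed: %lu MB\n",
           (unsigned long)(total / (1024 * 1024)));
    return 0;
}

Whatever it prints, it can't get past the user-space ceiling, and that ceiling is a kernel configuration question, not a "Linux can only use 2GB" question.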
Now, Linux also runs on 64 bit platforms, such as Alpha, SPARC and PPC, and it can take full advantage of 64 bit memory architecture. On these platforms Linux can address 2^64 bytes of memory -- I don't even know how much that is...
NT also runs on Alpha. But (surprise, surprise!) its memory model is 32-bit. That means that even on a 64-bit platform NT cannot address more than 4 gigs of RAM!
This whole article is pure BS. Notice that they pretty much just say "Linux sucks" without thorough comparison of any kind. Does anybody know where to send email for rebuttal?
2 GB of memory? (Score:3)
Data point: I'm aware of a couple of NT boxes, and one Solaris x86 box, with 2GB. So it's more like 99.5%.
While that may be top end now, look forward a few years, and it will be much more common to see this amount of memory on x86.
I'm not really aware of the issues involved in this, but I'm sure that if it turns out to be a problem with Linux, they can slipstream a fix into 2.3.666 or whatever. Commercial operating systems would probably do it at a major version upgrade, which NT won't see for a while after 2000 finally ships.
--
Linux vs. NT5... er Windows 2000 (Score:3)
It's true that Win2000 junks the Domain security model, but that has nothing to do with high availability or journaling.
(A WinNT Domain is a common list of user/groups shared by a number of computers for login and ACL purposes.)
--
It's a shame... (Score:3)
It's pretty bad that the report (news.com's article, that is) is so uneven, even though it highlights both strengths and weaknesses. It doesn't explain its own reporting, much less Brown's.
Someone mentioned the pdf; is it available?
You have to pay for the real report... and there's a form I just filled out for an executive summary...
I can accept their claim that other Unices beat it for 'enterprise computing', but not that 'Windows NT holds an advantage'. Price/performance, stability, and interoperability, from hearsay all over the web, seem to be Linux's strengths against NT deployments.
Linux definitely seems to lack robust SMP, or as they say, 'non-trivial SMP scalability', except I'm not so sure that NT qualifies either. Isn't NT limited to 2- or 4-processor Intel solutions, which are themselves not quite so hot for enterprise-level computing, compared to bigger Sun or Alpha solutions? Maybe someone can correct me and tell me about a distribution of NT that runs on 32-processor Intel or Alpha machines at a reasonable cost and with reasonable performance and uptime?
As for journaling, high availability clustering and such, I guess that much is true or under development... But I still don't believe that they think NT satisfies their requirements for an enterprise level computing solution!
Anyone care to correct me?
AS
this is not fud (Score:4)
The fact of the matter is that Linux's SMP support isn't on par with Solaris' (4 CPU's vs. 64), Linux's high-availability clusters aren't on par with *any* commercial UNIX (yet), and Linux's filesystems aren't journaled. (yet)
Some day (Kernel 2.4/3.0) all these features will probably be there, but let's not start touting vapourware over other solutions. Open source can only combat FUD if the code IS THERE. Right now, it isn't.
"Use the right tool for the right job" - Linux [on Intel especially] isn't it for sites that need extreme scalability & high availability. (A Sun Ultra 10k, AS/400 parallel cluster or S/390 mainframe is better suited to those environments.)
Ditto for sites that need to run a transaction processing monitor (like BEA Tuxedo) or a high end application server (like Apple's WebObjects). Though, this is changing... I think BEA is thinking of a Linux port... and WebObjects on Mac OS X Server is pretty sweet.
[ though not 100% open source, but nothing open source comes even close to Tuxedo or WebObjects in terms of performance, elegance, reusability & developer tools. Perhaps the GNUstep project will adopt the WebObjects framework as another pet project.. ]
SMP "analysis" (Score:4)
http://www.dhbrown.com/dhbrown/downldbl/linux.p
This contains FUD at a higher level than that found on ZDNet.
Pay attention to how SMP and Linux are "covered" on pages 7-9.
begin quote:
By boosting the number of locks to somewhere between 10 and 100, the Linux 2.2.5 kernel used by OpenLinux 2.2 should improve its SMP scalability somewhat. But while Linux 2.2 systems can boot on an SMP system with up to eight processors, useful SMP deployment at current levels of granularity has not yet been proven. Little industry-standard or even proprietary benchmark evidence has emerged that demonstrates the performance improvements of database or Web server applications running on SMP systems under any Linux distribution. Although Linux has been tested on a variety of SMP systems, booting on eight-processor systems is far different from demonstrating improved performance on mixed throughput workloads or multi-threaded database applications.
:end quote
Rather than doing RESEARCH and STUDY, they merely report the # of CPUs used in previously published NT and Commercial Unix benchmarks. (They do not print the actual benchmark results here). The number of CPUs used is a virtually useless comparative benchmark. Since they selected two benchmarks where there are no previously published Linux results, they report nothing for Linux. This is used to portray Linux as hopelessly inferior, without actually having to do any work. Check out how they put Linux at 0 CPUs on the graphs. I thought only Microsoft would do something so obviously corrupt and shameless.
Method: Claim Linux is inferior. Do no benchmarking yourself, but make the lack of data for Linux sound ominously bad. Put in some fancy graphs of useless values selected only for their ability to make Linux look worthless at first glance.
It is amazing people will pay DHBrown for a report of this quality.
Article link (Score:5)