Linux Software

maddog on Linux vs. NT Benchmarking

BogoMips sent us an interesting tidbit currently running in Performance Computing. Jon "maddog" Hall explains some of the benchmarking issues associated with the D.H. Brown reports, as well as the ubiquitous Mindcraft tests. Very well written article, IMHO.
  • by Anonymous Coward
    The problem with this line of thinking is that not everyone is a kick-ass coder, and isn't up to the challenge of this kind of optimization. It's like saying, "The General Motors XF-3000 goes from zero to 60 in 1.3 seconds and gets 80 miles to the gallon, but you have to optimize the frame and engine for the size and weight of the driver, and for the road conditions."

    Luckily, Windows NT Server kicks ass right out of the box. Neat, huh?
  • by Anonymous Coward
    I would look on eBay. You can often find old PPC RS/6000s for $500 or less. Get one, get a tape, get a CD, and get an old terminal and you will be in business. It will probably come with the OS. Back that up to tape right away -- and save that tape! Otherwise it will cost you bucks. Patch up from IBM's web site. Enjoy.

    If you can hork a copy of AIX from someplace, even better. Just in case.

    AIX is odd (it is very IBM), but it is very easy to work with in obscenely large and loaded environments.

    That is what I actually did, BTW, although I haunted Rice and University of Houston auctions for the boxes, as eBay wasn't around a few years back!
  • by Anonymous Coward
    Last month, the German magazine c't decided to check the results of the last Mindcraft benchmark. However, they decided to test performance on many hardware configurations.

    The results were these:

    1 CPU + 1 NIC -> Linux wins.
    Several CPUs + 1 NIC -> Linux wins.
    1 CPU + several NICs -> Linux wins.
    Several CPUs + several NICs -> NT wins.

    99% of servers use only one NIC (whether with one or many CPUs).

    Conclusion: Linux is faster than NT in 99% of hardware configurations.

    Meanwhile, Linus has already said that he is addressing the bottleneck caused by the "global lock" in the networking subsystem, which caused these problems in the Mindcraft benchmark.
    The solution (Linux 2.4) is expected by the end of the year.

    Yes, we are talking about little free Linux against the expensive proprietary NT giant...
  • by Erich ( 151 ) on Wednesday July 14, 1999 @07:26AM (#1802740) Homepage Journal
    The problem with Linux and large enterprise servers is mostly in its style of development. Most people don't have access to an Ultra Enterprise 10000 ``starfire'' and several disk arrays to just sit around and play with -- they are stuck with (relatively) cheap PCs and Suns and Alphas. Only large companies can afford to shell out millions to buy the equipment, and millions more to pay programmers to develop for it. That's why Sun Enterprise equipment is almost mission-critical-environment ready -- and linux isn't. You can't yank a processor board out of a machine running Linux and still have the thing hum along. Yet.
  • Interesting, and he does make some good points about the needs of an enterprise system. I wonder specifically what else is needed to make an OS "enterprise-ready"; a journaled file system can't be the end-all.

    Of course, most of the "enterprise" settings I've worked in have featured NT with "admins" who couldn't find their ass with both hands. It always amazes me that companies are willing to pay big bucks to people who, whenever a problem comes up, just go running off to a pay-per-incident help 800-number anyway...

    ----

  • Hell, just use Mandrake. I'm too lazy to recompile on my own, and it *does* run faster than other distros on the hardware I'm using (optimization rocks).

    ----

  • Those people who want to rant and rave when anybody says anything slightly bad about Linux should read this article.


    ...phil
  • I have enough disk space to hold source for most of the Linux stuff I have, and I'm quite prepared to compile it from scratch. Debian does not *need* to do what you are suggesting. If they fold source support properly into apt, as they've been talking about for long enough, they will be able to support everybody: those like us can simply do "make world" and rebuild the entire system, or even, with some work, have all systems easily supported -- without the hassle of fiddling with code.
  • The issue that the article was talking about was whether Linux was suitable for very high-end servers. High-end servers aren't the domain of NT, but rather the commercial Unices put out by the likes of Sun, IBM, HP, and so on.

    This is pretty much an issue of Unix flavor vs. Unix flavor. Linux just happens to still be a fairly low-end Unix flavor--at least for now.

  • Posted by MaverickPl:

    What is it??
  • ? I've tested this, and gcc is pretty good, and egcs is great on x86. I remember that when comparing it to the old Watcom C compiler (which was the best back then) integer performance was always equal or up to 30% better, and only floating point performance was lacking by a percent or two. I wouldn't mind having a faster compiler, but what we have is pretty good, and is supported on many platforms. For really speed-critical optimizations, assembler still works.
  • I agree, I love it when I see benchmarking like this. Tom's Hardware does a good job of this sort of thing with processors, but that's most of what they benchmark.

    I benchmarked emulator overhead for a while, but gave up when I realized that the actual processor overhead by running DOSEmu is a percent or two, and just completely installed Linux instead.

    (recompiling benchmarks is also a good way to test compilers... :)
  • Recent versions of pgcc and binutils have support for K6-2/3-specific things, 3DNow! for example. If you're whining about the K6 (not the K6-2/3), it's pretty much a P5, so you can still see a performance gain over i386 binaries if you compile with pgcc.
  • Just installed pgcc, moved gcc and cc to gcc.old and cc.old, and linked pgcc to gcc and cc. Then I went into /etc/make.conf and put all the best flags for my system in there.

    Now, to install almost anything and have it completely optimized for my system, I go into /usr/ports/, pick the application, and do a "make install distclean", and it's installed seamlessly and optimized for my K6 :-)
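
    For the curious, the setup described above boils down to roughly the sketch below. The paths assume pgcc landed in /usr/local/bin, and the make.conf flags are only examples; tune them for your own CPU and the port name is just an illustration.

        # make pgcc the default compiler (back up the originals first)
        cd /usr/bin
        mv gcc gcc.old && mv cc cc.old
        ln -s /usr/local/bin/pgcc gcc
        ln -s /usr/local/bin/pgcc cc

        # add optimization flags to /etc/make.conf, e.g.:
        #     CFLAGS= -O2 -pipe

        # from then on, any port builds with those flags automatically
        cd /usr/ports/www/apache13
        make install distclean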

  • I think if you're going to recompile your kernel with optimization, that should be done long before you're throwing the apps on it and it's a widely used server.

    I was referring to _every_ level of software; I suppose Apache would be the most obvious example, with all the httpd comparisons going on lately. It's not just the kernel you can optimize.

    And you don't need to make the changes instantly: you can do "make", and then later, when you're ready (after a backup, and work has stopped for the night), do the "make install."

    If you're into the Linux "kernel of the week" syndrome, well, optimizing or not doesn't make any difference in how often you reboot, only the speed after the reboot.
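
    To make the build-now, install-later split concrete, for something like Apache it might look roughly like this (the version number and install prefix are placeholders):

        # build ahead of time; the running server is untouched
        cd apache_1.3.x
        ./configure --prefix=/usr/local/apache
        make

        # later, after the backup and after hours:
        make install
        /usr/local/apache/bin/apachectl restart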

  • "I also read someplace (sorry, don't remember where) that Intel's own compiler was the best x86 compiler around (for optimizing code)."

    If that is true, Intel could make a great marketing move. They could recover a lot of the "anti-WinTel" fallout, where people are moving to Linux/AMD, by open-sourcing their compiler and helping merge it into egcs. It's like $700 now, though? But if you could make absolutely outrageously fast binaries for Intel CPUs out of normal open source apps, they would gain a lot of support in the Open Source community.

  • by BadlandZ ( 1725 ) on Wednesday July 14, 1999 @07:41AM (#1802753) Journal
    If you use FreeBSD, you simply uncomment one configuration line, add 2-3 flags, just one time, at the very beginning.

    After that, to install, all you need to know is three words, "make install distclean", and even that third word is optional.

    It does bring up the point that it should and could be easier. What is really needed is an expansion of the basic "uname" call to include much more system information (specifically, the CPU, amount of memory, etc.). Then it would be possible for open source compilers to make it one generic flag (like SGI's compiler does with -Ofast) that grabs the system information for you and figures out the best flags on its own. That is a realistic possibility, and if something like POSIX, the LSB, or some other standardization body would implement this type of system call to get hardware information, it could potentially benefit the UNIX community in a way that Microsoft can't keep up with.
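
    Some of that information can already be scraped today; the snippet below is a rough sketch (the sysctl names are FreeBSD's, and the Linux equivalents read from /proc instead).

        uname -m                  # machine architecture
        sysctl -n hw.model        # CPU model string (FreeBSD)
        sysctl -n hw.physmem      # physical memory in bytes (FreeBSD)
        # Linux equivalents:
        #     grep 'model name' /proc/cpuinfo
        #     grep MemTotal /proc/meminfo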

  • by BadlandZ ( 1725 ) on Wednesday July 14, 1999 @07:09AM (#1802754) Journal
    I am still bothered by the fact that all these benchmarking tests and comments don't clearly address one of the basic advantages of open source. You can optimize the binaries for your specific system and get rid of the 386/486 limitations... Sure, you lose binary compatibility, but who cares, when they can get their own source and compile it too, and your system sees a 30% boost in speed?

    System tuning is IMPORTANT. Important enough that it can make one OS faster than another. I think we should be pointing out that open source is much more tunable, not only in the ability to modify the code, but also in the ability to optimize it for specific hardware. (For a quick note on some stuff I tried to see the difference, click here [current.nu].)

    Why can't someone do some intelligent testing on this, and give credit to the people who REALLY make GNU/GPL and all of open source a success: the folks who write the COMPILERS! Linus wrote some nice stuff, as did many others, but without the right compiler, it's worthless.
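
    As a purely illustrative example, rebuilding a package for the CPU actually in the box instead of generic i386 might look like this; the package name is a placeholder, and the exact flag spellings vary between gcc, egcs, and pgcc releases, so check your compiler's documentation.

        # unpack a source package and build it with CPU-specific flags
        tar xzf package.tar.gz && cd package
        CFLAGS="-O2 -march=pentium" ./configure    # or -mpentium with older pgcc
        make
        make install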

  • by jnik ( 1733 ) on Wednesday July 14, 1999 @08:53AM (#1802755)
    Linux right now doesn't scale horribly well to huge enterprise servers. But it still manages to replace them fairly effectively, by allowing scalability on the level of the machine instead of components.

    If I can do the work of a big enterprise server on four P75's, that's still a savings. Linux is proving that for some tasks, a cluster of small machines works just as well as one huge machine. And there are reliability projects in the works as well, based on the same principle: have an extra cheap machine which grabs the load if your main server goes down.

  • The only problem is that there are no decent compilers for Linux. Any Microsoft, Borland, or Foo Inc. compilers designed for the 386 will easily beat pgcc on any day...I don't even want to think about vanilla gcc. Plus, if I understand correctly, other Linux platforms like Alpha are even worse off than i386 is re: compilers. Forget Office2000 or some fashion of DirectX; I would like to see Microsoft port a C compiler to i386 Linux.
  • These are examples from a large bank:

    IBM parallel sysplex running OS/390
    IBM SP frame running AIX
    Sun Starfire running Solaris
    HP V2500 running HP-UX

    NT doesn't even come to the knees of one of these systems.
  • That's why Sun Enterprise equipment is almost mission-critical-environment ready -- and linux isn't.

    You speak of the two as if they're mutually exclusive. I've had Linux running on a Sun Ultra Enterprise 4000 for some time now, and Dave Miller has had it running on a 14-CPU Ultra Enterprise system (provided by Sun, specifically for Linux development, AFAIK).

  • BadlandZ did not say "everybody has to optimize."

    He said that with open-source software we can optimize for the specific hardware we have.

    If someone is satisfied with a working system, that's OK. If someone else is satisfied only with an optimised system, that's OK too, as long as he can do such optimisation.

    It's about possibilities, not about what you have to do.

    So, to be more precise: with proprietary software without sources, you get only those binary forms supplied by the manufacturer (thus if code optimised for the K6 is not available, then you as an owner of a K6 can't do anything about it). But if you have the source code and a compiler capable of optimising for the K6, then you are a happy owner of a K6.
    (Note: the K6 is just an example; Alpha, PPC, PIII, ... raise the same issues.)

  • Answers like the ones you mentioned sound a lot like extortion.
  • Just to make sure people know it's available, there's always irc.linux.com (irc.openprojects.net), which will have good (if not great) Linux help at #linuxhelp and #linpeople.
  • pssst buddy... he quit Compaq already :)
    jon@valinux.com iirc
  • Agreed, however after reading it twice they still wouldn't understand.

    Yes, my OS doesn't have the fastest GUI, and it wasn't coded by 4000 paid employees, and hell, it might not be the best... but you know what? I didn't pay a damn thing for it. So who cares.

    If I drove around a station wagon with a cracked windshield, that I got for free... I wouldn't bitch one bit. I would keep driving to work with the biggest damn smile on my face. Just laughing on the inside to all those fools who had to pay for their cars.

  • What does this have to do with anything? We already know Linux is more reliable than NT in general.

    So, why would you spend more money to get something that is less reliable?
  • Jon did mention what is needed: the ability for the system to stay up and available 24x7 REGARDLESS of disk failures, CPU deaths, and motherboards frying.

    Linux is good at the low-end server and desktop role, as Jon and D.H. Brown state. However, I don't think your company is willing to run its general ledger, web server, or other critical systems on one of the sysadmin's PCs. If they are, you need better line-of-business people to whack upper management around a bit.

    Linux is good. It's just not good ENOUGH yet.

  • If you use FreeBSD....make install distclean...

    And then you reboot, and your 24x7 server is no longer up 24x7, causing the developers who were on a 3-day rollout schedule for their new program to lose a whole load of database changes cause you rebooted the machine before this evening's backup and all the table changes that were made but not documented are now lost.

    I can make tuning changes on the fly on the SUN Enterprise servers we run here. Linux should be able to, since it's already a modular kernel. But it doesn't, so we have downtime.

    That's really the final result we need to aim for: 24x7 availability, even if the whole freaking computer dies.

  • And then you reboot, and your 24x7 server is no longer up 24x7, causing the developers who were on a 3-day rollout schedule for their new program to lose a whole load of database changes cause you rebooted the machine before this evening's backup and all the table changes that were made but not documented are now lost.

    The assumption is that someone who is trusted with root access will be clueful enough to know not to randomly reboot the machine in the middle of the day. The admin would know when the backup is (or could force it early). Generally, a code server does not have to be 24x7, and the admin would reboot it on a Sunday at 4 a.m. when no one is using it, and would warn everyone well in advance in case anyone was. If the developers were on a 3-day rollout schedule and did need 24-hour access, the admin would know and not reboot the machine. He would probably also wait to reboot the machine until he had to for some other reason, like a hardware upgrade.

    I can make tuning changes on the fly on the SUN Enterprise servers we run here. Linux should be able to, since it's already a modular kernel. But it doesn't, so we have downtime.

    If the part you want to tune is in a module or other component then you can. Obviously a SUN Enterprise server is better at this than a 486 running linux -- that was basically the point of the article.

    That's really the final result we need to aim for: 24x7 availability, even if the whole freaking computer dies.

    That's actually much easier to do than allowing hot-loading of kernels or processor board swaps. That's (part of) what clusters are for.
  • RISCy Business sez: "Linux has no place ... as your 1,000,000 hit/hour webserver."

    And just how many webservers get a million hits per hour?

  • Phil's post is exactly true - the hormonally addled 14 year olds who got posted on Mindcraft's "Linux Rage" page and made the entire community look juvenile in front of the world should read Maddog's article. At least twice :-)
  • The one thing that redeems this post as not being a flame is the following line:

    /* That's the way it is. Don't like that? Go work towards changing it. Change is good. But till things change, what I say will hold true. */

    So this isn't FUD because it's open to the idea that Linux will improve over time.

    The fact is that Linux is excellent for lower-end machines. Linux on Intel, against Solaris on a SPARC of comparable speed, wins hands down on lower-end stations in ease of administration, and (with the porting of Oracle, etc.) it is gaining rapidly in number of applications.

    But until Linux develops a journaling file system (a real one: think AIX, not NT) or more scalability in SMP, clustering and HA (think VMS, not Janus), it will not take over the datacenter, which is where the real money and durability is.

    That having been said, I disagree with the poster as to the "inevitability" of it happening -- I think Linux, *BSD, etc. will improve, because you can't stop the desire to make free software better. Plus, Free Software never will go out of business, by definition.

    Give Linux 2-5 years to develop good HA and clustering. Give it 5-7 years more to get a reputation to compete with AIX, UNICOS, and OpenVMS in the datacenter.

  • by Locutus ( 9039 ) on Wednesday July 14, 1999 @08:37AM (#1802771)
    There was a reason to cry 'unfair' at the first benchmarks from Mindcraft. The tests were plain wrong and very little could be gleaned from them. The third test (the second test was private and the results weren't published) provided information which allows corrections to be made.

    I think you'll someday find that Microsoft is feeding the press with data to show Linux's weaknesses. That is how they 'compete'. Unfortunately for them, they aren't 'competing' with IBM, it's Linux. Weaknesses will be patched and tested by the community too quickly for Microsoft. They won't be able to wait for a liquid-cooled CPU to become the norm so NT v5/2000 can beat Linux in future tests.

    Fair benchmarking and reporting is not the norm in this industry. Recently IBM's Warp Server for e-business was hammered in InfoWorld (the link to the article is now broken...). It turns out the guy who wrote the column is the Senior Contributing Editor and Columnist of Windows NT Systems magazine. See "Hatchet Job" [os2hq.com].

    Complaining about/exposing an injustice is how we open the eyes of the unknowing. Example: at my stock investment club meeting last night, one member insisted that NT was faster than Linux in all cases and that Linux had a weak GUI. I booted OpenLinux v2.2 on my P120 laptop and he started questioning his beliefs. The rest of the group was surprised at the polish they saw. All but one member are professionals in the technology industry, though mostly in embedded/realtime systems.

  • ...(optimization rocks).

    Yeah, when it doesn't hang your system. Mandrake is a good example of this.

    Mandrake 6.0 has a hdparm line in /etc/rc.d/rc.sysinit, which optimizes HD performance on most systems. However, on some systems, it causes the HD to hang.

    By booting without starting up init (linux -b at the LILO prompt), I was able to find and comment out the hdparm line. However, since the system hung during the first boot after install, the RPM database got trashed. I ended up having to completely reinstall. That was a pain.

    Mandrake should not have put an unnecessary optimization in as a default. Looking at the discussion about this on their mailing list, they didn't seem to care, though.
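
    For anyone who hits the same thing, the recovery amounts to roughly the steps below; the exact hdparm invocation differs between installs, so the commented-out line is only an example.

        # at the LILO prompt, boot without running the init scripts:
        #     linux -b
        # then, from the emergency shell:
        mount -o remount,rw /          # root is still mounted read-only at this point
        vi /etc/rc.d/rc.sysinit        # find the hdparm invocation and comment it out, e.g.:
                                       #     # hdparm -c 1 -u 1 /dev/hda
        reboot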

  • That is true, but NT has a big head start. No, 8-machine failover is not stable, but NT at least ALMOST has a real presence in the high-end market. Linux hasn't even truly begun to concentrate on the "big iron".

    If we are "going to get there first", we^H^H the Linux develop team have their work cut out for them.

    ----- if ($anyone_cares) {print "Just Another Perl Newbie"}
  • s/8-machine failover/the necessity of 8-machine failover just to run one server/
    ----- if ($anyone_cares) {print "Just Another Perl Newbie"}
  • This is kinda ridiculous. Why is it that corporations are expecting the free (Linux) to be as "enterprise server ready" as the expensive (NT)?

    Linux, as far as I know, wasn't designed with that in mind. It was originally Linus' hobby, for crying out loud. Also, it is a very young OS. Right now it is good at its intended use... squeezing good use out of lower-end, and sometimes antiquated, PCs.

    I mean, try to make a web/mail server for your corporation with Windows NT on a leftover 486/16MB. Can't do it. But I did that with Linux, and it runs fairly fast for its hardware handicap.

    But it has not yet been coded to do what NT is aimed at... taking over the high end server market. Linux developers (who are gentlemen that deserve our utmost respect) are now beginning to seriously address Linux's lack in this area. I imagine that since Linux is already king in low-end servers, it will now turn its attention towards the higher-end market. Thus Mindcraft and company have helped shape the development of Linux in a good way... instead of us madly shouting "unfair benchmarks!", we need to simply begin working towards what NT has already claimed... higher speed on the big iron. Hopefully we can do so without the bloat and instability that has plagued NT, though.

    ----- if ($anyone_cares) {print "Just Another Perl Newbie"}
  • If /. DID run on FreeBSD, that'd be a good reason to steer clear of it. /. is one of the slowest sites on the Net, period. The performance is absolutely laughable.

    I think that most of Slashdot's performance issues are related to lack of bandwidth. If Slashdot is maxing out its pipes, no amount of hardware or software thrown at it is going to fix that problem.

    Processing-wise, Slashdot does a lot of dynamic content generation and is written primarily in Perl, which is interpreted. If memory serves, Slashdot runs primarily on a single machine, and not a huge or expensive one, with a second machine that only serves up the graphics. Additionally, I believe that the database engine is running on the same machine as the web server. Serving up the graphics seems to be the slowest part of the whole system. If Slashdot used multiple graphics servers (preferably on separate pipes), it might help matters.

    I believe a significant boost in dynamic content generation performance could be had on Slashdot if it used a separate database server box connected to the web server on a dedicated 100Mbit Ethernet. Moving to this sort of architecture could also allow clustering of the front-end web servers. However, as I said before, if the bottleneck is bandwidth, that won't make much difference.

    All in all, I think Slashdot does pretty well considering the limited resources it has had to work with. Now that it is owned by Andover, perhaps they will be willing to put some investment into infrastructure.

  • LDP = Linux Documentation Project.
    You can find a link to it from kernelnotes.org
    There's a mind-bendingly huge amount of documentation, HOWTOs, etc. there. Happy hunting!
  • You must be doing something way wrong. I have seen speed jumps in RC5 for every chip I have moved up to, from a P133 to a PII 500. All have gone up for RC5.
  • Hell yeah. It was calm, reasoned, and better yet, had suggestions for the Way Ahead[tm].

    I think things are going to get very interesting very soon.

  • it [Linux] has not yet been coded to do what NT is aimed at... taking over the high end server market.

    we need to simply begin working towards what NT has already claimed... higher speed on the big iron. Hopefully we can do so without the bloat and instability that has plagued NT, though.

    How you can say those two statements together is beyond me. Do "enterprise" and "instability" go together? Face it. NT is not there either. We've got to make sure we're going to get there first.

  • Well, I am just thankful that not everyone (say, for example, Debian) uses this optimization for Pentium processors. I really liked the spirit reflected in the articles and numerous pieces that were done for the early issues of the Linux Gazette; they were quite informative and very useful. The whole point of Linux/open source, I think, is that it is the only model where people like me who have crappy computers can still get SOME support. This is something that Microsoft does not really know how to do. Why should I replace my hardware because someone updates their OS?
  • Good; at least allow me to use standard methods of getting precompiled packages without having to do things the hard way, and without my 486 suffering and dying on the vine. People are going to be using hard drives of >=22GB in the future (and I have never personally had anything >400MB on any of my machines). Basically, optimization should be left up to speed freaks and others who want to benchmark NT. What do I care if Microsoft takes several software engineers away from critical parts of the OS and makes them tune every last processor instruction for PIIs to make it work better? If NT worked on my 486, was cheap, and had all the same applications that I have now, then maybe. Basically I use it because there are more applications bundled with the package. If you really look at it in the basest terms, any Micro$oft operating system gives you just the OS. Linux/*BSD varieties give you that and a huge plethora of other things, so you can start work right away without even having to leave the desk. To me that is an even bigger plus than the price. For that level of convenience, Linux could retail (with the level of marketing that other companies put into their efforts) for TWICE or THREE times what NT sells for.
  • Tell me, where exactly would you go to get a computer with AIX if it's so great? Where can I get it in the intermountain West (California, Idaho, Montana, Utah, Arizona, etc.)? How much does it cost? If it's that good and doesn't cost me my firstborn, I might try it.
  • It could be a tactic designed to lull people into believing that they really are running Apache, and then just come out and say that they faked it and were actually serving everything from IIS in disguise.
  • Well, that just irks me a tad too much (as in Shakespeare's speech, "it moves me to stand"). You secretly want Linux to fail; nothing would please you more. Linux is the best thing that has happened to the world since sliced bread when it comes to choice. I have run into a number of people who are just plain evil with regard to how they interact with people. Discussions like these are the type of thing I have heard coming from people involved in sysadmining with almost infinite budgets and little reason to care what end users think or do. Freedom is not an option with them. The division between rich/poor is widened with such talk. However, Linux and the people associated with it will have the last laugh in the long run as it relates to who gets the best material. Feeble attempts to ruin my free ride in this world will fail. That's what it's all about: cheating the end user into accepting dictatorial rule.

    As far as real data goes, what does it prove? That is like saying that because communists (1980s era) won a number of the Olympic sports, communism as practiced in Russia and other Soviet bloc nations is better than democracy and freedom? I think not.

    People have a right to be angry about products that fail for random reasons. What is really quite interesting is: is NT really any better at all, for all these stupid tests it did against Linux? If NT is soooooo good, then why doesn't it just shine every time? Can the experience of several hundred thousand people be wrong when they say that NT BSODs a lot? That should stop and make people think: what is wrong with this picture? Linux has no marketing team to attack other products at all.

    If evidence is any claim, the people who are paranoid are the people near the top at Microsoft. With such things as the Halloween letters and this benchmark, it appears that *they* think Linux is conspiring against *them*. Why would anyone release information that is nearly totally inflammatory and would cause competitors to join hands and go against Microsoft en masse? It's a ploy. How would anything get leaked like this? Seems like people who would do that would be facing charges of industrial espionage and would be discharged, wouldn't they?

    No, people who use Linux are just irritated about arrogance and lack of tact about facts. At least I am not posting AC. Is it because you are "afraid" to face facts or just afraid to get hate mail? I, in fact, don't care if I get any for what I think.
  • John Haggerty
    534 N Oakley St.
    Salt Lake City, Utah 84116
  • Interestingly, why are you using NT for printing with VMware? Is it an office application like Word?
  • Just be sure to have more than adequate disk space, unless this is just some network-router-like thing. I feel that the Linux community has moved away from backwards compatibility in order to support all sorts of new and really cool stuff, and ignoring fixes for things that may not necessarily be all the rage is still a problem. Basically I see it like this: NT is worried about getting all these new kick-ass features out the door to impress people. Network downtime is expected, and most people just nod sagely when the network goes down and think that such an act of God cannot be stopped. They moved away from 486s as soon as Win95 appeared (they moved to Pentiums; remember, they don't call the platform Wintel for nothing).
    Linux then hits the scene and says it wants to FIX the problems that NT left behind while it was trying to impress people. I just wish all these $toonice features were reserved for people who really want them and not forced on people who don't want/need them.
    A really nice feature that should be allowed in the "real" kernel is e2compr support. I have a really crappy hard drive that is just big enough to hold what a normal sane person would need, with some small amount left to squeeze out (about 340MB); this is extremely limiting. I am faced with running a patched kernel that isn't part of the standard distro, because I have to recompile it myself. Sometimes I don't even have the space to build the kernel, so I start to get out of date: currently I use 2.2.7 with an e2compr patch, when the devel version is something like 2.3.10 (haven't checked in a while).
  • by RISCy Business ( 27981 ) on Wednesday July 14, 1999 @08:30AM (#1802789) Homepage
    You know, I liked this article. It echoed what I've been saying for ages.

    Linux isn't for mission critical large-scale. NT *DEFINITELY* isn't. If you want that, you'd better call your IBM RS/6000 VAR today, because sales are going to jump now.

    Linux is not the be all and end all of unix. Period. It never ever will be, as it will more than likely collapse upon itself before we see ext3.

    I use Linux. I've used Linux for about 4 or 5 years. I think Linux is great.

    But it's not an enterprise OS. Period. Flat out. Never. It's good for small to midrange stuff, sure. Hell, our primary DNS server is Linux, as is our webserver currently. (It *will* be moved to an RS/6000 H70 as traffic increases.) Our secondary DNS server will be Linux. But our network monitoring system will be an RS/6000 43P-140 (aka Model 140) workstation running AIX. Why? Because if hardware starts failing in Linux, I'm screwed. AIX will scream bloody murder before it gets anywhere *near* the point of no return.

    Linux has no place in the enterprise as an ERP server. It has no place as your 1,000,000 hit/hour webserver interfacing with SQL and doing dynamic pages. Period. Those of you who find this offensive, kindly contact a proctologist so that you may have your head removed from where it is. That's the way it is. Don't like that? Go work towards changing it. Change is good. But till things change, what I say will hold true.

    -RISCy Business | Rabid System Administrator and BOFH
  • I just read their page at http://www.microsoft.com/NTServer/web/news/msnw/Hotmail.asp [microsoft.com] and found that there was almost no information of substance! They didn't stint on the marketing information, though.
  • Please explain this sentence:

    "Linux right now doesn't scale horribly well to huge enterprise servers. But it still manages to replace them fairly effectively, by allowing scalability on the level of the machine instead of components."

    What do you mean by "scale horribly well", and what's the difference between scaling component-wise and machine-wise? Scalable means scalable, period.

    Can you be a little more technical? This sentence was hugely vague.

    (I don't agree with rating this comment as insightful)
  • Linux is approximately the same age as NT, 7 years. Give or take a couple of opinions.

    For the Hyperactive types among us:
    Linux was not created to compete with anything. It was not created to 'overthrow' Microsoft, or anything so egotistical. It was created by people who needed it to do 'stuff', and do it reliably and cheaply and well. It still is, as far as I can tell.

    Why all this talk about competition, Linux must beat NT, Linux is better, Linux this, Linux that? Why not just use it if it can do something for you, and not worry about what the rest of the world thinks?

    Linux doesn't need market share to survive, folks, and it doesn't need acceptance by the enterprise-level corporations. Its Cast of Thousands who maintain various aspects of it will continue to do so whether Linux can beat NT on every benchmark or not... they will keep Linux going primarily because THEY use it, not because YOU use it.

    Frankly, I think the gun was jumped in this 'race' with NT. Competition was created where none is necessary, or expected, or wanted.
  • From what I understand, pgcc is based on a hack Intel made to gcc back around when the Pentium came out. Lots of pgcc has been merged into egcs, and I hear that Intel is working on Merced patches for egcs.
  • "Hopefully your words will convince IT managers everywhere to stop betting their bread and butter on mediocre solutions and people."

    With the current labor shortages in the computer/IT field, most companies are happy to have a warm body that can reboot a computer working for them. That's why companies that provide extra support -- consultants and vendors -- can charge so much money for support.
  • Are you trying to imply that maddog is a liar? :p
    But seriously, the best people to conduct benchmarks are probably not people actively involved in one OS or another. Considering that maddog is the executive director of Linux International and a senior leader of the UNIX Software Group at Compaq, his opinion, although I would trust it, might be considered biased by other groups.
  • by oldzoot ( 60984 ) <morton.james@com ... inus threevowels> on Wednesday July 14, 1999 @07:39AM (#1802796)
    I, for one, would like to see a set of benchmark results for Linux that would help a person make decisions about the configuration of hardware platforms for systems. I would like to see a set of test results for different types of system activity, such as compiling code, raytracing/graphics/visualization, file system access, network bandwidth, combined network/filesystem access, etc. This set of measurements could then be run on a variety of hardware types, providing the basis for cost/performance decisions in the implementation of systems. One could answer questions like: How does a K6/233 compare to a P5/233? How does a P5/233 compare to a P6/450? How much difference is there between an Ultra ATA disk system and SCSI? Does the difference change with different processor speeds? Does 1MB of cache make a difference for what I want to do?
    If implementing a cluster, does saving $2K per box make up for the difference (providing money for more boxes) if you use ATA and slightly slower CPUs, rather than higher-end platforms?

    I would love to see a single ( and evolving ) location for this kind of info. Hell, I would love to work on compiling it. I have seen some sites with benchmark info, but nothing that seems to try to answer specific questions. The sites that I have seen present specific objective numbers but it was hard to derive any context for the differences between systems.
  • Actually, Microsoft explains the situation with Hotmail here [microsoft.com].

    And homepages.msn.com is actually not a Microsoft-run service but an outsourced one (I don't remember who actually hosts it).
  • yada yada ...

    "plain evil"

    yada yada ...

    "division between rich/poor is widened"

    yada yada ...

    "free ride in this world"

    yada yada ...

    "dictatorial rule"

    yada yada ...

    "communists... Russia ... Soviet bloc ... democracy and freedom"

    yada yada ...

    "right to be angry"

    yada yada ...

    "stupid tests"

    yada yada ...

    "BSOD"

    yada yada ...

    "people who are paranoid"

    yada yada ...

    "industrial espionage"

    yada yada ...

    "arrogance and lack of tact about facts"

    yada yada ...

    "afraid to get hate mail?"

    Is it any wonder that no one listens to you? You are full of crap, buddy boy! :)

    When are you starting the terrorist bombing campaign? I bet you are an extremist right-to-lifer too.

  • So are you saying multiple network cards in a machine are "non-real-life"?

    Maybe in your uni lab and garage, fella, but computers are used in more places than that, dude.

    I think that you might be living in a "non-real-life scenario"!! :-)

    ROFLOL!!!

  • Ever heard of hardware failure or application failure? The world doesn't revolve solely around OS stability, you know... (And if you think your hardware is stable, try configuring it with a few GB of RAM...)
  • Call me a cynic (after all, I am paranoid for Linux's sake), but MS is probably holding off on moving hotmail.com and homepages.msn.com to Win2000/IIS until the optimal marketing moment sometime during Win2000 rollout.

    "See, we've dropped Apache and are moving to our super-reliable, super-scalable Windows2000 with IIS. And we're now making that same power to your company for just $200!*"

    --LP

    "* Terms and conditions may vary according to the license purchased."
  • And that's why you are stuck using a substandard solution -- if only your company cared enough to hire someone clueful. I usually define clueful as someone who isn't constantly getting paged out of bed at 3am because the mission-critical application running on NT died when that week's new DoS attack for NT from Bugtraq just met your server. Don't get me wrong. Hopefully your words will convince IT managers everywhere to stop betting their bread and butter on mediocre solutions and people.
  • by AssKey ( 65520 ) on Wednesday July 14, 1999 @07:24AM (#1802804)
    I liked the fact that instead of reading an article about how messed up and skewed the benchmark was, I read an article that suggested things that can be done to better arm ourselves the next time something like this pops up.

    Good job.
  • Who cares how Linux compares to NT? If I wanted NT, I would run it... I do sometimes, through VMware. I use Linux because it's free, because it's open source, and because it's a good way for me to learn UNIX and programming in general. Linux is NOT going to be a Windows clone, nor is it going to achieve "world domination." I use Linux because I like hacking at it; I think it's great that office suites and the like are beginning to be supported on it, but it's not a concern. If I wanted to use the best network server available, I'd use (and I do) FreeBSD... which Slashdot recently switched to, I believe (and which runs Yahoo!, M$'s own Hotmail, and pretty much every serious hardcore site you can think of). Linux IS NOT a Windows killer; it is simply another (albeit WONDERFUL) alternative. If you think -- and I hate Windoze bugginess too -- that Linux is going to replace or defeat Windows, you're F***ing nuts. Linux is a good, free way to learn UNIX and programming languages; now, with the support from HP etc., it is a system that people can actually get (office-suite style) work done on. Nothing more. Am I concerned that NT beat it? NO. NT is generally crap -- I run NT through VMware for no other reason than that it provides a convenient interface to /dev/lp0 for my school projects. If M$ came out with Office for Linux, it'd be gone... but then they'd hopefully release QuickTime and MS Media Player for Linux too :-)
  • here [slashdot.org]
  • People like Jon maddog Hall should be in charge of benchmarking. Then the world would be a better place.

  • Linux needs to become a good enterprise server. Firstly, to run the enterprise applications (e.g. Oracle) that are being ported to Linux.
    Secondly, to run them better than NT and beat NT in enterprise computing benchmarks.

    The problem is that in order to achieve this, the linux gurus need to have access to enterprise hardware, which is *very* expensive. How many hackers can afford $10,000 for a development machine, let alone $100,000?

    There are two potential solutions to this. One is serious investment by successful Linux distributors (e.g. Red Hat & SuSE); the other is support from the hardware manufacturers.

    This support could either be in the form of donations / loans of hardware, or by employing gurus to tune Linux for enterprise hardware.

    If this does not happen, then M$ will keep tuning NT for high-end hardware and we'll continue to see results like the Mindcraft ones.
  • Well, take a look at this, Anon Coward...

    http://www.heise.de/ct/english/99/13/186-1/
  • Windows NT vs. Linux, sponsored by Microsoft... was anybody really surprised that NT won by such a large margin? You can't trust results like that when the sponsor happens to be the rival company.

    Another test was done recently to confirm the results from the Mindcraft test... and although NT still won, the margin was significantly smaller. The details of why Linux lost out are a bit technical; however, the causes of the problems were immediately identified. And there is even a beta patch for the problems available on the site; it came out a couple of days after the problem was identified. In addition, the problems will be resolved in the next kernel release.

    Just try to get that type of response out of M$; it takes them months to recognize that a problem exists, let alone come out with a fix...

  • Linux on an E4000! Great! What's your experience with it? Does it scale well?
