
Linux Kernel Performance: How Will 2.6 Measure Up? 214

An anonymous reader writes "This story offers some interesting performance comparisons between the latest stable Linux kernels (2.4.x) and the latest development Linux kernels (2.5.x), comparing performance on both a single processor and dual processors. These numbers help validate that the upcoming 2.6 kernel will outperform the current 2.4 kernel, at least in some instances..."
This discussion has been archived. No new comments can be posted.

  • It's faster. (Score:2, Informative)

    by Anonymous Coward
    Yes, it's faster for most tasks, especially interactive and high-load tasks.

    But how is this news? Ever since the thread on Kernel Notes a month or so ago, most of us have known this.
  • by IamTheRealMike ( 537420 ) on Monday December 02, 2002 @08:04AM (#4792373)
    The thing I'm most looking forward to is the better scheduling under heavy disk load. This'll hopefully make Linux a lot more responsive when compiling software, at the moment my machine can get bogged down and jerky when doing this.

    Of course, the real solution would be to not need to compile software (plug plug :)

    • by sTeF ( 8952 ) on Monday December 02, 2002 @08:21AM (#4792424) Homepage Journal
      why don't you use 'hdparm -X?? -d1 /dev/hd?'?
      That'll enable the ATA/DMA features, and everything will be much faster...
      • by Bios_Hakr ( 68586 ) <xptical@g3.14mail.com minus pi> on Monday December 02, 2002 @09:43AM (#4792763)
        I know Mandrake has done this for a while. I think RedHat does the same. I can't remember with Gentoo, but I did try some hdparm flags and didn't notice any real change.

        Basically, do 'hdparm /dev/hd[x]' and look at the output. It will tell you which modes are in use for the current drive. Then do 'hdparm -t /dev/hd[x]' and see how fast your drive is running. Try different optimization flags and test after each to find the best settings.

        You can even use it to test CD-ROMs and RAID arrays. Just remember that when you optimize an array, you want to optimize each disk (/dev/hd[x], not /dev/md[x]) separately, but test the array as a whole.

        One other note: the '-t' flag, like most synthetic tests, may not show the best settings for the drive. A lot of the time a timed kernel compile (or my new favourite test, a Mozilla 1.0 compile) will reveal benefits, or detractions, not shown in a synthetic benchmark.
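
        For illustration, a tuning session might go something like this (the device name is just an example; check what your drive and chipset actually support before forcing a transfer mode):

        hdparm /dev/hda      # show current settings (look for using_dma)
        hdparm -t /dev/hda   # baseline read timing
        hdparm -d1 /dev/hda  # try turning DMA on
        hdparm -t /dev/hda   # re-test and compare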
      • They are both already turned on but everything is STILL unresponsive under heavy disk load.
      • by chabotc ( 22496 ) <chabotc@gmailDEBIAN.com minus distro> on Monday December 02, 2002 @01:10PM (#4794288) Homepage
        What often helps a lot is adding a -u1 (unmask irq). From the man page:
        "A setting of 1 permits the driver to unmask other interrupts during processing of a disk interrupt, which greatly improves Linux's responsiveness"

    • Autopackage sounds a lot like my pet project Linstaller. I stopped development a while back to get my CCNE and haven't restarted it since. One problem I ran into was what libraries you could expect to be installed on any given platform. Sure, there's the LSB, but does the LSB specify a base set of packages that make up a desktop or a server?

      My aim was a little different from yours though. I was going for complete binary packaging from beginning to end. No source building, as automated ./configure; make; make install runs tend to produce distro-specific code. Instead I left the cross-distro compiling up to the packager. All I provided was an archive format and a self-extracting GUI or command-line installer that totaled under 50k of overhead. I stopped around the middle of implementing the scripting language backend. I didn't like the way it was going, and as I said earlier, CCNE was calling to me.

      Maybe I should start it back up. It's not like I have much else going on lately. hmm...
      • by IamTheRealMike ( 537420 ) on Monday December 02, 2002 @09:28AM (#4792694)
        One problem I ran into was what libraries you could expect to be installed on any given platform. Sure, there's the LSB, but does the LSB specify a base set of packages that make up a desktop or a server?

        Nope, you're right, but autopackage can figure out what libraries are present and retrieve (assuming they've been packaged) the libraries from a DNS style distributed network, apt style.

        My aim was a little different from yours though. I was going for complete binary packaging from beginning to end. No source building, as automated ./configure; make; make install runs tend to produce distro-specific code.

        Hmmm, how did you get the impression that autopackage is source based? A .package is a binary package from end to end, the user doesn't need to compile anything.

        All I provided was an archive format and a self-extracting GUI or command-line installer that totaled under 50k of overhead

        We're using a similar idea, except the scripting language and front-end code is external and installed on demand when you run a .package file (if it's not already present), to minimize package file bloat.

        Maybe I should start it back up. It's not like I have much else going on lately. hmm...

        If you're interested in the problem, please take a close look at autopackage first; feel free to hop onto IRC (freenode #autopackage) and talk to us. We're normally around in the evenings GMT (both of the core developers are in Europe). It'd be a shame to duplicate effort when our projects sound so similar.

    • by Anonymous Coward
      Amazing, I've been running FreeBSD since 2.8 and I've never had an unresponsive system, even while doing a buildworld; I guess the 2.4 kernel is a lot worse than imagined.
      Hopefully it will be solved for you Linux guys in 2.6.
      • by pkplex ( 535744 ) on Monday December 02, 2002 @09:11AM (#4792603) Homepage
        Yep.

        I first tried FreeBSD about a month ago, and that's exactly what I noticed about FreeBSD. Smooooooooth.

        For example, in Linux (2.2.x and 2.4.18+) I found that when something demanding was going on (like building Mozilla, the kernel, or suchlike), X11 became all choppy (the mouse stuttered, typing lagged in bursts).

        Not so with FreeBSD. Many times I've had GnomeICQ hit a bug and use 100% CPU, but I was unaware of this until days later when looking at top.

        A few days ago I installed FreeBSD onto my P100, with 64MB of RAM. Playing around, I ran many, many dnetc's by having thousands of 'nohup ./dnetc &' lines in a file and executing it.

        At a load of 350 the P100 box was still very happy to do what I told it, and with very surprising responsiveness. However, once the load got up to 450, my ssh connection to the box was terminated, and I had to restart sshd locally. Which is fair enough, I guess... one will run out of swap and RAM sooner or later.

        I can recall doing this same dnetc thing with Slackware, running 2.4.something, and after a short while at load 50 I started getting seg faults every time I ran a command.

        Until Linux shrugs off huge loads effortlessly and in a stable manner like FreeBSD does, it's not going to live on my boxen :)

        Tux needs to get fit and learn how to balance on one leg properly :)
          • OK, I have had it up to here with all this FreeBSD worship. After putting up with this for a long time from one of my good friends, who happens to be a BSD bigot, I made the mistake of wiping out my Mandrake 8.2 and installing FreeBSD on my box.

          After a few months of that, I am back in Mandrake 9.0 with relief and no regrets. Why?

          1) I found that, for the things that I do, FreeBSD offered no advantages at all. Performance and stability were no better than Mandrake 8.2. In fact, under heavy loads, my experience is that Linux 2.4.x is much better. (I run lots of Octave math simulations and lots of Fortran number-crunching programs, often several at a time.)

          2) For people used to working with linux, there are lots of annoyances to working with FreeBSD. I missed the convenience of RPMS. Many of my favorite programs did not compile properly.

          3) When push came to shove, my friend had no suggestions as to why the FreeBSD install did not perform as well as Linux, except to tell me that I must be mistaken in how well the Linux install performed! Duh!

          Now, under some circumstances it may well be true that FreeBSD does outperform Linux. But I could not care less. For the work I do (mostly on the desktop, running simulations, running Mozilla and xine), Linux is demonstrably a better system than FreeBSD.

          Magnus.
          • OK, I have had it up to here with all this FreeBSD worship.

            Now you know what it feels like to wade through all this Linux sycophancy. Geez, reading Slashdot is like reading about how Linux can do anything, including your laundry.

            Speaking of laundry, your laundry list of complaints boils down to one item:

            1) FreeBSD is different from Linux. Duh! First, optimize the system and applications for your compiler. Second, learn how to use ports instead of assuming that anything that isn't RPMs must be bad. Third, who the hell needs performance during a system install?
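
            (For the curious: installing something from ports is usually no harder than, say, 'cd /usr/ports/www/mozilla && make install clean', and 'pkg_add -r mozilla' grabs a prebuilt binary package instead; the port path and package name here are just examples.)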
          • What you describe definitely stinks of a misconfigured IDE drive. As stated above, use hdparm (Linux) to set the DMA mode on your drive. This will greatly improve performance.
        • yes!

          It sucks, but that is your fault. No, hear me out; this isn't one of those "so fix it" rants:

          I've been building my own kernel since 1.2.13 days. I've [until recently] never built a crap kernel (sometimes left things out I wanted in, like sound, but it always worked smoothly, even under load).

          I recently tried to roll my own 2.4.{7,18} kernels under RH 7.2, and it did exactly what you describe. The slightest bit of IO concurrent with load would stutter up the entire system.

          However, Red Hat's kernels (based on the same version) would NOT have this problem. Smooth as Astroglide on a banana peel.

          So the conclusion is that the kernel broke sometime during the 2.4 task-switcher/VM debacle; not in a "no longer works" sense, but rather in that deep wizardry is now needed to build a "good" kernel. Obviously you and I forgot to check the "do not fuck up" box.(*)

          I would totally go for *BSD, but a clean RH install works OK, and I judge the overhead of applying updates to keep the system secure to be less than that of a complete OS shift and learning to administer a new not-quite-the-same system.

          (*) The first instance of the "do not fuck up" box was found in the LaserWriter driver for System 7.x: if you unchecked "download fonts", then _nothing_ would come out; otherwise it worked like a charm. Likewise, Acrobat Reader has an "avoid Level 1 PS like the plague" option that lets you print what appears on screen even when it contains Greek letters.

          The problem is that each configuration dialog has this box hidden as something else, but it is always there, somewhere.
      • by marm ( 144733 ) on Monday December 02, 2002 @10:07AM (#4792909)

        Amazing, I've been running FreeBSD since 2.8 and I've never had an unresponsive system, even while doing a buildworld; I guess the 2.4 kernel is a lot worse than imagined.

        This is, by and large, the fault of the scheduler, largely unchanged in 10 years and described by Linus, even whilst he wrote it, as a 'hack'. However, it worked, and Linus, being the extremely sensible and conservative maintainer that he is, kept it until recently - process schedulers are difficult things to get right, and their performance is crucial to the performance of the kernel as a whole. Not to mention that for the tasks that Linux has been used for historically, primarily low-volume server tasks on low-end hardware, it isn't really a bottleneck.

        Still, the scheduler has been gutted and rewritten for 2.6 by Ingo Molnar - the now somewhat-famous O(1) scheduler, which performs much more fairly under load, and dispenses with almost all of the strange pauses and scheduling glitches under load. Current vendor kernels based on 2.4 (Red Hat's and SuSE's at least, I think) have had the O(1) scheduler backported to them as well. In fact, if you're running near enough any current 2.4 kernel other than mainline, you get the O(1) scheduler and your share of scheduling fairness.

        The new scheduler is also a fundamental basis for Linux 2.6's new NPTL 1:1 threading, which has so far proved spectacularly (record-breakingly?) fast. Hmm, on second thoughts, perhaps I probably shouldn't mention threads and FreeBSD in the same post. I mean, isn't this the same FreeBSD that's still waiting for a single half-decent pthread implementation? Oh well, better hope 5.0 is out soon...

        • Funny that you should mention the O(1) scheduler - I think it's one of the best things out there. Both my boxes are SMP, with RH8 and a homebrew 2.4.19 kernel plus the scheduler patch. A typical workload includes a few concurrent compile jobs, reading /. with Mozilla, and enjoying some MP3s - all at once. The older box performs without a hitch - it's dual PPros with 1MB caches and 128MB RAM. It keeps up with my Dad's new P4 on XP for all the usual office-type apps and stuff. I haven't been able to get my head around NPTL yet, though I'd love to get that going. Any good HOWTO-type links for NPTL? Thanks.
      • As much as I love Linux, I have to admit, I ran FreeBSD on a desktop for a while and it impressed the hell out of me. X11/Gnome and Mozilla ran at least 3x faster on the same machine, and under the heaviest loads it was still always responsive. It was the only system I ever trusted to burn a CD while I continued to do other things (under Windows and Linux, I tend to leave the box alone during burning).

        Of course back in the 2.2 days, when 2.4 was on its way, 2.4 was touted as being better able to recover from heavy loads. We ran several Linux web servers, and a bad CGI (or a good /.ing) was too easily able to cause a downward spiral, ending in an inevitable hard reset.

        2.4 improved that significantly, and even my home boxes noticed the difference. Another poster mentioned a new scheduler finally going into 2.6, so perhaps this will improve even further. Whether it will be as solid as FreeBSD or not, any improvement will still be great...
    • by cras ( 91254 )
      I've tried 2.5.48 and 2.5.49 and gave up on both, mostly because when compiling software Galeon got horribly slow, especially with scrolling; MUCH worse than the 2.4 kernels. Giving a lower priority to the compile job made it better, but I'm too lazy to type the extra nice command before make...
    • by Uggy ( 99326 )
      Maybe you should try
      nice -n 19 make && nice -n 19 make install

      No responsiveness problems, AND your computer uses all those spare cycles while you are doing other things.

      I do this all the time and desktop responsiveness doesn't bog down at all. Kernel doesn't take much longer to compile either.
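
      If a build is already running, renice can do the same thing after the fact, e.g. 'renice 19 -p <pid of make>'; any compiler processes started after that inherit the new priority.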
  • Quick question (Score:5, Interesting)

    by Anonymous Coward on Monday December 02, 2002 @08:07AM (#4792379)
    What is the weakest-specced machine that anyone here is getting productive/useful work with Linux done on? Do people use Linux on 486s at 12MHz? P75s? Just curious.
    • I'm using a P133 w/ RH Linux 7.3. It works fine (very quick) unless compiling, and I even use X on it.
    • by Anonymous Coward on Monday December 02, 2002 @08:16AM (#4792412)
      What is the weakest-specced machine that anyone here is getting productive/useful work with Linux done on? Do people use Linux on 486s at 12MHz? P75s? Just curious.

      I am currently using a dual Athlon MP 2400 system with 4 GB RAM and a 10-drive RAID setup. I find performance acceptable, although I am looking to upgrade from my circa 1983 Hercules MGA card sometime soon...while the onscreen text is clean and crisp, I have found the preview of Doom III to be virtually unplayable.
    • Re:Quick question (Score:1, Interesting)

      by Anonymous Coward
      I am running Red Hat 7.0 with a vanilla 2.4.19 kernel on an HP Vectra 486 at 66MHz.

      It is not the fastest server in the world, but I use it as a router for my ADSL/cable connection, and I am very happy with it.
    • A friend of mine once had SuSE 5.1 on an IBM PS/2 laptop with 6 MB RAM and an 80 MB hard disk (or so). That was in 1998, I think. He used it for C development, writing TeX (only writing! not compiling; elvis' preview function was enough for him) and playing text-based games. He even had networking, via SLIP. If the power supply unit hadn't developed a loose contact, he would maybe still be using it.
    • Well, I'm still using my overclocked 486SX/25 (now at 33 MHz) with 16 MB RAM everyday. It's running a distribution built from scratch (http://www.linuxfromscratch.org [linuxfromscratch.org]) and it has got all the speed I need.
    • 386 SX-16 with 4MB ram. Was used as a tiny samba-server.

    • Re:Quick question (Score:5, Insightful)

      by BadDoggie ( 145310 ) on Monday December 02, 2002 @08:26AM (#4792452) Homepage Journal
      486 or P75? Sure. Mail server. Firewall. File server. News server. Burner. CD library.

      Hell, a P75 works fine as a Windows NT4 PDC for a small network and can also handle low-to-medium file serving for around 20 users at the same time.

      Then there's the idea of using Linux network client stations, as in "How to create a Linux-based network of computers for peanuts [linuxworld.com]", to which this site linked more than a year ago. This system can even make use of 386s -- I've already tried it. True, performance is a bit slack, but just how much power do you really need to write documents? A network-based 386 (or one running Slack 2.x) with Abiword or maybe pico/vi/emacs (some people do actually like those) works just fine.

      woof.

    • I currently run Linux on some oddball Olivetti server with dual P100s. Works perfectly (so far; haven't tested the SCSI tape drive yet), even though I had to compile the kernel on my main machine. It would have taken 2 to 3 hours on one proc (SMP wasn't compiled in yet), which really isn't the most viable solution. Maybe I'll dig up a few 486s, restore them, install some older stuff on them like the 2.0 or 2.2 kernel, and try to sell them as cheap fileservers/webservers/routers?

    • Re:Quick question (Score:2, Interesting)

      by mvdw ( 613057 )

      I have on my network at home the following ancient hardware:

      • 486-DX 33 w/32MB RAM, running as dialup firewall for 2-3 users;
      • 386-DX40 (with 387 co-pro), 20MB RAM, running Slackware 8.0 (kernel 2.2.19 I think, although to be honest I don't turn it on all that often). I use this machine basically as a SCSI box to format external SCSI disks for my HP PA-RISC adventures.

      I also have some other odd hardware on my network that is so ancient that Linux won't even run on it (two DEC TurboChannel machines, one MIPS-based and one Alpha, both running NetBSD)...

      I acquired all this hardware when it was being thrown out by various people - bang per buck is essentially infinite

    • Re:Quick question (Score:3, Informative)

      by KjetilK ( 186133 )
      I remember using a 386 20MHz running X and Netscape and lots of stuff back in 1995, when I worked for the student union. It wasn't fast but we could work on it. Eventually, we whined about it and got a new box.
    • People don't just use Linux on desktops, they also use it on handhelds, wearables, and embedded systems. So, a 486 running at 12MHz isn't out of the question at all.
    • Well, we've got kind of a higher-end low-end system here, a dual P90 with 64MB EDO. It works great as a mail and firewall/masq server, and with about 10 users it can also handle some file serving/sharing, plus rsync backups, CVS and CD burning (4x, SCSI).
      Plus, with this crate I learned what the "EISA Bus Configuration" option in the Linux kernel is for...

      I have also had a 486DX/2 100 for the same purpose, ran great too, but had to decommission it because it wouldn't take larger HDs.

      (On a side note, I was able to recycle the 30-pin SIMMs from the 486 by putting them into the cache expansion slots on the P90s EISA SCSI controller card. I miss the old times.)
    • I host my personal web server on a P120 running Linux 2.2 on some version of RH (it doesn't really matter now, with all of the other upgrades on the box). It only has 32 megs of RAM, but it's happily serving web pages, an SSH server, a MySQL database, and all of the other services that I use from the inside of my network (telnet, ftp, atalk, etc).

      It's been running for years now without any problems -- it could probably use a bit more memory, but then again, swap space hasn't even been touched at this point.

      All in all, a very good box -- especially considering all of the power outages/brownouts that I've had the past few years. This box just keeps on going.
    • My weakest spec machine I regularly use is a P75 laptop. 16M memory, 750M disk.

      OS is Debian Woody.

      Desktop is XFree86/Ratpoison.

      Primary programs are ssh, w3m, centericq, and dillo. (Although I'm moving more towards 'View' in w3m, as well as email and newsgroups in mutt).

      Why? Because its nice to have a cheap sturdy laptop I can take anywhere without worrying too much about it.

      I also use a P166 at home for a fileserver/mailserver/newsserver/sambaserver/firewall/dialoutserver.

    • Well, I'm using a P166 w/ 32 megs of RAM as a firewall and NAT. And I'm only running the shell, no GUI. With X, that system would barely run, but with just the shell, it's really fast.

      I found RAM to be the limiting factor, rather than CPU speed. Thirty-two megs is the minimum Red Hat recommends for its 7.x system (which is what I have). The kernel plus modules and buffers seem to take up most of the available memory (according to /proc/meminfo). So I wouldn't want to try it with much less. And for networking, you're much better off if you don't have to swap stuff out to disk just to route a packet.

      I think you could get it to work with 16 megs, but with less I'd be worried that performance would truly suck. Does anyone know different?
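
      For what it's worth, a quick way to see where the memory goes on a box like this is 'free -m' or 'cat /proc/meminfo'; if swap usage stays at zero under normal load, the RAM you have is probably enough for the job.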

    • Re:Quick question (Score:3, Informative)

      by RealAlaskan ( 576404 )
      What is the weakest specced machine that anyone here is getting productive/useful work with Linux done on?

      I have a 486 (75 MHz, I think) laptop, with 12 MB RAM. It runs Debian Woody with X11 and the Blackbox window manager. I was using it yesterday to read slashdot and debianplanet. I was using the Dillo browser. Performance was slow, but tolerable. If I want to use emacs, I kill X, or else it starts swapping.

      The real holdup for speed is the lack of RAM, rather than the anemic processor, but as long as I'm careful about how many processes I have going, it's fast enough. The holdup for usability is the small 640x480 screen, and that's more a matter of it being an old laptop than of it simply being old.

      It's interesting to notice that this little box could run Win3.1, but isn't a speed demon with that either. Win3.1, of course, is old and single-tasking and insecure and nothing modern runs on it. Putting a modern, proprietary OS on this sort of old hardware is probably out of the question.

      Based on this, I think that any Pentium with a lot of RAM (more than 64 MB) would be just peachy for email/websurfing. Any Pentium with >32MB should be useable, if set up sensibly.

    • I have a 486DX-66 (33.18 bogomips, 24MB RAM) that I'm still using as a mail server. I used to also run 3 web sites and DNS on it, but since moved those to a different machine. (More a disk space issue than anything else.)

      I'm also running a SparcStation IPC (24.88 bogomips, 32MB RAM) as an internal web server and CD image server. (Running SuSE SPARC 7.3).

      Up until a few months ago my development machine at work was a P-166. (My main home machine is a dual CPU P3-550 with a half gig of RAM, and yes, I noticed the difference!)
    • I have a 486SX running X with FVWM on 20 megs of RAM in one corner, a P75 with 32 megs (no video, keyboard or mouse), and a Compaq Deskpro 4000 with 96 megs and a Trident video card running RH6.2 with *lots* of updates. My backup server has dual PPros and a lot of disk, since a decent tape drive is just too expensive. I standardized on kernel 2.4.19 with the O(1) scheduler patch for all my boxes, and built it from tarballs for each specific CPU type on my main workstation. (The main workstation has dual P3 Coppermines at 1 GHz and 1048 megs of RAM.) All of these machines can do the usual Linux office apps (NOT StarOffice -- it uses too many resources). You won't do too much multimedia on them though, and a kernel compile takes 8 hours on the 486. That's why I have the new workstation mentioned above.
    • Depends on what you use Linux for. I've set up a packet-filtering router for a 3Mbps wireless link using a 386 computer and 8MB of RAM. The entire thing boots from a floppy and runs everything from a ramdisk.

      Works fine. The hardware is more than adequate for shuffling a few Mbps from one nic to another while inspecting some ip-headers.

  • It may be faster... (Score:4, Interesting)

    by cperciva ( 102828 ) on Monday December 02, 2002 @08:14AM (#4792399) Homepage
    but will it corrupt my filesystem?

    Performance is important, certainly, but I think some people (*cough* overclockers *cough*) assign it a bit too much importance.
  • SMP (Score:4, Informative)

    by e8johan ( 605347 ) on Monday December 02, 2002 @08:15AM (#4792407) Homepage Journal

    It looks like the new kernel better utilizes multiple CPUs. This is a great thing. Linux needs better support for SMP systems if it is going to play with the big kids in the high-end server market. (I know, Linux is partially there).

    • SMP is overrated (Score:3, Interesting)

      by g4dget ( 579145 )
      Yes, there is some market segment that really swears by SMP. But beyond dual processor machines, the hardware cost and engineering complexity grow disproportionately with the number of processors. Most of the SMP market is driven by companies facing excessive software license costs for multiple machines, or by companies that can't figure out how to get their current architecture ported to a cheaper distributed system.

      When it comes down to it, you only get cost-effective scalability by using distributed systems or clustering. In fact, for really large systems, it's the only possible way at all.

      Something like OpenMosix [openmosix.org] should really be a standard part of the Linux kernel already, as should other support for simplifying clustering, distributed computing, communications, and distributed shared memory. Distributed systems and clustering are the future, not SMP.

      • SMP still has advantages if a number of processes need to share lots of memory. For other, more suitable problems, Mosix rocks!
        • With gigabit networks on one hand, and the cost of many-processor SMP systems and cache effects on the other, it isn't clear that there are any problems for which SMP is the most cost-effective choice to achieve a given level of performance.

          Mosix, of course, is no substitute for the kinds of problems in which many processes share a lot of state, but other approaches are (including distributed shared memory and various other communications libraries).

      • You're missing out:
        Latency: Mosix is just too unresponsive compared to an SMP option.
        Time: the more people that buy SMP boxes, the better they will get. The MHz wars and Windows killed off home SMP; if Intel had invested in SMP design instead of MHz, there'd be cheaper, cooler SMP machines out there.
        • Latency: Mosix is just too unresponsive compared to an SMP option.

          Latency depends on the problem and the design of the distributed system. For problems where Mosix is an alternative to SMP, latency doesn't enter into the picture at all because Mosix processes are usually only loosely coupled and because network and disk I/O migrates with the processes.

          For problems where IPC latency is a performance concern, it can almost always be dealt with even in a distributed system through better design. The money you save on overpriced SMP designs more than lets you make up for any remaining performance losses from network latency (and I'm not convinced that with modern networking technologies, there are significant losses anyway).

          The more people that buy SMP boxes the better they will get,

          No, they won't. There is no magic tooth fairy of SMP, and you can't scale SMP indefinitely. If you put, say, 64 processors onto "the same memory", they aren't really on the same memory anymore--you are just paying for a very expensive box with a bunch of hamstrung processors inside that are essentially doing distributed shared memory over a special-purpose network.

          • "you can't scale SMP indefinitely."

            Well, you're kind of wrong.
            x86 is crap for SMP, mainly because of the way memory and cache are handled. I wouldn't put more than 4 x86s in an SMP configuration.

            Other architectures support point-to-point buses, better cache handling, more memory channels/controllers, etc... you could probably scale to 100 or so processors with that type of architecture.

            Now, if some of the patents Cray has were available, then you could scale to thousands of processors, where memory controllers record pages that have been written to and only sync on demand (when there's a read request for a page that's been written to by another processor).

            Basically, x86 sucks for SMP (which is probably why it's such a cheap architecture).
            • Other architectures support point-to-point buses, better cache handling, more memory channels/controllers, etc... you could probably scale to 100 or so processors with that type of architecture.

              That's an illusion. If there is any significant per-processor caching going on, you basically have a distributed system with a fast, non-standard network in between, and a costly, complex page fault handler hardwired in hardware.

              You can achieve the same thing more cheaply with distributed shared memory and a standard fast network. That is, instead of building lots of expensive, inflexible, special-purpose hardware, you treat the main memory of each machine as the "per-processor cache" and you do the synchronization in software over the network using page fault handlers. It simply makes no economic sense to put something that complicated and costly in hardware, in particular if the market for it is so small.

              Note that architectures like MIPS already do page fault handling in software, so those kinds of software-based approaches are competitive with hardware implementations.

      • by rikkus-x ( 526844 )
        SMP is not overrated for CPU intensive tasks.

        For example, I can do distributed compilation and get about 140% speed with 2 identical machines, but with SMP I can get more like 180%. It's cheaper to buy an SMP motherboard, 2 CPUs and some slightly more expensive RAM than a whole other machine.

        Dual Athlons are great, price/performance, for compiling large C++ projects (where g++ needs lots of CPU for each file.)

        Rik
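
        In case anyone wants to try the two-machine distributed build, one common setup (using distcc, assuming that's the tool in question; the hostnames and -j level are just examples) looks roughly like:

        export DISTCC_HOSTS='localhost otherbox'
        make -j4 CC=distcc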
      • When it comes down to it, you only get cost-effective scalability by using distributed systems or clustering. In fact, for really large systems, it's the only possible way at all.

        Three years ago you would have been right, but today the cheapest way to (nearly) double your computing power is to put in a dual processor board. I.e., the day of the home dual-processor has arrived. For example, you can now get a dual processor Athlon board [iwillusa.com] for $200, and in spite of what the docs say, you can put $50 processors in it instead of the $500 big brothers AMD recommends.

        It's only a matter of time before you start seeing 3D games that can take advantage of dual processor configurations. In fact, they already do in the sense that a single-threaded game can load up one processor 100% and your box still remains entirely responsive for other applications. That is, you can play Return to Castle Wolfenstein at the same time you run a compile.

        • Or, when your one user that insists on running rtin on the server quits, and rtin spirals upward to 99% CPU usage, the machine is still usable by everyone else because rtin is only pegging one CPU. Replace rtin with any other program that does that (though the kernel is getting good at murdering rogue processes).

  • by Negatyfus ( 602326 ) on Monday December 02, 2002 @08:15AM (#4792409) Journal
    more resistant to the /. effect?
  • BSD? (Score:4, Interesting)

    by pkplex ( 535744 ) on Monday December 02, 2002 @08:32AM (#4792471) Homepage
    I wonder how well it compares to the BSD kernels, in both performance and stability?
  • by selderrr ( 523988 ) on Monday December 02, 2002 @08:32AM (#4792472) Journal
    These numbers help validate that the upcoming 2.6 kernel will outperform the current 2.4 kernel, at least in some instances...

    By consequence: These numbers help validate that the current 2.4 kernel will outperform the upcoming 2.6 kernel, at least in most instances...

    Sometimes I wonder if anyone ever thinks before posting. Especially editors. Come on, dudes, if you don't specify at least the importance, and preferably the specs, of those instances, you'd better stfu.
    • by Tim C ( 15259 ) on Monday December 02, 2002 @09:01AM (#4792564)
      By consequence: These numbers help validate that the current 2.4 kernel will outperform the upcoming 2.6 kernel, at least in most instances...


      No; you've forgotten the other possibility - that in the majority of instances, performance will be the same.

      Note that I've not read the article - I am merely correcting your interpretation of the sentence you quoted, given that it is apparently "insightful", despite being incomplete...

      Sometimes I wonder if anyone ever thinks before posting.

      Oh, the irony :-)
    • by doodleboy ( 263186 ) on Monday December 02, 2002 @09:04AM (#4792577)
      By consequence: These numbers help validate that the current 2.4 kernel will outperform the upcoming 2.6 kernel, at least in most instances...
      Actually, comparisons between 2.4 & 2.5 are meaningless, since the current development kernel will undoubtedly be very different from the 2.6 that ships next year.

      My _guess_ is that the 2.6 kernel will seem more responsive due to the pre-emptive kernel, etc., and that another six months of performance tuning will get 2.6 very close to, if not past, most 2.4 benchmarks. But the fact is, the linux kernel generally is mature enough that really big improvements are getting harder to come by.
  • by MosesJones ( 55544 ) on Monday December 02, 2002 @08:35AM (#4792481) Homepage

    So just to summarise: the newer versions, which have focused on SMP development and performance, will be better than the old ones. I welcome the efforts of these people; they certainly know their stuff. But this isn't really a surprise. A surprise would be "2.6 is slower than 2.4 for SMP", which would mean that someone somewhere has made a VM-style error.

    There are lots of other advancements to talk about, and performance has never really been regarded as a weakness for Linux (except SMP). As hardware is so cheap and keeps getting faster, do these increases really merit the effort? They do, because of the new structure and clean-up that also results; but when 18 months pass and your processor speed doubles, why does an extra few percent from your OS actually matter?

    What matters is that this newer code is better quality than the older stuff; it's better constructed while doing a harder task. This reduces the maintenance effort on Linux, which is a Good Thing(tm). It always seemed to me that the code would be faster; the great thing here is that the code also seems to be better.

    Kudos to all involved for proving that fast != unreadable.
    • why does an extra few percent from your OS actually matter

      This is not just about a few percent higher speed, this is about (much) better responsiveness under high loads. This is about the difference between being horribly slow or just plain dead when being slashdotted.

  • by BabyDave ( 575083 ) on Monday December 02, 2002 @08:37AM (#4792484)

    linux hacker 1: i'm bored.
    linux hacker 2: let's re-write the whole kernel!
    linux hacker 1: ok.

    *hackety-hack*

    linux hacker 1: wow, it's 0.00001% faster and takes up 1kb less space!
    linux hacker 2: w00t.

  • Measure up (Score:4, Funny)

    by RealBeanDip ( 26604 ) on Monday December 02, 2002 @08:59AM (#4792561)
    "Linux Kernel Performance How Will 2.6 Measure Up?"

    I can only imagine the spam that Linus gets...

    "Increase the length AND thickness of your kernel"
  • by ACK!! ( 10229 ) on Monday December 02, 2002 @09:47AM (#4792781) Journal
    The kernel speed is just one piece of such a large system performance puzzle that I hope people don't get too psyched hoping to see big improvements in terms of speed from their distro out of the box.

    Everything matters, from file system speed to DMA to X to window managers to widget class speed, right down to the applications themselves. You can have a damn fast core system and still have a system that feels slow, especially on a desktop box.

    Take everything into consideration first. Personally, I would be happier if they locked the XFree86, GTK, Qt, Mozilla, OpenOffice, KDE and GNOME guys in a damn room and didn't let them out till they came up with some serious ways to improve overall GUI response for the total user experience.

    My SMP app servers will be happy though.
    • Generally, I've had little to complain about with my Linux desktop box performance over the past few years.

      Hardware being as cheap as it is, there's little reason to be dissatisfied with Linux performance for 99% of desktop users.

      The 2.6 kernel enhancements seem to be heavily oriented towards increased performance on enterprise level servers. It's pretty much as if most kernel developers know that Linux desktop performance is not a burning issue.

      And with Linux gaming a virtual non-market, the only way I see for interactive desktop issues to be further addressed in the Linux kernel is from the embedded area, where interactive response for humans is still alive and well as a problem.

      When Linux desktop users start using demanding applications, such as video editing, there'll be more attention paid to performance. Maybe we'll even get "X12" or "Y" instead of X11R6.5.3.2 :)

  • by mcbevin ( 450303 ) on Monday December 02, 2002 @09:54AM (#4792830) Homepage
    Looking at the results, I'm left a bit confused and not exactly impressed.

    In fact, if I count correctly, in 13 of the 23 tests 2.5 was slower than 2.4! It also didn't seem like the margins where it was faster tended to be larger.

    Looking at the second set of results, comparing SMP with UP, it seems that 2.5 does worse with SMP than 2.4 (for example, xtar_load is twice as slow in 2.5 under SMP). This is again the opposite of what everyone seems to be saying and what these tests are supposed to be proving!

    Without some overall summary, it's damn difficult to draw any meaningful conclusions, but my impression is that it appears no faster and in fact generally slower. No huge surprise in my opinion that Linux is getting slower (look at Windows' history), but why is it not possible to first generate some meaningful statistical conclusions from this data if you find it important enough to post on Slashdot? It might save on the hundreds of meaningless comments the article seems to have generated.
  • by Fefe ( 6964 ) on Monday December 02, 2002 @10:03AM (#4792881) Homepage
    The tasks where it has gained most are memory management, networking, the scheduler. I was able to handle 10000 connections with my fnord webserver [www.fefe.de] without noticeable slowdown on the box. Previous kernels would start to become unresponsive with lots of processes running.


    Also, establishing connections is much faster. Multi-process or multi-threading appears to be more efficient than poll now for a large number of connections.

    • Also, establishing connections is much faster. Multi-process or multi-threading appears to be more efficient than poll now for a large number of connections.

      poll() has got scaling issues - this is known. Check out /dev/epoll - this is coming in 2.6 and available as patches in 2.4. This will likely be faster than multithreaded techniques. Combining aio for read and epoll for write will probably be the fastest technique, but it remains to be seen.

      • You are right, epoll scales even better.
        However, it is also Linux specific, while poll and fork/exec are not. While it is a good sign to know that Linux scales well for applications that were written and optimized specifically for Linux, it is even more important that Linux scales well for portable POSIX/susv3 applications.
  • by IGnatius T Foobar ( 4328 ) on Monday December 02, 2002 @11:07AM (#4793300) Homepage Journal
    More scalability improvements -- Linux is just getting better and better at handling heavy loads. I'm looking forward to the next batch of improvements.

    But, you just know that when 2.6 comes out, ZDnet will be saying things like "Linux now supports SMP." (Except for those ZDnetters who have received this month's check from Microsoft; those folks will be talking about how Windows 2000 outperformed Linux 2.6 in an "independent" benchmark test.)
  • My question is simple. Being faster does not necessarily mean that it is better. I would expect the new kernel to be more versatile and tuned with less powerful platforms in mind (e.g. StrongARM or other embedded system solutions). Computers are nowadays something like Swiss Army knives. In the future there will be specific hardware for specific tasks, and the Linux kernel must be there! The need for a project like the GNU Hurd must alert Linux kernel developers.
  • by Hoblin ( 629177 )
    I'm looking forward to the network failover and I/O failover features. The Device Manager looks pretty, too. But when are they going to provide a friggin' LVM? Optimizations are great, but I want features, damnit!
    • But when are they going to provide a friggin' LVM?

      Err... Linux 2.4 has included Sistina's LVM for some time. 2.6 will have a more generalized kernel interface, the Device Mapper, that will allow both version 2 of the Sistina LVM, and the IBM alternative, EVMS, to be built on top. Or at least, that's what Linus seems to have decided on for the moment.

      The Device Manager looks pretty, too.

      I think that perhaps you are confused. Device manager? And a pretty one, at that? No LVM? Hmm, ok. Maybe you need some spelling help: L-i-n-u-x spells Linux, not Windows 2000. ;)
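
      For reference, a bare-bones LVM session on a stock 2.4 kernel goes something like this (the partition and volume names are examples only):

      pvcreate /dev/hda3          # mark the partition as a physical volume
      vgcreate vg0 /dev/hda3      # build a volume group from it
      lvcreate -L 2G -n home vg0  # carve out a 2 GB logical volume
      mke2fs /dev/vg0/home        # and put a filesystem on it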

    • They are working on LVM, and EVMS (userland) is going to work with 2.5/6 like it does for 2.4. If you need more LVM than EVMS, you are crazy. Also, what network failover is new? 2.4 supports channel bonding (read Documentation/networking/bonding.txt; a minimal bonding sketch is at the end of this comment) and also supports just about everything you could possibly imagine wrt networking via the ip tools (read the LARTC HOWTO).
      Features are the one thing Linux doesn't lack; it's got so many tools that, outside of custom applications with extreme requirements, it can basically do anything. Oh, also, it can't run freakish proprietary code from monopolists, but what can?
      In short, these guys are good, and there is so much under the hood that is really incredible (and has been there for so long...) that, outside of vendors supporting their own hardware (like they do for Win) with drivers (GPL please), there is little that needs to be added to the Linux kernel. Not that I don't applaud them for trying, nor would I stand in the way of a cool hack.
      They are even trying to put a generic crash dump capability in that allows userland to dump kernel cores to:
      Filesystem of your choice
      Serial console
      Network device of your choice
      Other server
      etc etc
      Amazing stuff. Just freaking amazing. If you haven't looked over the capabilities of the kernel lately, you owe it to yourself to do so.

      andy
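
      P.S. On the channel bonding mentioned above, here is a rough sketch of a failover setup; the interface names, address, and module options are examples only, so check bonding.txt for the real details:

      modprobe bonding mode=1 miimon=100   # mode 1 = active-backup (failover)
      ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
      ifenslave bond0 eth0 eth1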
  • Irrelevant stats (Score:2, Insightful)

    by tuxlove ( 316502 )
    None of these stats seem to cover simulated heavy multiuser/multithread activity. That's what's key as far as I'm concerned. One of the major flaws in Linux today is the scheduling of user processes and file I/O (not sure about networking I/O, but it seems okay from simple observation). There are still severe process/thread starvation problems in the 2.4 kernel which are supposed to have been addressed in 2.5, but I've never seen a really good, real-world performance test to prove it. Until those problems are solved, Linux won't be useful for realtime server work other than web service.

    In case you're wondering, no, I'm not a troll. I've done *extensive* testing in this area. So have others, which is why they've been working hard on scheduling.

"The medium is the massage." -- Crazy Nigel

Working...