Linux Software

Linux Kernel Benchmarking: 2.4 vs. 2.6-test

frooyo pastes from KernelTrap: "Cliff White recently posted some re-AIM multiuser benchmark results comparing the stable 2.4.23-pre5 kernel against the 2.6.0-test5 and 2.6.0-test5-mm4 development kernels. In his conclusion he makes reference to earlier scheduler tests posted by Mark Wong, saying, 'Short summary: we mostly rock.'"
  • by account_deleted ( 4530225 ) on Thursday September 25, 2003 @11:28AM (#7055665)
    Comment removed based on user account deletion
    • by Anonymous Coward
OT, but I'm pretty sure I've never seen "real world" and "instant messaging" in the same sentence. Except maybe with the accompanying phrase "no relation to".
    • by tomhudson ( 43916 )
AIM (now at version 7) is not an instant messenger client. It's a benchmarking tool. Click on the link in the story to see what it is/does/etc.
  • SMP (Score:5, Interesting)

    by Doesn't_Comment_Code ( 692510 ) on Thursday September 25, 2003 @11:28AM (#7055672)
    The SMP code (written by Linux developers by the way) is supposed to be kicked up a notch in the new kernel. That's what I've heard anyway. I'd love to see Linux being the best OS for multiple CPU scaling.

    That will help everyone, from the server market down to me once I save up enough for a two-processor motherboard.
    • by caveat ( 26803 ) on Thursday September 25, 2003 @11:59AM (#7055962)
      I'd love to see Linux being the best OS for multiple CPU scaling.

      You do need a scalable OS to support lots of processors, of course, but you also need hardware that scales (clustering doesn't count). Example - SGI is using Linux with NUMAflex on the Altixes [sgi.com] to cluster 64-processor system images, but that kind of hardware isn't commodity in any way, and isn't going to be anytime soon.
      Anyway, Linux doesn't scale THAT well...as of 9/2000, SGI was using IRIX for a 1024-processor single-system-image supercomputer [sgi.com]; I've heard they can go to 2048 now, but I don't have anything to back that up. Dunno about Solaris, but I imagine it's pretty scalable as well.
    • Re:SMP (Score:4, Insightful)

      by Arker ( 91948 ) on Thursday September 25, 2003 @12:06PM (#7056019) Homepage

      I've got nothing against Linux improving at SMP per se, but it seems to me something very bad is going on here. Notice that while the new kernel 'kicks ass' on SMP systems, on uniprocessor systems the 2.4 kernel is the one kicking ass. Has anyone benchmarked 2.4 against some of the pre-SMP kernels on a uniprocessor machine?

      Face it, the vast majority of users are uniprocessor, and kernel performance is more of an issue on lower-end machines. Improving performance on big multiprocessor boxes is fine by itself, but not when it harms uniprocessor performance. I'm not a kernel hacker, but I've read many people saying that this would not happen, that the SMP code would not hurt performance on a uniprocessor machine when the kernel is compiled without it, but that's obviously not turning out to be the case. Anecdotal evidence, at least, suggests that this performance degradation has actually been going on for quite some time, at least since SMP code first started being added.

      I'm not sure what all the factors here are, so naturally I'm not going to tell you the solution, but it certainly looks like a potential problem that should be discussed. Hopefully someone with more specifics than I have can chime in...

      • Re:SMP (Score:4, Informative)

        by blakestah ( 91866 ) <blakestah@gmail.com> on Thursday September 25, 2003 @01:34PM (#7056819) Homepage
        Notice that while the new kernel 'kicks ass' on SMP systems, on uniprocessor systems the 2.4 kernel is the one kicking ass. Has anyone benchmarked 2.4 against some of the pre-SMP kernels on a uniprocessor machine?

        Yeah, they missed an important test - latency for interactive processes. A lot of scheduler work went into improving this, and it makes a huge difference when you have large memory processes working hard.

        This aspect is improved across the board in 2.6, as well as the SMP issues. Sure, the uniprocessor machine may be a little slower, but response latencies in X are a lot better, and this makes more of a difference to users.
        • Re:SMP (Score:3, Interesting)

          by Arker ( 91948 )

          This aspect is improved across the board in 2.6, as well as the SMP issues. Sure, the uniprocessor machine may be a little slower, but response latencies in X are a lot better, and this makes more of a difference to users.

          I beg to differ. I couldn't care less about X; I haven't even used it in over a year. Like the vast majority of Linux boxes, mine runs in console mode only, and on a single processor. I, and the rest of the world that uses it like this, find it hard to see anything to get excited about.

          • If earlier versions of Linux are good enough for your old 386-25, why not keep using them? As Linux scales up to bigger and better machines, there is going to be some cost at the low end. It's a matter of priorities. I, for one, use Linux as my main desktop OS. I care a whole lot about the improvements in 2.6.
      • Re:SMP (Score:2, Funny)

        by pmz ( 462998 )
        I've got nothing against Linux improving at SMP per se, but it seems to me something very bad is going on here.

        I don't understand...scalability to hundreds of CPUs will provide much penis enhancement for geeks everywhere (even the ladies).
      • Re:SMP (Score:4, Insightful)

        by Vellmont ( 569020 ) on Thursday September 25, 2003 @02:20PM (#7057221) Homepage
        Well, I don't think you can conclude that the SMP changes to the kernel are what's slowing down 2.6 uniprocessor performance vs. the 2.4 kernels. There are many other changes that took place (low latency and an improved scheduler come to mind) that aren't SMP-related.

        Obviously the SMP performance has been improved, and looking at the 8x test there was a lot of potential for improvement. Another way to interpret the results would be to say that the other changes decreased performance across the board on both SMP and uniprocessor systems, and that the SMP improvements more than made up for this added cost on SMP machines, improving their raw performance.

        Hopefully the performance loss on uniprocessor machines can be reduced or eliminated. Even if it's not, I think you need to remember that raw performance isn't the be-all and end-all. 7% is pretty small in the grand scheme of things when processing speed is doubling every 18 months. Responsiveness and better scheduling that doesn't starve processes are more important than a 7% performance decrease IMO, and you don't get those from faster processors.
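        A quick sanity check on that arithmetic (my own back-of-envelope sketch, assuming the usual 18-month doubling):

            /* How many months of "doubling every 18 months" progress
               cover a 7% regression? Compile with -lm. */
            #include <math.h>
            #include <stdio.h>

            int main(void)
            {
                double months = 18.0 * log2(1.07);
                printf("~%.1f months\n", months); /* prints ~1.8 */
                return 0;
            }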
      • Well, why not support SMP by default? After all, Intel is going that direction with their uni-CPUs -- making them appear as two. Perhaps that is their permanent direction.

        If so, then shouldn't everything be written assuming it will run on two processors, since that's all we'll have in a few years?
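        For what it's worth, code doesn't have to assume either way; here's a minimal sketch using glibc's sysconf (a real interface, though the example itself is hypothetical):

            /* Ask how many CPUs are online instead of hard-coding it. */
            #include <stdio.h>
            #include <unistd.h>

            int main(void)
            {
                long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
                printf("online CPUs: %ld\n", ncpus);
                return 0;
            }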
    • No, because that's OpenBSD. Duh.
      Oh wait...
  • GREAT (Score:5, Funny)

    by proj_2501 ( 78149 ) <mkb@ele.uri.edu> on Thursday September 25, 2003 @11:31AM (#7055700) Journal
    now I need another CPU to increase performance!
    • Re:GREAT (Score:2, Funny)

      by Anonymous Coward
      In Soviet Russia, we had to increase performance in order to get another CPU ...
      • Re:GREAT (Score:2, Interesting)

        by stevezero ( 620090 )
        OK, that was funny. I usually get blasé about Soviet Russia jokes, but that one made me laugh, sir/madam/beast/cronjob.
      • Oh, just like how the No Child Left Behind Act cuts the funding of schools who need more! EXCELLENT!
  • novel idea. (Score:4, Funny)

    by justin_w_hall ( 188568 ) on Thursday September 25, 2003 @11:31AM (#7055704) Homepage
    Go figure. An OS that gets faster with each version.
    • Re:novel idea. (Score:3, Interesting)

      by stratjakt ( 596332 )
      It's only faster if you have 8 CPUs; your single-proc desktop box will be slower.

      Which just reaffirms my belief that Linux is becoming ever more firmly planted in the server world, and that desktop Linux is still just a hobby for the most part.
      • Re:novel idea. (Score:2, Insightful)

        by norton_I ( 64015 )
        2.6 is supposed to be fully preemptible, which should decrease lots of latencies, leading to better interactive performance on a desktop even if the overall throughput is lower. What this benchmark shows is that the 2.6 kernel is a slower uniprocessor server than 2.4. While that is too bad, it doesn't really say much about desktop Linux. I just installed 2.6-test5 on my (2-CPU) desktop, but haven't really had time to evaluate its performance relative to 2.4.
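        One crude way to see the latency side (a sketch of my own, not one of the article's benchmarks): time how late short sleeps wake up, with and without background load.

            /* Rough wakeup-latency probe: request 10 ms sleeps and report
               the worst overshoot. A preemptible kernel should keep the
               overshoot smaller under load. Compile with -lrt if needed. */
            #include <stdio.h>
            #include <time.h>

            int main(void)
            {
                struct timespec req = { 0, 10 * 1000 * 1000 }; /* 10 ms */
                double worst = 0.0;

                for (int i = 0; i < 200; i++) {
                    struct timespec t0, t1;
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    nanosleep(&req, NULL);
                    clock_gettime(CLOCK_MONOTONIC, &t1);
                    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                                (t1.tv_nsec - t0.tv_nsec) / 1e6;
                    if (ms - 10.0 > worst)
                        worst = ms - 10.0;
                }
                printf("worst wakeup overshoot: %.2f ms\n", worst);
                return 0;
            }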
      • Ah, but what of my dual processors with HT? That's four logical cores. As more and more commodity CPUs start to use stuff like HT, performance of OSes written for MP will increase on the desktop. Anyway, I'm willing to take a little bit of a speed hit if it brings sufficiently valuable features with it.
      • Re:novel idea. (Score:5, Insightful)

        by GooberToo ( 74388 ) on Thursday September 25, 2003 @01:18PM (#7056682)
        That statement simply is not true. Granted, you can always find some corner case where the workload is going to be slower between releases (2.4 or 2.6); however, as a rule of thumb, 2.6 should still be a huge improvement even for uniprocessor users. Best yet, many, many parameters of the kernel and scheduler are tunable, so you can always adapt the kernel to work best for your specific workload needs.

        While it's true that they are working hard to significantly improve Linux for the server room, they have by no means lost sight of the uniprocessor user. Remember, there is nothing wrong with tuning the kernel for your uniprocessor needs and specific workloads. They just can't do that when they are benchmarking, because it would skew the results and invalidate them. They are not only trying to measure how their improvements affect the overall system, but also what makes for sane initial defaults, reflective of a general-purpose, broad workload. If you understand what you are doing, there is no reason to believe you can't greatly improve things for your specific uses and workloads. It's important to keep all of this in mind when talking about these benchmarks. Furthermore, you should fully expect your favorite distro to come with tuning presets which reflect a targeted workload (file/print server, workstation, database, web server, etc.).

        Keep in mind that the benchmark you looked at represents one category out of many different types of workloads. So, for that specific workload, it may have been slower; however, that workload may not represent anything you do with your computer. Remember, other types of workloads are significantly faster. One last note: lower latency is the classic trade-off against raw throughput; it trades throughput for responsiveness. If, on a uniprocessor workstation, you only see a 7% drop in performance while latency is greatly reduced, chances are not only will you never notice the loss in performance, but you'll be praising how well the system responds to your mouse, monitor and keyboard (it feels better and makes you a happier user).

        Just some food for thought.
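        To make the "tunable" point concrete: many 2.6 knobs live under /proc/sys and can be poked from a trivial program (or a shell echo). A minimal sketch, using the real vm.swappiness tunable with an arbitrary example value:

            /* Equivalent to: echo 10 > /proc/sys/vm/swappiness */
            #include <stdio.h>

            int main(void)
            {
                FILE *f = fopen("/proc/sys/vm/swappiness", "w");
                if (!f) {
                    perror("fopen");
                    return 1;
                }
                fprintf(f, "10\n"); /* 10 is just an example value */
                return fclose(f) == 0 ? 0 : 1;
            }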

      • Re:novel idea. (Score:4, Insightful)

        by Karn ( 172441 ) on Thursday September 25, 2003 @01:46PM (#7056943)
        Wrong.

        One benchmark used for Linux kernels is hammering a system while playing an MP3 to see if they can get it to skip. Low latency is mostly a desktop feature, and the 2.6 kernel is going to have much-improved latency.

        Other portions of Linux have changed and may not initially outperform 2.4, but if you think this kernel isn't going to be a dramatic improvement over 2.4 for desktop users and servers, or that the kernel developers aren't taking the desktop into consideration, you are mistaken.
      • Re:novel idea. (Score:3, Informative)

        by be-fan ( 61476 )
        Desktop Linux kicks ass. With 2.6, interactivity on an unloaded system is close to WinXP's, and on a heavily loaded one (the steady state of my machine :) it kicks XP's ass all over the place.
    • That's Apple's specialty -- introducing new hardware (PowerPC, AltiVec) and gradually catching the OS up to it. The early PPC system (System 7.5?) had tons of emulated 68k code in it that was gradually removed with each update and replaced with native PPC code.
  • woo (Score:5, Funny)

    by grub ( 11606 ) <slashdot@grub.net> on Thursday September 25, 2003 @11:32AM (#7055711) Homepage Journal

    If you thought SCO was mad over 2.4, just wait until they make up evidence for the 2.6 kernel!
    • SCO Kernels (Score:5, Funny)

      by Schwartzboy ( 653985 ) on Thursday September 25, 2003 @12:18PM (#7056126)
      No, no, no! They don't have to "make up" a shred of evidence, you insensitive clod! Bear with me as I walk you through the intensive fact-finding process that will prove beyond a shadow of a doubt that 2.6 does, in fact, have more proprietary SCO stuff in it than any *nix ever has before! Watch as the scene unfolds...

      DARL: So, um, hey. It looks like there's this new "too-pointe-six colonel" out on the market from those Lenn-ucks people. We own all that too, right?

      SUIT: Well, sir, it's like this. Do you remember how the 2.4 kernel had all of those lines of code in them that are ours, even though they showed up in textbooks before most of our stuff existed?

      DARL: Sure, but how does that help us with this new thing?

      SUIT: Think about it. Most operating systems, according to my extensive research during years of never having looked at a computer before, contain the same code that they always did, plus a couple of lines of new comments and an extra variable or two that shows how much you're able to charge users for the new features. Just think about the Windows 95 and 98 thing. Perfect example there.

      DARL: But...my mansion only has 93 windows. Where is this heading?

      SUIT: *blinks* Errr...yeah. Well, it's all the same code, and even if those sneaky Linux commies try to pull a fast one on us and put one of those different codes in there, we can always assert our ownership of these "opened sources" files that I just printed out. I asked this guy, you know, and he said that all of these sources are what's in Linux, and since I printed it on paper and stuff, I figure it must be a textbook. Since we own all the words that show up in textbooks, and this has a lot of words, I think we've found ourselves a new angle here.

      DARL: Smithers, cry havoc and let slip the Lenn-ucks colonel lawsuit monkeys once more!


      I do so hate having to correct you people. *sigh*
  • by Visaris ( 553352 )
    Not to be a n00b, but I can't make too much sense of the benchmark the story linked to. Could anyone give a short simple little explanation of what it means? Thanks so much!
    • There has to be a happy medium between this review's tables of figures and Tom's Hardware's chrome graphs.
    • by NtroP ( 649992 ) on Thursday September 25, 2003 @11:58AM (#7055958)

      Not to be a n00b, but I can't make too much sense of the benchmark the story linked to

      You actually READ the article?!? Man! You ARE a N00b!
      • heh. A response to a "n00b" by someone who's even "newer" to /.

        • Ah, but "newer" does not necessarily mean n00ber. You see, NtroP, with his punny nickname, nonsensical but plausible sounding sig, and use of all-caps and exclamation points, has obviously studied harder to become one of the Slashdot 31337. He has teh skillz. However, we can see that Visaris completes full sentences and actually read the article. Clearly a n00b.
    • The workload simulates a multi-user system by running an increasing number of users. Each user runs through a list of tasks. We keep adding users until the load reaches a maximum. The score shows tasks per minute and peak user count. Bigger is better. http://www.osdl.org/stp
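      In rough outline, a driver like that boils down to the following sketch (my simplification, not OSDL's actual code): fork one child per simulated user, let each grind through dummy tasks for a fixed interval, and total up tasks per minute.

          /* Simplified multiuser load driver in the spirit of re-AIM. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>
          #include <unistd.h>
          #include <sys/wait.h>

          #define RUN_SECONDS 10

          static long run_tasks(void)          /* one simulated user */
          {
              long done = 0;
              time_t end = time(NULL) + RUN_SECONDS;
              while (time(NULL) < end) {
                  volatile double x = 0.0;     /* stand-in for real work */
                  for (int i = 0; i < 1000000; i++)
                      x += i * 0.5;
                  done++;
              }
              return done;
          }

          int main(int argc, char **argv)
          {
              int users = (argc > 1) ? atoi(argv[1]) : 4;
              int fds[2];

              if (pipe(fds) < 0)
                  return 1;
              for (int u = 0; u < users; u++) {
                  if (fork() == 0) {           /* child: one user */
                      long n = run_tasks();
                      write(fds[1], &n, sizeof n);
                      _exit(0);
                  }
              }

              long total = 0, n;
              for (int u = 0; u < users; u++) { /* collect results */
                  read(fds[0], &n, sizeof n);
                  total += n;
              }
              while (wait(NULL) > 0)
                  ;
              printf("%d users: %.1f tasks/minute\n",
                     users, total * 60.0 / RUN_SECONDS);
              return 0;
          }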
  • timeline? (Score:2, Insightful)

    by NumLk ( 709027 )
    Seriously, it's great and all, but when will it be ready for the masses? I.e., the holy 2.6 release? For us, loading a beta (or even alpha) kernel is something we can do in our sleep, but look at it from this perspective: all of these improvements will only really make an impact once developers can write applications specific to this environment, which requires, at a minimum, an official release.
    • Are you implying that developers are not designing for the environment until it's out of beta?

      I can only think of 2 justifiable reasons for this:
      1) Developers can't figure out how to install a beta or alpha kernel.
      2) Developers don't trust it enough to believe that code written for a beta will work on an official release.
  • Rock? (Score:5, Insightful)

    by TheLink ( 130905 ) on Thursday September 25, 2003 @11:33AM (#7055728) Journal
    It's only significantly faster if you have 8 processors.

    Whereas it is 7% slower if you have one processor.

    I suppose they'll have a uniprocessor version which runs faster? Lots of people have uniprocessor PCs.

    Hyperthreading doesn't really count.
    • Re:Rock? (Score:2, Interesting)

      by kasparov ( 105041 ) *
      Where did you see that it is 7% slower with one proc? What is 7% slower than what with one processor? Not trying to disagree with you or anything; I just didn't notice anything in the article and was hoping for a link.
    • Hyperthreading doesn't really count? I didn't see the benchmark that you derived that little gem from. Any hard data?
    • I suppose they'll have a uniprocessor version which runs faster?

      I'm trying not to read too much into this benchmark. The new kernel preemption in 2.6 will make Linux "feel" faster even though it may be slower given a long-running continuous task to chew on.

      To counterbalance that, I'm assuming that the focus right now is on stability rather than optimization. I'd hope that any performance gap with the 2.4 series would be closed shortly after the 2.6.0 release. What was the situation like between 2.2 and 2.4?
    • Half rock (Score:3, Insightful)

      by roystgnr ( 4015 )
      The "We mostly rock" statement was referring to a different benchmark (the one in the story's second link), in which the scheduler performance on single processor machines more than tripled (and performance on 8-way machines went up ~50%) between 2.5.30 and 2.6.0-test5. The first link's benchmark isn't very impressive, like you point out, but it's also not the same program.
      • Re:Half rock (Score:5, Insightful)

        by GooberToo ( 74388 ) on Thursday September 25, 2003 @12:56PM (#7056442)
        You are correct! The scheduler reacts differently to different workloads. This is why the kernel developers try hard to test their changes under a number of different workloads. On top of that, they try to target benchmarks which behave like real-world workloads rather than contrived and unrealistic ones. That's not to say that they don't test those too; however, they clearly direct more attention at real-world workloads and the corresponding result sets.

        The 2.6.x series kernels will be a big step up for just about everyone that seriously uses their computer. Significant reliability improvements, as well as faster throughput on disks, much, much higher scalability for SMP systems (hyperthreading and NUMA, and even highly loaded uni-systems), and much lower latencies, all at the same time. Granted, there are still some tests which may not be a win-win all the way around; however, almost everything in general is an improvement, with hardly any detractors.

        So, saying, "we mostly rock", really is a true statement!

    • Linus' tactic of not doing much to improve multiprocessing support hurt the high end, on the dubious claim that improving it had to hurt the low end.

      I've thought for a long time that you want different schedulers available for different scales of multiprocessing. Heck, even Windows 2000 Pro has different "drivers" for uniprocessor and dual-processor machines.
      • Linux more or less does this too. When you compile out SMP support, it effectively changes the resulting code so that it's no longer the same beast. In doing so, it is supposed to save a lot of overhead. I didn't notice whether the uniprocessor benchmark results came from kernels compiled with SMP support or not.

        Anyone notice?
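        Roughly how the compile-out works (a simplified sketch, nothing like the real kernel headers): with CONFIG_SMP off, the locking primitives expand to nothing, so the uniprocessor build pays no locking cost at all.

            /* Toy illustration of UP builds dropping SMP locking. */
            #include <stdio.h>

            /* #define CONFIG_SMP */            /* set by the kernel config */

            #ifdef CONFIG_SMP
            #define spin_lock(l)   real_spin_lock(l)   /* a real locked op */
            #define spin_unlock(l) real_spin_unlock(l)
            #else
            #define spin_lock(l)   do { (void)(l); } while (0) /* no-op */
            #define spin_unlock(l) do { (void)(l); } while (0)
            #endif

            int main(void)
            {
                int lock = 0;            /* stand-in for a spinlock_t */
                spin_lock(&lock);        /* compiles to nothing on UP */
                puts("critical section");
                spin_unlock(&lock);
                return 0;
            }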

    • My understanding is that they've designed 2.6 with specific interest and attention spent on speeding up the kernel for desktop users. They've decreased latency and such, so being 7% slower on a server-related task will likely not translate over to desktop use.
  • User Experience (Score:5, Informative)

    by the_crowbar ( 149535 ) on Thursday September 25, 2003 @11:34AM (#7055731)
    I run 2.4.22 at work and 2.6.0-testX at home. The 2.6.0-test (vanilla) series feels much more responsive, especially in X. I have not done any real benchmarks of my systems, but after working with 2.4 all day, 2.6 seems to fly.

    Just my observation
    -the_crowbar
    • But 2.4.20 with Con Kolivas's patch set (with a 1000 Hz tick rate) is much more responsive than 2.5.75.

      There's very little window shaking and mouse slowdown. Still, it's not as responsive as (*gasp*) Windows, but it's getting there.

      Hopefully, 2.6.0-testX will get better.
    • Agree. I'm running 2.6.0-test5 on a dual Athlon 2400+ system and it flies compared to a 2.4 kernel. I started running development kernels with 2.5.22 and have been very pleasantly surprised by how stable the development kernels have been.
    • So - after working for 8 hours a day in your day job, you come home to a relaxing evening of free time where it seems to just FLY by. I'm not entirely sure this experience can be directly compared, perhaps if you could try 2.6 during the (drudging?) hours of your workday? :)
  • Better comparison (Score:3, Informative)

    by ajiva ( 156759 ) on Thursday September 25, 2003 @11:34AM (#7055739)
    A better comparison would have been against Solaris x86. Solaris scales very linearly with every added processor.
    • Jonathan Schwartz? Is that you? ;)
    • Are you sure?!? Solaris/SPARC does, but I thought that Solaris x86 was a totally different beast.

      When we did some speed tests of Solaris x86 vs. the early 2.4.x kernels about 2 years ago (a while ago, granted), Solaris x86 was a DOG. They may be trying to clean it up now, but even then they admitted it was basically an afterthought.

      I've seen a TON of different hardware/OS configs, but I know of only one shop that used Solaris x86. They used it in dual-CPU machines only.

      Maybe I'm wrong, but that was my impression.

  • 2.4 vs 2.6 (Score:3, Interesting)

    by Doesn't_Comment_Code ( 692510 ) on Thursday September 25, 2003 @11:40AM (#7055786)
    I assume that when they say the 2.4 kernel outperforms the 2.6 on a uniprocessor computer, but not on a multiprocessor computer, they have recompiled the kernel for each hardware environment.

    This struck me as strange, because when the kernel is compiled without SMP support, all that code is left out. So it doesn't seem like 2.4 should outperform 2.6 on one CPU.

    Does anyone know why this might be?
    • heh, it looks like 2.6 has great new features for all types of machines, but poorer overall performance for the normal desktop PC. The price of being scalable?
      • Perhaps the low-latency fixes add overhead. 2.6 might feel faster because of this but might actually run slower.
        • Re:2.4 vs 2.6 (Score:3, Informative)

          by tmasssey ( 546878 )
          By definition, with the speed of context switches and other overhead the same, a system with "low-latency" switching (switching faster between interactive jobs) will be slower: it switches more often, and therefore wastes more cycles on switching overhead.

          Of course, there is the possibility of trimming cycles from the process of switching contexts. Linux, though, already had that pretty low. That's why Linus is so resistant to shared-memory, shared-context threads: the cost of processes is so low that threads gain relatively little.
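          The arithmetic is simple enough to sketch (illustrative numbers, not measurements; the 5 microsecond figure is an assumption):

              /* Context-switch overhead vs. switch rate. */
              #include <stdio.h>

              int main(void)
              {
                  const double switch_us = 5.0;         /* assumed cost */
                  const int rates_hz[] = { 100, 1000 }; /* old vs. new tick */

                  for (int i = 0; i < 2; i++)
                      printf("%4d Hz: %.2f%% of CPU spent switching\n",
                             rates_hz[i],
                             rates_hz[i] * switch_us / 1e6 * 100.0);
                  return 0;  /* prints 0.05% and 0.50% */
              }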

        • Easy enough to check: disable the preemption code and run a new benchmark.

          My own experience with test5 shows that it has issues with memory management/swap; the machine was completely unusable when all the memory was used up.
      • The tuning being done for 2.6 is primarily aimed at reducing the latency of interactive tasks in the face of intensive background tasks. That is, your normal desktop PC will be a better desktop at the cost of being a worse workstation (your windows move smoothly, your keyboard and mouse respond instantly, your music doesn't skip, but your compiler may take longer).

        This is largely due to people finding ways to measure the sorts of performance you actually care about for desktops. This is at the expense of other sorts of performance.
  • Thanks SCO. (Score:5, Funny)

    by EDA Wizard ( 2225 ) on Thursday September 25, 2003 @11:40AM (#7055792)
    Looks like that 1970s UNIX code really increases performance for SMP P-IIIs.

    Now we can appreciate the foresight that our Unix fathers had when developing Xeon SMP code in the late 1970s.
  • I'm a bit leery. (Score:5, Interesting)

    by devphaeton ( 695736 ) on Thursday September 25, 2003 @11:41AM (#7055797)
    "the general trend in the metric indicates everything has been improving, so I think we rock."

    For some reason, the scheduling seems to get choppier (from what I've noticed) with every iteration of the 2.4.x kernel. Currently I'm on 2.4.22, and while I don't have any specific tests, numbers or statistics, I'm noticing some issues.

    The easiest way to reproduce it is to have the machine do something CPU-intensive, such as mkisofs, cdrecord, bzip2 on some huge file, cp on anything large, installing (via aptitude), or even the "Reading Package Lists..." stage of apt-get update.

    Oftentimes, the machine will become unresponsive for about 3 seconds at a time, then jolt back up to speed, then pause for 3, on and on. Even after the command line returns the prompt, or gkrellm's CPU and proc krells show that everything is done, I will still see lag in responses from the keyboard, mouse, or whatnot, off and on, for about 10-15 seconds.

    I've gone over my kernel config and tweaked a few things here and there, but with no change. I can go back down to a 2.4.18 kernel and it's not as bad. Going down to a 2.2.x kernel completely solves the problem, but of course brings its own issues with some of my newer packages (such as gcc) and a few pieces of newer hardware.

    A friend of mine and I have gone over this (on my machine and his), and he experiences a lot of the same issues I do.

    Mind you, I'm not complaining. I'm very grateful to all the developers of the world that I even *have* a Linux system to run. But this is something that makes me more excited about the 2.6.x kernel series. I haven't tried one out yet, but from what I've heard and read, it should be awesome. :o)
    • Re:I'm a bit leery. (Score:5, Informative)

      by blonde rser ( 253047 ) on Thursday September 25, 2003 @12:08PM (#7056029) Homepage
      Since it seems you're running Debian, and all those CPU-intensive operations are also HD-intensive operations, have you checked hdparm -d /dev/hda? I know it's simple, but it's so simple that I forgot to check it for about a month. Debian appears to have DMA off by default.
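      (For the curious: hdparm reads that flag with the HDIO_GET_DMA ioctl from linux/hdreg.h. A minimal sketch of the same check, with error handling pared down:)

          /* Print the IDE using_dma flag, as "hdparm -d /dev/hda" does. */
          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/hdreg.h>

          int main(void)
          {
              long dma = 0;
              int fd = open("/dev/hda", O_RDONLY | O_NONBLOCK);

              if (fd < 0) {
                  perror("open");
                  return 1;
              }
              if (ioctl(fd, HDIO_GET_DMA, &dma) < 0) {
                  perror("HDIO_GET_DMA");
                  close(fd);
                  return 1;
              }
              printf("using_dma = %ld (1 = on)\n", dma);
              close(fd);
              return 0;
          }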
      • Since it seems you're running Debian, and all those CPU-intensive operations are also HD-intensive operations, have you checked hdparm -d /dev/hda? I know it's simple, but it's so simple that I forgot to check it for about a month. Debian appears to have DMA off by default.

        Yep... the base install is Debian Testing, and everything else was pulled out of "Unstable". I have enabled DMA, and it actually did make what appeared to be a slight difference.

        However, other things that don't really use the hdd much (i.e. ...)
    • by Dr. Zowie ( 109983 ) * <slashdot@@@deforest...org> on Thursday September 25, 2003 @12:08PM (#7056039)
      Devphaeton, you hit the nail on the head about 2.6.0. Its main advantage over 2.4.x (for this luser, anyway) is the smoother multitasking, even on a uniprocessor system. I'm running a tweaked 2.6.0-test5 on my laptop, and jobs that would make 2.4.x unusable are barely detectable (from the standpoint of moving the mouse around, typing up Slashdot articles, and the like).

      Of course, the ACPI support and swsusp don't hurt either :-)
      • I'm anxious to move to 2.6, but I have to wait for Netlock (VPN) and Netraverse (Win98 -> Webex) to release 2.6.0 patches for their wares. I've pleaded for beta, alpha, whatever patches, and consistently get the "it's not production, so take off" line. I'll be all over those two like stink on a dog the day 2.6.0 goes stable.
    • Um, there have been a number of improvements to the scheduler which specifically address this sort of issue. Perfect it won't be, but it will be a lot better than the 2.4 series kernel.

      What I would really like is for Andrew Morton's MM patches to be adopted. They bring in proper asynchronous I/O support, and that will eventually dramatically help response times (though only as utilities get rewritten to take advantage of it). However, the improvements to X responsiveness seem to be here now.

  • Architecture (Score:3, Interesting)

    by HogGeek ( 456673 ) on Thursday September 25, 2003 @11:42AM (#7055815)
    I didn't see anything in the articles to support this, but I'm assuming this is based on the x86 architecture. Has 2.6 been ported to other architectures? And if so, have these AIM tests been run?
  • After all these years since I first tried to dial in to a Microsoft network, I still can't do it without first compiling my own kernel and pppd! I'm just a bit annoyed as I'm sitting here watching my Debian Unstable kernel recompile, all for one change: adding CONFIG_PPP_MPPE=m. This is a frustrating waste of time! Will this be built into the 2.6 kernels, or do I have to hope that somebody comes up with a better implementation (in Debian non-free, perhaps)?
    • After all these years since I first tried to dial in to a Microsoft network, I still can't do it without first compiling my own kernel and pppd! I'm just a bit annoyed as I'm sitting here watching my Debian Unstable kernel recompile, all for one change: adding CONFIG_PPP_MPPE=m. This is a frustrating waste of time! Will this be built into the 2.6 kernels, or do I have to hope that somebody comes up with a better implementation (in Debian non-free, perhaps)?

      how many users would benefit from having that as a default?
      • Errr, how many users use half the things that get built by default? That's why they're built as modules.

        Strewth, I am getting on with life, which is why I get annoyed having to stop and carry out pointless tasks like this. I want to update in one go without having to constantly go through this rigmarole. I don't want to have to do this on every Linux distribution I install or encounter. I don't want to have to do this whenever I create a clean install to test something quickly. Maybe you don't have m...
  • How 'bout benchmarks with new versions of X?

    I keep hoping for faster and smaller....

    mark "silly me"
  • by AntiGenX ( 589768 ) on Thursday September 25, 2003 @12:19PM (#7056131)
    If you look at the difference between the outcomes for uniprocessor vs. dual, there doesn't seem to be very good scaling.

    linux-2.6.0-test5 - 992.06 - Uni
    linux-2.6.0-test5 - 1017.43 - Dual
    linux-2.6.0-test5 - 5406.68 - Quad

    Does this mean that you only gain 3.49% when adding a 2nd processor? Obviously I don't expect things to scale linearly, but 3%!? Am I missing something here? And then 81.65% for quad? I'm not trolling; I'm looking for someone to explain what I'm missing.

    • by rakarnik ( 180132 ) on Thursday September 25, 2003 @01:26PM (#7056754) Homepage Journal

      Yes, the number for dual is not 1017, but more like 1545.

      Here are the actual numbers for 2.6.0-test5 and the compute workload:
      1 - 992.06 - 100%
      2 - 1545.03 - 155%
      4 - 5175.28 - 521%

      As for why the 4-processor case is actually 5 times better than the single-CPU case, I don't know enough about the benchmarks to comment.
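      Redoing the ratios mechanically from the numbers above:

          /* Scaling ratios for the 2.6.0-test5 compute workload. */
          #include <stdio.h>

          int main(void)
          {
              const double score[] = { 992.06, 1545.03, 5175.28 };
              const int cpus[] = { 1, 2, 4 };

              for (int i = 0; i < 3; i++)
                  printf("%d CPU(s): %8.2f (%.1f%% of the 1-CPU score)\n",
                         cpus[i], score[i], 100.0 * score[i] / score[0]);
              return 0;
          }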

    • Not all workloads have to scale linearly. It also depends on how the benchmark was configured. Remember, there are many factors which affect scalability, not to mention many subsystems. I don't recall off the top of my head exactly what the benchmark was trying to measure. For example, it could have been trying to measure scalability under high memory contention, or any number of odd cases. In other words, unless the test was specifically trying to measure CPU scalability, you shouldn't be attempting to draw conclusions about it from these results.
  • by skamp ( 559446 ) on Thursday September 25, 2003 @12:47PM (#7056365)
    I've tried the new kernel, and I got more responsiveness issues than improvements. But besides that (I might very well have misconfigured something), I'd like to point out that the kernel itself isn't all that matters: the new drivers that accompany it are just as important. I noticed a significant improvement in X's launch time, as well as a whopping 250 FPS with glxgears, compared to the 150 FPS I got with my 2.4.22 setup. This is probably due to major improvements in the drivers for my i830M chipset.
  • Scaling, et al (Score:2, Interesting)

    by Anonymous Coward
    First, we need the common elements of bproc/Mosix in the kernel. The specialized stuff for each approach may need to be kept out, but some level of generic process-migration support is important.

    Second, uniprocessor operation looks to be slower. Ugh. I don't like that. That suggests to me that there are segments of code which are optimized for multiprocessor use - which is great - but either there aren't uniprocessor versions, or the uniprocessor versions are highly non-optimal.

    Third, scaling needs to be improved...
