
Linux Kernel Benchmarks, 2.6.24-2.6.29

Posted by timothy
from the impressive-span dept.
Ashmash writes "Phoronix has posted benchmarks of the Linux kernel from versions 2.6.24 to 2.6.29. They ran a number of desktop benchmarks from the Phoronix Test Suite on each of the six kernels on an Ubuntu host with an Intel Core 2 processor. The points they make about the new Linux 2.6.29 kernel are: (1) there's a regression in 7-Zip compression; (2) OpenSSL performance has improved significantly; (3) a regression that drastically hurt SQLite performance has been fixed; and (4) OpenMP GraphicsMagick performance is phenomenally better with this new kernel. In all of their other tests, kernel performance was roughly the same."
  • by Anonymous Coward

    I find it difficult to believe that the 2x gain in OpenSSL performance for 4K RSA private key operations is solely due to this new kernel. Such operations are, at the core, just CPU-intensive modular exponentiations. Unless the kernel has become significantly better at making use of several cores (or processors) to parallelize such operations, I don't see how that can happen.
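    If you want to check this yourself, OpenSSL ships its own benchmark, so rerunning it under each kernel (pinned to one CPU to rule out scheduler-migration effects) would isolate the kernel's contribution. A sketch, assuming a stock openssl binary and util-linux's taskset:

    ```shell
    # Rerun the RSA benchmark on each kernel and compare signs/sec.
    openssl speed rsa4096
    # Pin to a single CPU to rule out multi-core scheduling effects:
    taskset -c 0 openssl speed rsa4096
    ```

    If the pinned and unpinned numbers differ noticeably, core migration (not raw arithmetic) is the likely variable.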

    • by TheRaven64 (641858) on Wednesday March 25, 2009 @07:37AM (#27327447) Journal

      There are lots of variables that may affect this. The new kernel may be preempting the CPU-intensive process less frequently, reducing TLB and cache churn, and context switching overhead. I don't think RSA should be using the FPU, but if it is (even for one operation in the 10ms quantum) then switching to lazy FPU context switching would give a performance increase (or, to put it another way, not doing lazy FPU context switching - either by accident or design - would give a performance penalty).

      I didn't read TFA (obviously), but on an SMP system, particularly an AMD machine, there are a few other issues that may arise. Without good processor affinity, the OpenSSL process may be being swapped between two cores, increasing cache misses (which can easily slow down a process a lot). The memory allocation routines may have been allocating memory from the memory controller attached to one processor while running the code on the other, increasing memory latency (and, therefore, cache miss cost).

      In short, there's nothing a kernel can do to speed up CPU-bound operations, but there is a lot it can do to slow them down, and stopping doing things that slow down operations looks a lot like doing things that speed them up.
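      The point that the kernel mostly "speeds up" CPU-bound code by getting out of its way is observable from userspace. A minimal sketch, assuming Linux's getrusage(2) context-switch counters; the loop bound is arbitrary:

      ```c
      /* Sketch: count how often the kernel preempts a pure CPU-bound loop.
         Fewer involuntary context switches means less TLB/cache churn. */
      #include <assert.h>
      #include <stdio.h>
      #include <sys/resource.h>

      static volatile unsigned long sink;

      /* Run a busy loop and return how many involuntary context
         switches the kernel imposed on it. */
      long busy_loop_nivcsw(void)
      {
          struct rusage before, after;

          getrusage(RUSAGE_SELF, &before);
          for (unsigned long i = 0; i < 100000000UL; i++)
              sink += i;                  /* no syscalls, no FPU use */
          getrusage(RUSAGE_SELF, &after);
          return after.ru_nivcsw - before.ru_nivcsw;
      }

      int main(void)
      {
          long n = busy_loop_nivcsw();
          assert(n >= 0);                 /* the counter only grows */
          printf("involuntary context switches: %ld\n", n);
          return 0;
      }
      ```

      Comparing this number across kernels (at the same load) would show whether preemption behaviour changed between releases.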

  • by multi io (640409) on Tuesday March 24, 2009 @07:13PM (#27321311)
    Sure, it may have gone from working perfectly in 2.6.21 to not producing a beep in 2.6.28, but look how fast it has become! Priorities! :-P
  • Why dang it? (Score:5, Interesting)

    by MBCook (132727) on Tuesday March 24, 2009 @07:18PM (#27321397) Homepage

    Neat. They benchmarked a bunch of stuff and some real changes obviously took place. I can't help but be comforted by their conclusion (paraphrased): "Stuff changed."

    How about telling me why they changed.

    • Why did 2.6.29 double its speed doing SSL signings?
    • Why did all the graphics tests speed up some?
    • Why did SQLite performance bomb for 3 releases?

    • What was the deal with 7-zip performance changing so much? What is it stressing that other tests aren't that cause it to vary?

    There are reasons for these things. You could test and find them out. You could read the mailing lists and see if someone else posted explanations (others must have noticed the SQLite thing).

    Heck, look at this list of new features and make guesses. I'd prefer "the newly added HyperScheduler v3.732 is probably the source of this" than the article's "things got faster, neat."

    That's why I love LWN [] and the kernel page so much. They post why things changed, or at least reasonable theories.

    • Re:Why dang it? (Score:5, Insightful)

      by Sancho (17056) * on Tuesday March 24, 2009 @07:35PM (#27321723) Homepage

      I'd like to know why, too. Drastic changes in performance may mean that faster ways to do a thing were discovered. It may also mean that codepaths are being skipped that are essential to things functioning correctly. Remember the Debian OpenSSL bug?

      That's why I'd like to know why SSL signing is so much faster under the new kernel. Seeing a 2x improvement makes me wonder if something's been screwed up that could compromise my certs.

      • Re:Why dang it? (Score:4, Insightful)

        by Anonymous Coward on Tuesday March 24, 2009 @10:00PM (#27323667)

        Hey guys. Michael (Larabel, who owns and runs Phoronix) isn't omnipotent, and he does a lot to keep his site running, OSes and tasks benchmarked, and news up to date, on top of extra projects like the Phoronix Test Suite, which he has brought together almost exclusively by himself. Do yourselves a favour and do some research, and maybe even post your findings to Phoronix for Michael to include. Assist him and, subsequently, the rest of us, to reduce the number of open questions.

    • by mtippett (110279)

      The Phoronix benchmarking isn't intended to provide the answers as to why; it is intended to highlight the stuff that has happened.

      If performance management is going on within the kernel community, then this shouldn't come as a shock. The whole purpose of independent testing is that you see something that looks out of place, investigate, and resolve it. A perfect example is the Phoronix article that showed SuSE was trailing.

      • What's so special about that example you gave?

        openSuSE's disk I/O was slower because they enabled an option that the others didn't. Not enabling that option "runs the risk of severe filesystem corruption during a crash". Looks like they changed it to be like the other distros so they wouldn't look so bad during the benchmark.

        That's nice. Compromise stability for performance. This is the type of stupid crap that makes people wonder... Gee, why is such and such so much faster?

        The other issue was already reported.
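        For context, the openSUSE option being argued about was, as far as I recall the coverage, the ext3 write-barrier mount setting, and the tradeoff can be stated directly (the remount targets are placeholders):

        ```shell
        # The stability-vs-speed tradeoff as ext3 mount options
        # (assuming the option in question is the write-barrier setting):
        mount -o remount,barrier=1 /   # safer: flush the drive cache at journal commits
        mount -o remount,barrier=0 /   # faster: skip the flushes; risk corruption on a crash
        ```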

        • by oasisbob (460665)

          openSuSE's disk I/O was slower because they enabled an option that the others didn't. Not enabling that option "runs the risk of severe filesystem corruption during a crash". Looks like they changed it to be like the other distros so they wouldn't look so bad during the benchmark.

          That's nice. Compromise stability for performance. This is the type of stupid crap that makes people wonder... Gee, why is such and such so much faster?

          It's not quite that simple.

          See the openSUSE bug report:

          Since I wrote this, I came acro

          • If you bought a new oven and it had an option to reduce gas consumption but in "relatively rare" situations it could explode while you were cooking, what setting would you leave it on?

            I've seen other things like this over the years, and that's why I am now deploying applications on Solaris. Like all software it's not completely bug free, but I haven't had any problems, and their philosophy seems to be to not intentionally make decisions that could blow people up.

            • by mtippett (110279)

              Judging by your posts and your handle, you work in or around servers - a lot.

              You would probably be aware that security, stability, and all such things are a set of tradeoffs of risks and benefits/costs.

              You can make a system 100% secure, but it may not be useable. You can make a system five 9's stable, but you have to pay for it. You make the assessment of the risk (in this case data corruption), against the benefit/costs (double the speed in some cases).

              SuSE seemed to have made the assessment of risk with

                Or, using your analogy: the tests show that the oven that may blow up saves 50% on the energy bill, and therefore the net benefit is supposedly on the side of the oven that may potentially blow up!

                How so? The average yearly cost of an oven is $42. You're telling me that you'd accept the risk of injury, possibly death, to save $21 a year?

                From my experience, most people don't have a cage, or even a rack full of servers that can sustain the loss of a single server going down.

                Most don't even run at more than 40% utilization and the performance is not that important.

                What is important is that their system is reliable and that they don't have to waste time rebuilding it.

                The server and the operating system sh

  • by gandhi_2 (1108023) on Tuesday March 24, 2009 @07:29PM (#27321633) Homepage
    In almost all benchmarks, 2.6.29 did the same or a tiny bit worse than the others. Then, in the GraphicsMagick operations, it was sometimes 2x faster? What memory/operation combination caused this?
    • Re: (Score:3, Interesting)

      by blitzkrieg3 (995849)
      If you're really so curious you can run oprofile and find out yourself.

      Note: I'm not defending the Phoronix guys. As a previous poster pointed out, they are inherently bad at explaining why things are slower, and sometimes they are flat out wrong.
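      A minimal oprofile session (using the opcontrol interface current in this era) might look like the following; the vmlinux path and the workload are placeholders to adjust for your system:

      ```shell
      # Sketch of a system-wide oprofile run over a regressed workload.
      sudo opcontrol --init
      sudo opcontrol --vmlinux=/boot/vmlinux-$(uname -r) --start
      ./run-the-regressed-benchmark        # e.g. the SQLite test from the suite
      sudo opcontrol --stop
      opreport --symbols | head -20        # hottest kernel/user symbols
      sudo opcontrol --deinit
      ```

      Running this on two kernels and diffing the top symbols is usually enough to point at the subsystem responsible.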
      • Re: (Score:3, Insightful)

        by RAMMS+EIN (578166)

        ``Note: I'm not defending the Phoronix guys. As a previous poster pointed out, they are inherently bad at explaining why things are slower, and sometimes they are flat out wrong.''

        In that case, the best they can do is to stop talking about it and just stick to what they know. Knowing just what is faster and what is slower is useful by itself. It can be used as a starting point for investigating what exactly caused the speed-ups and slowdowns. If the Phoronix folks can't or don't want to do this investigat

  • by Blice (1208832) on Tuesday March 24, 2009 @10:47PM (#27324085)
    A lot of new code (and old code reworked) was added to try to speed up the boot process, that much I know for sure. I saw some of the work Arjan did in the fastboot tree:

    fastboot: Asynchronous function calls to speed up kernel boot
    fastboot: make scsi probes asynchronous
    fastboot: make the libata port scan asynchronous
    fastboot: make ACPI bus drivers probe asynchronous
    fastboot: Make libata initialization even more async

    I don't know for sure that all of this made it upstream for this release, but I know some of it did. I think you have to pass the "fastboot" kernel command-line option for it, however. So check your kernel configs and update your GRUBs!

    Or LILOs, if you're weird...

    Oh, one more thing: I think the introduction of asynchronous probing and various other things is going to start a whole new wave of fastboot tricks. For example, before the kernel tries mounting the root file system and continuing on, it waits for device probing to finish. A comment above that code states: "Waiting for device probing to finish... This is known to cause long delays in boots, for example this can take 5 seconds for a laptop's touchpad to initialize". The comment was written by Arjan, who obviously intends to speed things up. So I think what might happen is that instead of waiting for EVERYTHING to finish probing (even if it is async), it'll just wait for the filesystem to become available (perhaps try after IDE probes, then after SCSI probes, then after USB, and so on).

    I also remember a patch that didn't go upstream (I don't think so, anyway) that added a way to initialize things later on, after boot was done. You changed initialize() (or whatever the function name was) to initialize_later(), and then after booting, whenever you wanted, you ran a command and it initialized everything you had marked with initialize_later(). So you could defer the webcam initialization, or whatever else you know you don't need right when you boot.

    Well, where I'm going with this is that I would like to see them incorporate more of that stuff into the kernel. More boot hacks, more power saving, more efficiency. These things are only going to improve.
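    The async pattern described above (schedule the probes in parallel, then synchronize once before the first step that needs them) can be sketched in userspace with plain pthreads. The device names and delay here are invented for illustration; in the kernel the real pair is async_schedule()/async_synchronize_full():

    ```c
    /* Userspace analogy of the 2.6.29 fastboot pattern: kick off slow
       device probes concurrently, wait for all of them, and only then
       do the step that depends on them (mounting the root filesystem). */
    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int probed;                       /* probes finished so far */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *probe(void *name)
    {
        usleep(100000);                      /* pretend the hardware is slow */
        pthread_mutex_lock(&lock);
        probed++;
        pthread_mutex_unlock(&lock);
        printf("probed %s\n", (const char *)name);
        return NULL;
    }

    /* Probe all "devices" concurrently, wait, return the total count. */
    int boot_with_async_probes(void)
    {
        const char *devs[] = { "libata port", "scsi bus", "acpi bus" };
        pthread_t t[3];

        for (int i = 0; i < 3; i++)          /* like async_schedule() */
            pthread_create(&t[i], NULL, probe, (void *)devs[i]);
        for (int i = 0; i < 3; i++)          /* like async_synchronize_full() */
            pthread_join(t[i], NULL);
        return probed;
    }

    int main(void)
    {
        assert(boot_with_async_probes() == 3);
        puts("mounting root filesystem");    /* safe: every probe is done */
        return 0;
    }
    ```

    The total wait is one probe's latency instead of the sum of all three, which is exactly the win the fastboot commits were after.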
    • Re: (Score:3, Insightful)

      by MichaelSmith (789609)
      Oh goody. A million bizarre race conditions.
      • Re: (Score:3, Insightful)

        by Blice (1208832)
        They did it pretty well, actually. They have a function that waits for certain things to sync up before continuing at certain places.

        And they *do* test the shit out of kernels before releasing, you know..
  • It was only after reading the comments at Phoronix that I noticed the benchmarks used 64-bit (x86-64) kernels, not x86 as I had initially assumed. Maybe the use of kernel and compiler code paths that get less testing than x86 is related to the odd performance quirks Phoronix found?
