Linux Software

Improving Linux Kernel Performance 97

developerWorks writes "The first step in improving Linux performance is quantifying it, but how exactly do you quantify performance for Linux or for comparable systems? In this article, members of the IBM Linux Technology Center share their expertise as they describe how they ran several benchmark tests on the Linux 2.4 and 2.5 kernels late last year. The benchmarks provide coverage for a diverse set of workloads, including Web serving, database, and file serving. In addition, we show the various components of the kernel (disk I/O subsystem, for example) that are stressed by each benchmark."
  • oh wait, that's not ported to *nix yet....
  • by dWhisper ( 318846 ) on Saturday January 18, 2003 @04:55AM (#5107014) Homepage Journal
    I'm just curious what they are quantifying performance against. Everything here seemed to be strictly on the network side of things. Are they trying to improve the actual kernel processing of the individual threads for the network applications (file serving, DB, and web serving), or are they just measuring the efficiency of processing data packets for those services?

    It sounds interesting, but it looks like the tuning is done specifically on the IBM platform, which makes me wonder. Linux already blows any MS product away for these applications, so I'm curious what they are comparing the results to. Did they just take an arbitrary point (processor load) for specific applications, or are they creating a specialized measurement (like SysMarks in Windows) that is only valid within their test suite?

    Anyway, it should be interesting to see where it ends up, eventually.
    • I think it is reasonable to do benchmarks against likely real-world applications. It seemed clear to me that they understand the benchmark may not represent a load anyone will actually encounter, but that is outweighed by the ability of someone else to come along and use the same benchmarks.

      Some scientific/mathematical benchmarks would also be good to see.

    • Linux already blows any MS product away for these applications, so I'm curious what they are comparing the results to.

      You obviously need to look more closely at how Linux scales on 8P machines and larger before making such statements.

      Go to http://www.tpc.org and look at the results for Linux and Windows on 16P systems and larger; Linux is non-existent, for a reason.
      • by Anonymous Coward
        I looked at www.tpc.org but could not see anything suggesting otherwise: Linux still has better performance than Windows. Even though only a very small selection of Linux setups was tested, those that were tested had excellent performance. The only place Windows did better was price/performance on a low-end server, meaning that if you do not need too much performance, the Windows solution might be the right shot (as of tpc.org, anyway).

        Seems like the post was probably more a troll than any important issue; since the site had 90% of its tests on Windows servers and 2% on Linux, it cannot be taken too seriously.

        Please correct me with real tech facts, not just some marketing BS about what Windows might be, but is not.
      • After checking out their list, there are only two test machines running strictly Linux, at least among the non-clustered setups. Beyond that, they are all Win2k Datacenter, .NET Server, IBM AIX, and other Unix. The ones that do run Linux are running Red Hat Advanced Server, and it does not specify whether they are optimized.

        Beyond that, they are not using a unified standard as their monitoring system. All of the Windows machines use COM+, and the non-Windows machines use a variety.

        They also say that most of the best price/performance machines are running Windows 2000 Server or .NET Server (betas?). Most Linux admins would argue with this, especially given the news article on /. last week that said Linux is cheaper to run. I wonder how accurate their measures are, given the monitoring tools.
      • by virtual_mps ( 62997 ) on Saturday January 18, 2003 @07:51AM (#5107220)

        Go to http://www.tpc.org and look at the results for Linux and Windows on 16P systems and larger; Linux is non-existent, for a reason

        That reason would be the cost of the tests, and the fact that most Linux hackers don't have pockets as deep as billg's.
      • by nicodaemos ( 454358 ) on Saturday January 18, 2003 @08:24AM (#5107268) Homepage Journal
        Hmmm... tpc.org is an interesting organization. It is a non-profit funded by memberships from the hardware/software companies whose products it benchmarks.

        According to their website, "Full Members of the TPC participate in all aspects of the TPC's work, including development of benchmark standards and setting strategic direction. Full Membership costs $15,000 per calendar year."

        Wow, a large percentage of the benchmarks use MS operating systems. Oh look, full members get to set benchmark standards. Mmmm, the only pure OS company that is a full member [tpc.org] is Microsoft. I wonder what kind of conclusion can be drawn.......
        • So, you can:

          do it yourself, or

          trust them, potential conflicts of interest and all.

          This is the usual story when these "mine's better" discussions arise. For benchmarks, who has a reputation for both knowing what they are about and remaining objective?

        • by Anonymous Coward
          Go get a fucking clue. The reason MS has some of the best numbers is, guess what, that their systems are among the fastest at running TPC-C!

          TPC-C is not a perfect benchmark (in fact, all cluster or "cluster in a box" numbers should be disregarded, or completely separated from "single DB instance" numbers). Still, it takes a lot of work to get good numbers on TPC-C, and a lot of that work will benefit normal DB users.

          MS has good numbers because they did that work.
          Oracle also has excellent numbers on Unix and Windows systems. DB2 also.

          Oh, and I don't like MS numbers. When scalability or performance is required I'll recommend Oracle or DB2 over SQL Server any day of the week.

          But to think that the benchmarks are tailored to MS because they are members... They are as much tailored to MS as they are to Oracle, HP, IBM, Sun (well, maybe not Sun; you'd need major tailoring to make Sun look good on any bench :-)).

          You'll see open-source DB vendors join tpc.org when their software reaches the level of performance needed to show decent numbers on _current_ TPC benchmarks (I'm sure TPC-C will be replaced as it becomes increasingly irrelevant). Until then, open-source zealots will feel the need to spread FUD about tpc.org.

      • You obviously need to look more closely at how Linux scales on 8P machines and larger before making such statements

        If Linux scalability beyond 8 processors is really an issue, then I guess the SGI Altix 3700 [sgi.com] is just vaporware.

        I suggest that you read the following articles, which debunk the myth of the 8-processor barrier:

        SGI Busts into Linux with 64-Processor Scalability [linuxplanet.com]

        NEC Calls Dibs on Breaking Linux Eight-Processor Limit [linuxplanet.com]

        I personally hope that these benchmarks can be run against more recent kernels, with a full description of the optimizations and patches used.
        Considering that SGI is using a [somewhat] standard 2.4.19 kernel to scale this well, I am certain that the results will be much better.

    • IMHO most optimization and tuning issues roughly come down to three things: a static component (e.g. RAM used for caching), a variable component (e.g. RAM used for each request), and a 'panic' component (e.g. extra work needed for requests when running out of RAM). It's typically these kinds of differences in behaviour and system load that are interesting to compare. Even with an M$ box.

    • They often compare performance to older kernels, or even to other departments' patches.
    • Linux already blows

      Amen. +5, Insightful.

  • by Anonymous Coward
    is here [207.46.196.102]

    Some howtos include recompiling the kernel, enabling UDMA, turning off logging, and enabling MMX enhancements.
    • by r6144 ( 544027 ) <(moc.uhos) (ta) (k6r)> on Saturday January 18, 2003 @06:19AM (#5107117) Homepage Journal
      I have installed Linux several times now, on different machines (mostly Red Hat). UDMA settings are almost always right on modern machines. The only exception is an old P166 machine with a very old HD, where the original 2.2 kernel does not support DMA on it, but 2.4 does (transfer rate 5MB/s -> 10MB/s). Fussing with the kernel usually doesn't give much benefit, and is definitely not for newcomers.

      Things that are actually useful: disabling unnecessary services on startup (if you don't use atd, don't start it, to save start-up time; and on many machines it is unnecessary to detect hardware changes with kudzu on every boot); and, for machines with multiple HDs, putting the swap on the faster HD.
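
      For example, something like this - a rough sketch only; the service names and device paths are illustrative, and it assumes a Red Hat-style box with chkconfig and IDE disks:

        # skip services you never use
        chkconfig atd off
        chkconfig kudzu off        # no hardware probing on every boot
        # confirm/enable DMA on the first IDE disk
        hdparm -d1 /dev/hda
        # in /etc/fstab, prefer swap on the faster disk (higher pri is used first):
        #   /dev/hdc1  swap  swap  pri=2  0 0
        #   /dev/hda2  swap  swap  pri=1  0 0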

      • Over the last two years, on a production database server running Red Hat, I found that I needed to recompile the kernel quite a few times to get new hardware support and bugfixes not found (at the time) in the stock Red Hat kernel. Support for the Promise controllers on Asus motherboards, for example, tends to lag a few months behind the appearance of the hardware, the first patches, and support in the vanilla kernel release. More recently, the latest Red Hat kernel (2.4.18-19.7.x) for Athlon does not enable IO-APIC, because apparently it locks up some laptops. Well, I think it's a good idea for my server, so away I go getting kernel-source.rpm.

        Now, I respect the testing and validation RedHat provides with their kernels, so I use them when I can. Arguably, if I would use more server-oriented hardware it wouldn't be an issue, but my budget is, to put it mildly, modest.

        But you're right in the sense that there is probably little to be gained in saving, say, 50KB in your bzImage by cutting out drivers that you don't use, etc. At least I don't see it subjectively; maybe somebody else can volunteer some benchmarks. I think the attitude that you can really see the difference by recompiling your own kernel for performance is a holdover from the days when the major distros compiled only for i386 and memory was a whole lot tighter.
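
        (For what it's worth, the rebuild dance is short - a sketch, assuming the Red Hat kernel-source package is unpacked under /usr/src/linux-2.4:

          cd /usr/src/linux-2.4
          make menuconfig              # e.g. turn IO-APIC support back on
          make dep                     # still required on 2.4-series kernels
          make bzImage modules
          make modules_install install

        then check that the boot loader was updated, if "make install" didn't do it for you.)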
    • No wonder M$ apps are so slow and bloated...

      All their programmers are out pretending to be helping *nix admins. :)

    • I tried it, but I got this slow, ugly 80s throwback operating system that didn't do DMA, had no logging, and had piss-poor hardware support. Then to top it all off, I had to keep phoning this guy up to ask if I could use it.

      "Bollocks to that", I thought, and put Unix back on.
    • Amazing. An IP which isn't a goatse link :P.
  • by Chatz ( 8725 ) on Saturday January 18, 2003 @05:07AM (#5107033)
    It would be great to see a follow-up, with some examples, on how these tools are used to actually track down a performance problem. I have taken some performance data and made completely the wrong judgement about what the expected behaviour is, what the bottleneck is, and what to do to fix it, and I have seen many others do the same.

    I was also surprised to see that they still use some of the old performance monitoring tools, looking at /proc and other ASCII tools, rather than something like PCP [sgi.com] that collects all these statistics together so that you can look at any combination of subsystems on the same timeline. Then they could have graphs showing the interaction and load on the disk, CPUs, VM, network, etc.
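
    Even without PCP you can fake a common timeline with a trivial sampler - a minimal sketch, with the interval and log file chosen arbitrarily:

      while true; do
          echo "=== $(date +%s)"
          head -n 1 /proc/stat      # aggregate CPU counters
          cat /proc/loadavg         # load averages / run queue
          sleep 5
      done >> perf.log

    but you still have to line the numbers up yourself afterwards, which is exactly the drudgery a tool like PCP removes.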

    • by spongman ( 182339 ) on Saturday January 18, 2003 @05:19AM (#5107045)
      I agree. This article is essentially useless. They're basically saying "hey look, we made it faster, woohoo!" but they completely gloss over the details of how they did it. Where are the cumulative patches against the various stock kernels?
      • by Chatz ( 8725 ) on Saturday January 18, 2003 @05:25AM (#5107052)
        That's probably a bit harsh; both IBM and SGI have worked pretty hard to get scalability improvements into the Linux kernel. The article does mention some of these things:

        Some of the issues we have addressed that have resulted in improvements include adding O(1) scheduler, SMP scalable timer, tunable priority preemption and soft affinity kernel patches.

      • There is some interesting info at the bottom of this page [redhat.com] outlining some improvements Oracle and Red Hat have made to the Linux kernel, regarding things such as SMP processor affinity and asynchronous I/O. Presumably these are open-source changes -- the article doesn't mention them at all.
      • That's because the changes were merged into the 2.5.x kernel series.

        As for the list of things IBM has had a hand in lately: there was the above-mentioned O(1) scheduler, lockless PID allocation, faster threads, IRQ load-balancing improvements, and the retooling of several drivers' SMP locking. And that's just what I can remember without actually going through my kernel archives.
  • by Subjective ( 532342 ) on Saturday January 18, 2003 @05:27AM (#5107057)
    Wouldn't we (always) want to improve the Linux kernel's performance in comparison to itself?

    Why is what we compare it to the most important issue?



    Sure, we want to see how the Linux kernel is performing, but that's separate from increasing its performance - when working on the performance of a single part, people build a test for that part and tweak it.

    No benchmark or comparison is required in this case.

    • by r6144 ( 544027 ) <(moc.uhos) (ta) (k6r)> on Saturday January 18, 2003 @06:12AM (#5107106) Homepage Journal
      When running with multiple CPUs, the kernel instances running on those CPUs need regulation when they access shared data. Such regulation is usually implemented with locks. A simple approach is to use a small number of "big" locks (like a lock that ensures only one CPU can run kernel code at a time). This is very simple and easy to debug, but it may cause poor performance, because one CPU cannot (for example) do network transfers while another is reading from disk, even though this should be allowed in principle. So we should use finer-grained locks. However, as we make locks more and more fine-grained, we have more and more of them, so things get messy and hard to debug, and the locking/unlocking overhead goes up, degrading performance on machines with fewer CPUs. Because of this cost, we should only make locks finer-grained when benchmarks show it actually improves performance significantly.

      Of course this applies to other things, like making transfers zero-copy, too.

  • by Anonymous Coward
    >time make clean bzImage modules
    [...]
    real 6m2.519s
    user 5m13.950s
    sys 0m20.080s

    => efficiency: 93.6%
    (2.4.18,xfs,ide)
  • by rufusdufus ( 450462 ) on Saturday January 18, 2003 @05:56AM (#5107094)
    These benchmarks, like so many you see nowadays, do not include or even mention deviation across benchmark runs. There is no evidence that the tests were run more than once in order to achieve a statistically sound view of the benchmark numbers.
    In principle, every benchmark should come with an average value and an error margin. Without these, the data should not be trusted. It not only means the margin of error *might* be over 100%; it suggests the people running the benchmarks don't know what they are doing.

    There are a lot of reasons benchmarks can have errors; for one, the benchmark program itself can be broken. How would you know that the numbers returned by some test weren't random if you didn't run it more than once?
    Also, disk drives and networks have latencies that can make a huge difference; those differences can wash out the apparent benefits of OS tweaks.
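
    Getting a mean and a spread is cheap, too - a sketch, with ./benchmark standing in for whatever is being measured and GNU time assumed:

      for i in $(seq 1 10); do
          /usr/bin/time -f "%e" ./benchmark 2>> times.txt
      done
      # mean and (population) standard deviation of the elapsed times
      awk '{ n++; s += $1; ss += $1*$1 }
           END { m = s/n; printf "mean %.2fs  stddev %.2fs\n", m, sqrt(ss/n - m*m) }' times.txt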

    • It wasn't some guy in his garage, and the data was presented in graph form (not chart). You can assume that for each data point the testing methodology was the same, and that the trend-line results were the most important pieces. Besides, they admit up front that they're only trying to improve kernel performance, not guarantee that Apache version $foo.0 can serve 2000 hits per second.

      --Robert
  • by dmeranda ( 120061 ) on Saturday January 18, 2003 @06:01AM (#5107097) Homepage
    Why does it seem that all these benchmarks are primarily concerned with CPU performance, network throughput, or single-disk reading and writing? For a large category of enterprise applications (which this paper says it is trying to address), I/O performance is usually the most important part.

    The problem is that the typical PC hardware is just not designed for that. Large proprietary Unix or mainframe systems usually have multiple very high speed buses; a single 32-bit PCI bus is rather low-end in comparison. Now of course this is not Linux's fault; but then again, Linux is not just a PC operating system! So I guess my question is: if this is about benchmarking Linux for enterprise use, how about some information about Linux running on enterprise-class hardware rather than souped-up PCs? I'm sure IBM must have a few resources there.

    In particular I'm interested in how the Linux kernel is designed to handle multiple independent I/O buses. Are the I/O schedulers weighed down with locking issues or interrupt contention? What about the allocation of memory buffers between faster and slower I/O devices? Or its support for the advisory I/O operations (hinting) that some proprietary OSes provide? What about asynchronous I/O?

    And of course Linux suffers from the general Unix philosophy when it comes to giving I/O the same level of attention as the CPU. For instance, there are lots of processor-use controls: process nice levels, processor affinities, real-time schedulers, threading options galore, etc. But how do you say that a given process may only use 30% of the I/O bandwidth on a particular bus? Those are things mainframes were good at, so how does Linux on mainframes compare?
    • by g4dget ( 579145 ) on Saturday January 18, 2003 @07:37AM (#5107208)
      In particular I'm interested in how the Linux kernel is designed to handle multiple independent I/O buses.

      By running multiple kernels. Seriously: the way to get great performance out of PC hardware is to buy lots of it and cluster it. You still end up paying less for more performance than with the high end systems.

      • But what if you are running a big database? Building a database cluster is not exactly simple.
        • There are a bunch of commercial products that make building distributed databases fairly easy. IBM promises that with one of their DB2-based products, you basically just plug in a new machine and point it at the master database server.

          Some open-source equivalent would sure be nice. But even something homegrown for particular applications isn't too hard; usually you can pretty easily find an obvious field by which to distribute and balance database content across servers.

    • by virtual_mps ( 62997 ) on Saturday January 18, 2003 @07:55AM (#5107228)

      The problem is that the typical PC hardware is just not designed for that. Large proprietary Unix or mainframe systems usually have multiple very high speed buses; a single 32-bit PCI bus is rather low-end in comparison.


      A single 32-bit PCI bus is anemic these days. That's why high-end servers based on IA-32 processors include multiple PCI buses, increasingly PCI-X (133MHz, 64-bit). Note that servers based on other processors are increasingly moving away from proprietary buses and using the same PCI you'll find in those Intel-based systems.
  • by Anonymous Coward
    I've been reading the comments from some Mozilla people ever since Apple came out with Safari based on KHTML, and it's been suggested that the bloat and delay of Mozilla comes from too many developers. Makes me wonder if Linux will succumb to the same problem.
    • Sure, here [freedos.org] are the first signs. Hard to believe? Even Bob X. Cringely says "Even today, you can still get to a C: prompt under Windows XP, which means a disk operating system is hiding there no matter what Microsoft wants us to believe." [pbs.org] I don't know what he means by this, but it has DOS in it.
      • I'd say that's a rather strange conclusion. The only thing the 'C:' means is that there is a non-graphical shell for using the OS.

        My cellphone has something called Explorer, very similar to the M$ one. You can browse some kind of filesystem with it. Does that mean it's running Windows? Does that mean there is a disk in it? Download Cygwin and then Windows can come up with '/root/ $:'.

        Just because it is in print does not mean it is true. Cringely is wrong about a lot of things, and this is one of them. MS maintained the drive-lettering construct for backwards compatibility, and with each revision there are fewer and fewer limitations because of it (in 2k, MS introduced the ability to mount drives as folders in existing file systems, a Unix-like feature). In the NT/2k/XP family, DOS programs can only run in a DOS virtual machine (NTVDM), somewhat similar to the Java model.

        ostiguy
    • The developer momentum behind Linux is somewhat more diffuse than in Mozilla. There are thousands of device drivers to build and maintain, for instance. Work performed on those device drivers doesn't "bloat" the main kernel, but does drive up the developer count substantially.

      Not to say that featuritis isn't a threat. But ironically, the very "disadvantage" of Linux, its monolithic design that microkernel hackers love to bash, is making it pretty hard to add new features willy-nilly. If we were using the HURD, the kernel would be 900 megs by now... (and Emacs would be a kernel module)
    • Not THAT bloated. (Score:2, Interesting)

      by r6144 ( 544027 )
      As slow and bloated as mozilla? Probably not. Although the code does look a bit messy and bloated in some places, a bit like sendmail or gcc (i.e., code size and speed may be good, but there is still a lot of code that is not easy to maintain).

      Mozilla uses C++ (most methods are virtual) and component interfaces like XPCOM. Such things probably enhance developers' productivity, but they incur quite a bit of overhead in code size and (less so) in speed.

      It is great that core developers actually care about code size and instruction-level speed (such as the recent syscall patch, or those highly optimized inline functions in headers), and there are many people sending patches to clean up code. Maybe linux won't get as bloated as mozilla after all...

  • by Anonymous Coward
    Benchmark junkies abound and have wet dreams over these articles.

    I am one of them.

    Please, mooore!!!
  • by Anonymous Coward on Saturday January 18, 2003 @06:20AM (#5107119)
    "In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be."
  • Sub kernel? (Score:3, Interesting)

    by Jedi Binglebop ( 204665 ) on Saturday January 18, 2003 @06:29AM (#5107135) Homepage
    What about kernel developers creating a sub-version of the kernel, used only by those who choose to, that would log and relay information on the performance of that kernel on various users' machines?

    Is this a bad idea? Would it take too many hours of extra work?

    -JB
  • by ehack ( 115197 ) on Saturday January 18, 2003 @06:51AM (#5107154) Journal
    I wish there were some interactive workload benchmarks - I know this is history, but when the kernel went to 1.2 I found my machine really slow; the benchmarks were better, but somehow the usability had gone down. It would be neat to measure the way mouse tracking feels, the "snap" with which menus open in an application, Netscape fetching and rendering a page, etc. Kernel compilation and numerics are not the main use of a desktop machine these days...

    On a related note, my Mac PowerBook was really sluggish until I managed to kill some unneeded processes; they weren't really eating up time by themselves, but they were somehow impacting system reactivity: the load factor hardly moved, but the system became responsive to mouse clicks.

    • The contest [optusnet.com.au] benchmark might be what you are looking for. It tests system responsiveness by running kernel compiles under different kinds of load.

      Still based on kernel compiles, granted, but at least it tries to measure responsiveness. It has been used heavily to benchmark recent kernels - check Kernel Trap [kerneltrap.org] for results.

      The Linux scheduling latency [zip.com.au] page of Andrew Morton might be useful as well. Alas, kernel patches tend to work on x86 first, before PPC...
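
      The basic idea is also easy to approximate by hand - a crude sketch, with the load generator and source tree chosen arbitrarily:

        make clean >/dev/null; time make -s bzImage         # baseline compile
        dd if=/dev/zero of=/tmp/io.load bs=1M count=2048 &  # competing writer
        make clean >/dev/null; time make -s bzImage         # same compile under I/O load
        wait; rm -f /tmp/io.load

      contest does essentially this, but with several load types and proper bookkeeping.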
  • by AtomicX ( 616545 ) on Saturday January 18, 2003 @07:07AM (#5107172)
    I agree that I/O is currently a weakness of Linux and that it needs a lot more attention. CPU speed, and the ability of Linux to make the most of the processor, is very good and already well developed. CPUs have advanced so far in the past few years that the CPU is no longer the main bottleneck of the system. I/O technologies, meanwhile, have stood pretty much still, with only small advances, so no wastage or inefficiency on the part of the OS is acceptable.

    It is a pity that Linux developers, like Unix developers, have become a little stuck in their ways - hopefully they will do their best to address this in the 2.6 and 3.0 kernels.

    I like the idea of a modularized kernel, where people could use the I/O system that best suited their setup - but this could involve an awful lot of division and argument, and the number of bugs that would result could be huge. Perhaps Linux itself could automatically adapt the way it works to suit its workload, solving the problem of Linux's hugely varying performance. Does anyone else have any suggestions or comments on this?
  • "Some of the issues we have addressed that have resulted in the improvements shown include adding O(1) and read copy update (RCU) dcache kernel patches and adding a new dynamic API mod_specweb module to Apache."

    Uhmm... isn't this considered cheating?

    source code for the patch [apache.org]
  • by Anonymous Coward
    rm -Rf /
  • The article lacks substance; specifically, what did they tune to arrive at the results they claim? None of that basic information is included in the report.
  • by Featureless ( 599963 ) on Saturday January 18, 2003 @11:50AM (#5107848) Journal
    The IBM paper is interesting, but beyond these straightforward kinds of measurements, I can think of a lot of better approaches to improving kernel and core application performance, based on research I've seen... When I was doing profiling work on supercomputers a few years back, I surveyed the tools and found some systems that take really novel approaches which could definitely be adapted to this purpose. I suppose word doesn't really get out about some of this stuff; anyway, take a look and see for yourself:

    S-Check [nist.gov]

    S-Check starts with your original source code and points suspected of being bottlenecks. It adds artificial delays at the specific points throughout the parallel code. These delays can be switched ON or OFF. The switched delays generate numerous new versions of the program, with the delays simulating adjustments in code efficiency. S-Check methodically executes the many variants, recording delay settings and corresponding run times. S-Check analyzes the recorded entries against a linear response model using techniques from statistics. The results are a sensitivity analysis from which program problem areas can be identified. This provides a portable, scalable, and generic basis for assaying parallel and network based programs.

    Paradyn [wisc.edu]
    (overview) [wisc.edu]

    "...a heuristic, goal-seeking algorithm was coupled with a dynamic instrumentation package to drive an automated, systematic inquiry into the performance of a parallel application."

    The upshot is tools that can instrument a running system on the fly, and that use statistical techniques to identify "hot spots" by looking at the amount of "collateral damage" caused by adding artificial delays at a particular location. You can even go further, mapping out relationships, etc.

    These approaches came out of parallel supercomputing because, in that field, traditional approaches to benchmarking and profiling are often useless and/or impractical, and the systems (and programming problems) have become so complex that effective hand-tuning is nearly impossible as well. Of course the kernel isn't so simple either, and these days you have parallelism to boot... I would love to see these techniques solving a wider range of problems.
  • And in conclusion, graphs are going up... so I'm happy.

    Cheers
    Stor
  • Look at the authors' email addresses: two from IBM and one from AMD. It seems AMD is looking at Intel boxes? "The architecture used for the majority of this work is IA-32 (in other words, x86), from one to eight processors. We also study the issues associated with future use of non-uniform memory access (NUMA) IA-32 and NUMA IA-64 architectures."
    Hmm, I am sure the next Hammers can do NUMA; maybe they are trying to do it better in Linux.
  • I don't trust any article that calls it a "kernal" [ibm.com].
  • One thing that's hard to measure is desktop performance.

    I have a crap all-in-one mobo with shared-memory graphics and no DRI support (OK, I needed a PC quick). KDE is super clunky under 2.4, even with the CK performance patchset.

    Under 2.5 the desktop is quick and smooth, applications seem to load a lot faster, and Java applets don't hog the CPU.

    So, if you're running Linux on the desktop and you feel sufficiently competent, start testing 2.5.

"Show me a good loser, and I'll show you a loser." -- Vince Lombardi, football coach

Working...