
Tuning The Kernel With A Genetic Algorithm

fsck! writes "Jake Moilanen provided a series of four patches against the 2.6.9 Linux kernel that introduce a simple genetic algorithm used for automatic tuning. The patches update the anticipatory IO scheduler and the zaphod CPU scheduler to both use the new in-kernel library, theoretically allowing them to automatically tune themselves for the best possible performance for any given workload. Jake says, 'using these patches, there are small gains (1-3%) in Unixbench & SpecJBB. I am hoping a scheduler guru will be able to rework them to give higher gains.'"
  • by Anonymous Coward on Saturday January 08, 2005 @09:09AM (#11296207)
    A common criticism of Open Source is the accusation that it lacks innovation.

    I mean, come on. Just look at this. Amazing.

    And even if it turns out to be not that good, it was still a good read :-)
  • Complexity? (Score:5, Insightful)

    by BurntNickel ( 841511 ) on Saturday January 08, 2005 @09:20AM (#11296244)
    So how much additional complexity is added for a 1-3% performance improvement? I'm all for more speed, but keeping things simple can often be more important when it comes to maintainability and adding additional features.
  • by Illserve ( 56215 ) on Saturday January 08, 2005 @09:21AM (#11296248)
    If a parameter space is complex enough that you can use a genetic algorithm to tune it, the solutions it finds may have all sorts of unexpected potholes, bugs, etc.

    In other words, non-competitive genetic algorithms are only as smart as the fitness function you give them. If your fitness criteria aren't smart enough to cover all the bases, your solutions will have unpredictable deficiencies.
  • by Anonymous Coward on Saturday January 08, 2005 @09:24AM (#11296261)
    No, this isn't amazing.

    It's KEWL.

    No different than every open source freak over the years proclaiming how 3D desktops were so obviously better than the existing 2D desktops of the day.

    And all the while cut/copy/paste still doesn't work properly...
  • Not worth it... (Score:2, Insightful)

    by mikelang ( 674146 ) on Saturday January 08, 2005 @09:36AM (#11296302)
    A 1-2% gain is within the margin of statistical error. Definitely not worth the increased complexity of the solution.
  • by gclef ( 96311 ) on Saturday January 08, 2005 @09:46AM (#11296331)
    Nice troll, but your sarcasm presents a common fallacy: that work on one issue (adding features like this) means that less work is being done on some other issue (cleaning up security problems). The fact is, throwing more people at a problem does not always make it better, especially if the people you throw at it don't know the subject (which the author of this algorithm may not, can't speak for him).

    In other words: if you have someone who's good at writing Genetic Algorithms, but not so good at searching for kernel vulns, why devote that person to security? Why not let them write their contribution, and have the guys who are good at security do their part separately? Why should we only do one thing at once?
  • Re:Complexity? (Score:3, Insightful)

    by bersl2 ( 689221 ) on Saturday January 08, 2005 @09:56AM (#11296367) Journal
    It's only one guy (I think; didn't RTFA), and it's nowhere near being included in the mainline kernel. Your observation may be correct in general, but what's the problem here? If an experiment pans out, the internals will be changed down the line to incorporate the new idea; if this idea were to have yielded a 5-10% increase in performance, would you have made that comment?
  • Re:Not worth it... (Score:5, Insightful)

    by Corfe ( 246685 ) on Saturday January 08, 2005 @10:03AM (#11296386)
    It's a unique idea - what's wrong with running it for a while with your typical load (say, for a fileserver), finding some better-than-average parameters for the kernel, then running an unpatched kernel with those parameters manually entered?

    What counts as "statistical error" depends on how many times the test was run and how much variation there was across his earlier results. I think it's pretty safe to assume that someone who knows how to implement a genetic algorithm in the Linux kernel also knows how to handle statistics properly.
  • by Corpus_Callosum ( 617295 ) on Saturday January 08, 2005 @10:23AM (#11296448) Homepage
    The main problem with this or any other adaptive tuning mechanism is actually acquiring performance metrics.

    What is the system using to decide if a new parameter set is better than a previous? What is the fitness test?

    Some applications are disk-bound, others are CPU-bound, others are network-bound. The performance dance is often non-obvious (e.g. some applications will achieve generally higher performance by giving I/O higher priority than context switching, while others that appear to behave similarly will achieve higher performance by reversing that order).

    The kernel does not have any mechanism to determine whether a particular application is performing better or worse; it can only really get a gauge of throughput and load. While this MAY be enough to get small system-wide performance gains, in order to really achieve significant application-specific performance gains, I think that applications would need to explicitly add support for adaptive tuning by logging relevant performance metrics for the kernel to tune around.

    Thoughts?
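One way to make this concern concrete: an in-kernel fitness test has to be built from coarse, system-wide counters, and very different application behaviors can collapse to the same score. A minimal sketch in Python (the counter names and weights here are hypothetical stand-ins, not real kernel statistics):

```python
# Sketch of a throughput-only fitness built from the kind of coarse,
# system-wide counters a kernel can actually observe. The field names
# and weights are hypothetical, chosen only for illustration.

def fitness(sample):
    """Reward raw throughput, lightly penalize load, knowing nothing
    about whether any particular *application* is actually happier."""
    io = sample["sectors_per_sec"]
    ctx = sample["context_switches_per_sec"]
    load = sample["runqueue_len"]
    return io + 0.1 * ctx - 5.0 * load

# Two parameter sets can score identically here even though one of them
# may be far better for a latency-sensitive workload -- exactly the gap
# between "gauge of throughput and load" and per-application metrics.
a = {"sectors_per_sec": 900.0, "context_switches_per_sec": 1000.0, "runqueue_len": 2.0}
b = {"sectors_per_sec": 1000.0, "context_switches_per_sec": 0.0, "runqueue_len": 2.0}
print(fitness(a) == fitness(b))  # True: indistinguishable to this fitness test
```

If applications exported their own metrics (request latency, frames per second), the fitness function could tune toward what actually matters to each workload.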
  • Re:Complexity? (Score:2, Insightful)

    by devillion ( 831115 ) on Saturday January 08, 2005 @10:24AM (#11296452)
    Understanding (the basics of) GAs is easy, and so is implementing them. They also work quite well. That's why they are so popular.

    IMHO, if a GA implementation can be made really reliable, it could maybe replace other code that may be (I don't know) even more complicated.

  • Re:Complexity? (Score:3, Insightful)

    by m50d ( 797211 ) on Saturday January 08, 2005 @10:29AM (#11296468) Homepage Journal
    Like I've said before, kernel 2.6 is simply not stable yet. Wait until they fork off 2.7, then with luck it will settle down.
  • by Earlybird ( 56426 ) on Saturday January 08, 2005 @10:33AM (#11296486) Homepage
    Because most people aren't intimately familiar with genetic algorithms, and because GAs are associated with machine learning/artificial intelligence, GAs are seen as somewhat mysterious and magical, and are therefore either accepted with "whoa, cool!" or rejected with "whoa, complex!" While GAs are indeed novel compared to many long-established algorithms, both mentalities are overreactions.

    In reality, the basic GA framework is "just" another efficient search algorithm, no cooler or more complex than, say, a hash table or a binary search tree; at its simplest, a GA is a way to find an optimal configuration of components without looking at all possible (potentially explosively exponential) combinations; instead, you look at just some permutations, and as you iterate through generations, applying breeding and mutation, you arrive at a generation which is statistically close to optimal.

    GAs are also in no way new or unproven technology; a nice example of mainstream use is PostgreSQL [postgresql.org]'s query planner, which uses GAs to optimize query plans.
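The "just another search algorithm" framing above is easy to see in a toy GA. This is a generic sketch (the selection scheme, rates, and toy fitness function are illustrative choices, not taken from the kernel patches):

```python
import random

def evolve(fitness, n_params, pop_size=20, generations=50,
           mutation_rate=0.1, seed=0):
    """Minimal generational GA: elitism, breeding from the fitter half,
    uniform crossover, and occasional small Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                       # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)   # breed the fitter half
            child = [a[i] if rng.random() < 0.5 else b[i]  # uniform crossover
                     for i in range(n_params)]
            child = [g + rng.gauss(0, 0.1) if rng.random() < mutation_rate else g
                     for g in child]                # occasional small mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness with its peak at (0.5, 0.5): the GA gets close without
# enumerating the (continuous, effectively infinite) search space.
best = evolve(lambda p: -sum((x - 0.5) ** 2 for x in p), n_params=2)
```

Swap the toy fitness for a benchmark score and the parameter vector for scheduler tunables, and you have the shape of the in-kernel approach.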

  • Re:Complexity? (Score:2, Insightful)

    by gnuLNX ( 410742 ) on Saturday January 08, 2005 @11:44AM (#11296804) Journal
    GAs are pretty trivial to code, so very little complexity is added.
  • Re:not a panacea (Score:2, Insightful)

    by John Little John ( 842934 ) on Saturday January 08, 2005 @01:24PM (#11297494)
    They might converge on a point of attraction that is not the highest possible.

    Mutation helps with convergence issues.

    Sure, the only way is to exhaustively search the "chromosome" space for every possible combination, and computers are good at brute force!

    Um, I don't even know how many adjustable parameters are involved, but it does not take that many to make exhaustive search useless, even for a computer.
  • by Illserve ( 56215 ) on Saturday January 08, 2005 @01:53PM (#11297695)
    "First thing: A GA is only truly effective if you let it exhaustively search the search space"

    If you have the resources to exhaustively search the space... you don't need a GA.

    A GA is generally used when the search space is hopelessly huge and you need to chart a course from A to(wards) B but you don't know the way.

    And in finding this solution, which is "grown", not engineered, it's much easier for unintended weirdnesses to creep in. A GA might solve problem X by ruining performance on problem Y, something that you, as a software engineer, would never even consider viable, and hence you might forget to make the fitness function check problem Y along with problem X.
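The "forgot problem Y" failure mode is easy to demonstrate. In this toy model (the workload shape and weights are made up purely for illustration), a throughput-only fitness drives the tunable to an extreme, while scoring both objectives does not:

```python
# Hypothetical tunable: batch size. Bigger batches raise throughput (X)
# but latency (Y) blows up quadratically. Both curves are invented here
# just to illustrate the fitness-function trap.

def throughput(batch):
    return batch            # problem X: more batching, more throughput

def latency(batch):
    return batch ** 2       # problem Y: latency grows much faster

def naive_fitness(b):
    return throughput(b)                      # only measures X

def guarded_fitness(b):
    return throughput(b) - 0.1 * latency(b)   # penalizes Y as well

candidates = range(1, 51)
print(max(candidates, key=naive_fitness))    # 50: "best" until you measure latency
print(max(candidates, key=guarded_fitness))  # 5: balances both objectives
```

The GA did nothing wrong in either case; it optimized exactly what it was told to. The engineering effort moves into writing the fitness function.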

  • by ColaMan ( 37550 ) on Saturday January 08, 2005 @03:40PM (#11298506) Journal
    Sooo...... can I copy and paste an image from gPhoto into The Gimp yet? No? Call me when you guys manage to do what windows 3.0 could.
  • Re:Not worth it... (Score:3, Insightful)

    by xenocide2 ( 231786 ) on Saturday January 08, 2005 @04:59PM (#11299105) Homepage
    Tom's Hardware consistently favors computer hardware that pushes above the competition by only 1 percent or less. People spend an extra 40 dollars for this performance, and you're not willing to consider that people might like a FREE performance boost of a percent?
  • by Corpus_Callosum ( 617295 ) on Saturday January 08, 2005 @08:36PM (#11300597) Homepage
    "If the GA starts with 20 predetermined tuning parameter sets and mates and mutates those, then all we have is a fancy hill-climber."

    "This isn't true. The mutation alone differentiates it from a hill-climber."
    Mutation in genetic algorithms is supposed to act as a hill-climber *most of the time*. Mutations are not supposed to make drastic changes - that is the job of random individuals inserted into populations and of recombination (mating). Mutation (of the bit-flipping variety) is mostly there to provide the opportunity to make good solutions better.
    "The problem of hill-climbers is that they get stuck at local peaks. Genetic algorithms don't."
    Genetic algorithms get stuck at local maxima all the time, especially in populations with a low population count and low genetic diversity.
    "In order to find novel maxima in the search-space, the GA should not be constrained - it must have the ability to exhaustively search, even though it does not (of course) do such a thing."

    "I don't understand what you consider to be the constraint in this situation. The genetic algorithm does have the ability to find any part of the search space as far as I can tell."
    I need to take a look at the code, but my understanding from reading the article was that the GA starts with predefined tuning solutions and uses recombination and mutation to change them. In order to *effectively* search the search-space, more randomness needs to be present (e.g. generate a couple of random solutions each generation). Otherwise, the GA will pretty quickly converge to a local maximum and stick there (even with mutation, although mutation *can* kick it out of a local maximum eventually - assuming there is a higher peak to be found within the range of the mutation function).
    "I don't see how a fitness function could determine whether or not it had found "the hill"."
    Why does it have to? The point is, don't use the GA *all the time*. Use it until the system stabilizes and then hill-climb for a while. The results will be better for a couple of reasons: (1) you are not running all the bad solutions at the bottom of your population all the time while hill-climbing (only the best solution), and (2) your best solution will be getting deterministically better through hill-climbing. A GA is a bad hill-climber but good at finding hills. Augmenting a GA with a hill-climber will improve solutions (often dramatically).

    Writing a GA is simple. Writing a GA that delivers results, now that is a different beast altogether. My experience has shown me that there are some rather more complicated things that need to be taken into account, such as:
    • Remembering previous good solutions instead of breeding them out of the genepool. In this scenario, it is likely that there are multiple eigenloads that a machine finds itself in. Having to re-generate the correct tuning parameters each time the load switches between these *magic* states is cumbersome. But if the genes that worked in the other eigenstate are still in the genepool, it becomes trivial. Often, this is achievable by augmenting the fitness function to take into account the previous aggregate fitness of a solution or somesuch.
    • Keep the right amount of randomness flowing. If the GA does not have a good source of randomness to draw on, it will operate much like a hill-climber and will oscillate between a few solutions. It will get stuck.
    • Keep the population correctly sized
    • Have multiple populations that occasionally interact
    • Statistical mating is important
    • In a case like this, the best current solution should get the most timeslices. Period. And this should be like 10x the total number of timeslices for the entire population (e.g. population=20, each generation is 220 timeslices - top solution gets 200, each other gets 1)
    • Tuning mutation and crossover is extremely important
    • Fitness test. Fitness test. Fitness test.
    • many many other things to consider
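The "use the GA to find the hill, then hill-climb it" idea from this thread can be sketched in a few lines. Everything here (the toy objective, step size, and budget) is an illustrative stand-in, not the actual tuning loop from the patches:

```python
import random

def hill_climb(fitness, start, steps=200, step_size=0.05, seed=1):
    """Deterministic polish for a GA's best individual: try a small
    random perturbation, keep it only if fitness strictly improves."""
    rng = random.Random(seed)
    best, best_f = list(start), fitness(start)
    for _ in range(steps):
        cand = [g + rng.gauss(0, step_size) for g in best]
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best

# Toy objective with its peak at (0.3, 0.7). Suppose the GA has stalled
# at a rough nearby solution; the climber tightens it up.
peak = (0.3, 0.7)
fit = lambda p: -sum((x - t) ** 2 for x, t in zip(p, peak))
rough = [0.25, 0.78]                 # stand-in for a GA's best individual
polished = hill_climb(fit, rough)
print(fit(polished) >= fit(rough))   # True: the climb never accepts a worse point
```

Because the climber only ever accepts improvements, it is monotone and cheap, which is why it pairs well with a GA that has already found the right hill.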
