Can AI Be Used to Fine-Tune Linux Kernel Performance? (zdnet.com)

An anonymous reader shared this report from ZDNet: At the Linux Plumbers Conference, the invite-only meeting for the top Linux kernel developers, ByteDance Linux Kernel Engineer Cong Wang proposed using AI and machine learning to tune the Linux kernel for maximum results on specific workloads... There are thousands of parameters. Even for a Linux expert, tuning them for optimal performance is a long, hard job. And, of course, different workloads require different tunings for different sets of Linux kernel parameters... What ByteDance is working on is a first attempt to automate the entire Linux kernel parameter tuning process with minimal engineering effort.

Specifically, ByteDance is working on tuning Linux memory management. ByteDance has found that with machine learning algorithms, such as Bayesian optimization, automated tuning can even beat most Linux kernel engineers. Why? Well, the idea, as Wang wryly put it, "is not to put Linux kernel engineers out of business." No, the goal is "to liberate human engineers from tuning performance for each individual workload, while making better decisions with historical data, which humans often struggle with. And, last but never least, to find better solutions than those we come up with using our current trial-and-error, heuristic methods."

In short, ByteDance's system optimizes resource usage by making real-time adjustments to things like CPU frequency scaling and memory management.
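To make the idea concrete, here is a minimal sketch of what such a tuning loop might look like. ZDNet doesn't publish ByteDance's code, so the library (scikit-optimize), the two sysctl knobs, their ranges, and the workload_bench.sh benchmark below are all illustrative assumptions, not the actual system:

    # Hypothetical sketch: Bayesian optimization over two kernel knobs.
    import subprocess
    from skopt import gp_minimize  # pip install scikit-optimize
    from skopt.space import Integer

    # Illustrative search space: two sysctl knobs with plausible ranges.
    SPACE = [
        Integer(0, 100, name="vm.swappiness"),
        Integer(100, 6000, name="vm.dirty_expire_centisecs"),
    ]

    def objective(values):
        # Apply the candidate settings, then benchmark the workload.
        for dim, val in zip(SPACE, values):
            subprocess.run(["sysctl", "-w", f"{dim.name}={val}"], check=True)
        out = subprocess.run(["./workload_bench.sh"], capture_output=True,
                             text=True, check=True)
        return float(out.stdout.strip())  # score to minimize, e.g. latency

    # The Gaussian-process surrogate picks each new candidate based on
    # all previous runs -- the "better decisions with historical data" part.
    result = gp_minimize(objective, SPACE, n_calls=30, random_state=0)
    print("best parameters:", result.x, "best score:", result.fun)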
  • Why? It's very simple to understand: optimization does NOT equal stability.

    You can make a function/program faster and, at the same time, make it less stable; this has been proven millions of times already.

    AI is unable to process cause/effect relationships internally, and it always will be.

    • Not to mention... one of the dangers of over-reliance on non-inductive AI is that progress in whatever area you turn ALL of the work over to AI will stop. Why? Because humans won't know how to do it anymore. Or at least that's how I see it.
    • What are you mumbling about? A program is not a bicycle; it is not less "stable" because it goes faster (in fact, neither is the bicycle). Any code may or may not have bugs and, at the same time, may waste resources (time, memory) or may be optimal.

      And, by the way, AI (ChatGPT, for instance) can understand cause-effect relations just fine.

      • I'm not mumbling. Your limited knowledge is funny.

        How can you make a program faster?

        Remove the checks. That makes it less stable.

        Why would an AI do this? Because, as I wrote but you didn't read, "AI is unable to process cause/effect relationships internally, and it always will be."

  • "In short, ByteDance's system optimizes resource usage by making real-time adjustments to things like CPU frequency scaling and memory management."

    Breathtaking.

    • by piojo ( 995934 )

      "In short, ByteDance's system optimizes resource usage by making real-time adjustments to things like CPU frequency scaling and memory management."

      Breathtaking.

      I think you missed a detail:

      There are thousands of parameters

      These aren't parameters like Amazon's "2 trillion parameter" model. They are tweakables, like the minimum period in nanoseconds for which a single task will run. So try tweaking a few thousand of those numbers by hand.
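      For a sense of scale, most of those tweakables are files under /proc/sys; here is a quick, purely illustrative count (Python, Linux only):

          # Walk /proc/sys and count every tunable file.
          import os
          count = sum(len(files) for _, _, files in os.walk("/proc/sys"))
          print(f"{count} tunables under /proc/sys")

      On a typical desktop kernel this prints on the order of a thousand or more entries, before you even get to per-device sysfs knobs.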

      • And this is just normal machine learning, older tech, not newfangled OpenAI-style chatbot stuff. There are already optimizers that take a smallish piece of code and try out a huge number of permutations to discover the best result - not really AI, just classic algorithms.

        • by piojo ( 995934 )

          Right. There are a lot of algorithms, but I think of them as "hill climbing" which is the simplest to describe and understand: take a current possible solution, sample new potential solutions in every direction, and move to whichever potential solution has the highest value. Repeat.
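          As a sketch of exactly that loop (the objective here is a toy stand-in, not a real kernel benchmark):

              # Minimal hill climbing: sample neighbours in every direction,
              # move to the best one, repeat until nothing improves.
              def hill_climb(score, start, step=1, max_iters=1000):
                  current = list(start)
                  for _ in range(max_iters):
                      neighbours = [current[:i] + [v + d] + current[i + 1:]
                                    for i, v in enumerate(current)
                                    for d in (-step, step)]
                      best = max(neighbours, key=score)
                      if score(best) <= score(current):
                          break  # local optimum reached
                      current = best
                  return current

              # Toy objective whose peak is at (3, -2):
              print(hill_climb(lambda p: -((p[0] - 3)**2 + (p[1] + 2)**2),
                               [0, 0]))  # -> [3, -2]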

  • by Rei ( 128717 ) on Monday November 20, 2023 @08:00AM (#64017991) Homepage

    ... done with tomato-cultivation greenhouses in the Netherlands, balancing out the huge number of factors that go into maximizing revenue (quantity vs. quality vs. timing) while minimizing costs (heat, lighting, fertilizer, CO2, labour in pruning, binding, harvest, etc), with respect to measured and forecast weather and the current state of cultivation and controlling all parameters (heat influx (base, roof), CO2 influx, roof vents, individual fertilizer influxes, when to order what labour, etc).

    It utterly trounced human experts, by large margins. That said, it did make some weird decisions to get there - for example, sometimes running particularly high temperatures at times when it also had workers working (not exactly comfortable), so "worker comfort" probably should have been another parameter. But overall, spectacular results.

    • Aha, so that is the cause of all those tasteless tomatoes?

      • by Rei ( 128717 )

        Did you miss the word "quality"? :)

        Higher quality tomatoes fetch higher prices. The AI actually focused more on quality than the humans.

        There's a number of techniques you can use in cultivation to increase the flavour. One of the tricks of the trade is to "salt the tomatoes", that is, significantly increase the EC right before harvest (higher concentration of fertilizer salts in the irrigation fluid), and/or spread out waterings (let the growth medium dry out more); the foliage isn't happy with you, but…

        • Nice story you got there.
          It's just a shame it is total B.S.
          --> reference needed.

          • by Rei ( 128717 )

            I encountered it as a lecture from the team doing the research, and I don't remember the name, but it was probably this group [hoogendoorn.com].

        • by Rei ( 128717 )

          BTW, for those not familiar with what I mean by "high yield", I took this picture [googleusercontent.com] a couple weeks back at the research greenhouse.

          I guess since we're on the topic of tomatoes, for anyone who cares:

           1) Judge your tomatoes' balance between foliage growth and fruit/flower loading; you want a balance. If the balance is too shifted toward the former, the plant is stocky, with big deep-green leaves hanging down, somewhat pyramidal in shape, and a thick stem (sometimes hollow), etc. If the balance is too shifted toward the…

          • by piojo ( 995934 )

             Thank you, that post (and your others) is very interesting! You've also given me ideas about how to grow my plants better on a small terrace (if I build some overhead scaffolding). If I can grow upward, I can fit more plants, give them more light, and still be surrounded by green. The only issue it doesn't solve is storms. Though if the leaves are ripped off by intense wind, the top ones may grow back.

             I wonder how many benefits I'd lose by doing (Kratky) hydroponics versus watering as you suggest, but I…

            • by Rei ( 128717 )

               There's not really a clear dividing line between hydroponic and non-hydroponic cultivation - hydroponic just implies an at least largely sterile, largely inert growing medium with little CEC or long-term water-holding capacity, and where decomposition does not meaningfully contribute to nutrients. In commercial cultivation, whatever the medium, the root space available to the tomato plants is really small compared to the size of the plants, because they don't need that much, because they're watered consistently…

              • by piojo ( 995934 )

                 Thanks for the advice and tips! You've got me doubly motivated to find out which part of my A/B/C hydroponics solution contains the calcium nitrate so I can turn it into an A/B solution. That will make it so much easier to rebalance EC after rain dilutes the reservoirs. But is there any disadvantage to this? I already have visible microbes growing in the solutions. Will it get a lot worse if I mix micronutrients and nitrogen?

                 The product you linked (or one like it) seems like quite good value for if/when I…

        • No, taste. Heirloom tomato varieties are okay-looking but nothing special, yet they have much, much better flavor.
      • by AmiMoJo ( 196126 )

        That would be supermarkets. They prefer bland and consistent over tasty but seasonal or with lower yields.

        • Not just the supermarkets. Here, most tomatoes (many from the Netherlands) are tasteless and barely ripe, but they cost 2 quid a kilo. Riper, tastier versions are available, but they're much more expensive, up to 10 quid per kilo, and most people don't buy them. Growers, buyers, and customers have together decided that cheap and not very good is what the market mostly wants.

      • in America. Maybe they're better somewhere else, but over here they're basically balls of fiber. I kinda miss actual oranges. I mean, are people just buying them for juice? That seems expensive and pointless even compared to the pricey stuff that isn't just reconstituted sugar water and chemicals...

        Fun fact: most orange juice has all the chemicals that make it taste like orange taken out, because they're not very shelf-stable. Then artificial flavors are added back in to make it taste like orange juice again…
      • The riper the tomato, the shorter the shelf life. Riper tomatoes are softer and more easily bruised. And the varieties big growers sell to big buyers are bred for quick color, thick skin, disease resistance, and high yield, not much flavor.

    • I wonder what the weird decisions will be here?

      I know that frequency scaling and core re-energising take time, so a lot of effort has already been spent on trying to predict when those things will be needed and to start doing them before they're actually required. In the meantime, if you're super keen on fast response times, you just set your cpufreq governor to "performance" and pay the additional energy bills ("performance" puts all your cores on maximum speed 24x7; see the sketch below).

      In this one area, I'd imagine some AI could…
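      For reference, the "performance" setting mentioned above lives in the cpufreq layer and is exposed through sysfs; a minimal look at it in Python (standard Linux paths, root needed to write):

          from pathlib import Path

          # cpu0's governor; every online CPU has the same layout.
          cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
          print("available:",
                (cpufreq / "scaling_available_governors").read_text().strip())
          print("current:",
                (cpufreq / "scaling_governor").read_text().strip())
          # (cpufreq / "scaling_governor").write_text("performance")  # root only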

      • by HiThere ( 15173 )

        AI means all sorts of different things to different people, and the different things often have different approaches. Perhaps what is being called AI here is more like what you're proposing than like an LLM.

      • by AmiMoJo ( 196126 )

        These days you often can't have all cores on max. The system just can't remove enough heat, or supply enough power, or both.

        You can have individual cores reach very high clock speeds if the others are idle. Good for single-thread workloads. Or you can have lots of cores all running at the same time, with lower clocks.

    • I don't know if you necessarily need an AI to do that. I've worked in system optimization for over a decade, and there are techniques to do this that don't require a neural network. Running well-designed experiments [wikipedia.org] and doing regression will often yield the same (or better) results. Neural networks are only advantageous when the data are extremely non-linear. AI also has the drawback of not being able to extrapolate, due to the way the activation functions operate.
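      A hedged sketch of that experiments-plus-regression approach, with illustrative knob names and a fake response function standing in for a real benchmark:

          # Two-level full factorial design over two knobs, then a linear
          # fit; the coefficients estimate each knob's effect directly.
          import itertools
          import numpy as np

          levels = {"vm.swappiness": [10, 60], "read_ahead_kb": [128, 512]}
          runs = list(itertools.product(*levels.values()))

          def measure(swappiness, readahead):
              # Placeholder: apply the settings and run the benchmark.
              return 100.0 - 0.1 * swappiness + 0.02 * readahead

          y = np.array([measure(*r) for r in runs])
          X = np.column_stack([np.ones(len(runs)), np.array(runs, float)])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          print(dict(zip(["intercept", *levels], coef)))

      Because the fitted model is linear, it extrapolates cleanly outside the tested range, which is exactly the advantage over a neural network noted above.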
  • by TJHook3r ( 4699685 ) on Monday November 20, 2023 @08:05AM (#64017999)
    When we implement automation or AI, the result will apparently *never* be that humans lose their jobs... they will just get to do something more *interesting* instead, which is nice!
    • they will just get to do something more *interesting* instead,

      If you enjoy getting off on watching people desperately trying to find new means of sustaining themselves while being considered completely obsolete in every single aspect that might allow them to do so, then yes, these certainly will be interesting times for everyone.

      INB4 "They'll just rEsKiLl, and switch to nEwLy CrEaTeD jObS!": These LLMs are Good Enough(TM) for many CEOs to be seriously considering firing entire teams, if not entire departments. There is no skill these LLMs cannot learn to the Good Enough(TM) standard. The…

    • by jbengt ( 874751 )

      . . . they will just get to do something more *interesting* instead, which is nice!

      Reminds me of the backhanded blessing "May you live in interesting times."

    • will we all do following the massive automation boom [businessinsider.com] that's going on. Be specific.

      People stopped answering years ago, right around the time I started asking them to be specific. Before that they'd tell me the jobs were going to be so futuristic my tiny unevolved brain couldn't comprehend them.

      Back when I was a kid it was biotech. I just saw a biotech job that required a master's and 7 years of experience in a 12-year-old technology, and it paid $24/hr. The minimum payments on the student loans you'd need…
    • When we implement automation or AI, the result will apparently *never* be that humans lose their jobs... they will just get to do something more *interesting* instead, which is nice!

      You joke, but this is what has happened throughout all of human history. The invention of the plough dramatically improved the efficiency of farming. The invention of the engine improved the application of energy. In every case, the result has been a dramatic and fundamental impact on society and the work we do.

      We will endure. You will still have a job, maybe a different one, but there's always something to do. Every technological advancement has improved societal efficiency, not removed the fundamental requirement…

      • What if we've hit the point at which middle-class jobs just aren't needed? The army of white-collar drones doing fairly automatable jobs has grown and grown, but in all likelihood they might be replaced. Think back to the typing pool - typists just disappeared when everyone got a PC and could type. Now we could see the middle managers get pushed out next (their day is mainly producing reports for seniors and filling in holiday requests). It's clearly impossible for the jobs market to grow and grow…
  • BTDT (Score:4, Interesting)

    by cstacy ( 534252 ) on Monday November 20, 2023 @08:20AM (#64018043)
    Around 1988, at BBN, we developed a system that used what today would be called "AI" to fine-tune packet-switching networks. The packet-switching nodes (which did TCP, X.25, and virtual circuits) had hundreds of tuning adjustments, and this system turned the knobs all over the network based on its real-time observations of traffic/workload. This was a contract for the NSA, and it was deployed on their networks. It is pretty obvious that "AI" is good for tuning complicated (and NP-complete) systems like this. It is surprising it took this long for someone to think of applying it to kernels. (Is this actually the first time?) It was real-time machine learning, but we didn't glorify it with that term. Self-tuning statistical models, not NNs. We didn't call it "AI" back then, either; we reserved that term for things like rule-based expert systems, robots, and vision applications.
    • If running the LLM is more expensive in resources than the improvement in efficiency, it's just a waste.

      • The purpose of the exercise is to use an LLM to create a tuning program which can then run indefinitely and on many systems. Even a fairly costly LLM run will be earned back quickly.

        • by HBI ( 10338492 )

          Gauging the performance improvements would be an interesting task across multiple systems in different environments. It would have to be profiled extensively under various loads to assert there were in fact gains.

          Interesting problem.

        • Unlikely. All AI models have a temporal component, but the problem domain keeps evolving. When you train a model to cope well with a given environment, you only have a few years before the environment diverges. You can try to top up the model with new capabilities that address the divergence, or retrain from scratch. Either way, you're in a cycle of rebuilding every few years. Make sure the cost of the rebuild is worth it when amortized over the useful lifetime of the updated system.
    • The distinction between AI and optimization has always been hazy

      https://link.springer.com/arti... [springer.com]

    • Every generation or so, the terms for this field are rebranded, due to accumulated negative connotations which set back funding. This happens when the hype outpaces the capabilities and the large number of young researchers who entered the field because of the hype must confront reality and competition. People eventually move on to the next thing, and the core researchers plod along, oblivious, until another breakthrough 20 years later. We're just about due for another cycle, methinks.
    • Middle out doesn't scale... but maybe if we added AI...

  • Same as how Intel improved its Battlemage drivers' performance by 2.7x in Stable Diffusion, by using a machine learning optimization toolchain [intel.com].

    But this is a toolchain, and the important word here is tool. What we call AI serves as a means to accelerate tasks that previously required significantly more time. While some individuals may reject or overlook this reality, doing so puts them at a disadvantage compared to those who leverage AI to their advantage.

    If the Linux community collectively opts not to utilize…

  • This reminds me of the DOS days of running a program to optimize startup by rearranging the startup files through trial and error, rebooting a few times to see what worked.

    I guess they can call brute force "AI" now?

  • I would have thought an expert system, rather than a neural network, would be best for this, but fashions change.

    If you're going to use neural nets or genetic algorithms, then I'd strongly advise against LLMs. I'd probably still use a NN or a GA, but I wouldn't train it on text on the Internet, or indeed on text at all.

    What you probably want is to train the NN on a large set of example loads (actual applications running realistic loads), with benchmarking tools profiling the effects of that load. You then…

    • I would have thought an expert system, rather than a neural network, would be best for this, but fashions change.

      They're suggesting Bayesian optimization, genetic algorithms, and evolutionary algorithms. They aren't suggesting building neural networks or LLMs into the kernel.

      • by jd ( 1658 )

        I wasn't suggesting they were. I was working on the assumption that the system would be outside the kernel. But (a) it still has to run on something, and (b) it has to produce a result that's still valuable by the time the conclusion has been fed back to the kernel.

        (a) is important. If it's the same machine, then the odds are high that the application doing the tuning is taking far more system resources than it is making available through tuning. If it's remote, then those are still resources your OS…

  • I see no reason AI couldn't be used to help streamline performance. But you'd still need people with performance tuning experience to determine whether its suggestions were valid, and whether or not those changes cause unintended side effects.

    In my experience, hardly any software is ever performance-tuned. It's not that hard, actually. Use a profiler, identify the bottlenecks in the code, and find ways to (a minimal sketch follows this comment):
    - Reduce the number of iterations
    - Reduce the time required for each iteration

    Of course, sometimes it's more…
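    The profiler-first workflow above, as a minimal sketch using Python's built-in cProfile; hot_loop is just a stand-in for real application code:

        # Profile first, then attack whatever dominates the report.
        import cProfile
        import pstats

        def hot_loop():
            return sum(i * i for i in range(1_000_000))

        cProfile.run("hot_loop()", "prof.out")
        pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)

    Once the report names the bottleneck, the two levers above (fewer iterations, cheaper iterations) usually follow naturally.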

  • Years ago, analyzing the distribution/frequency of instruction combinations led to a redesign of a machine's microcode that produced no significant speed gain. It turned out the most frequently used (now optimized) instruction combinations were mostly resident in the OS's wait-loop code!

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...