Linux Kernel Could Soon Expose Every Line AI Helps Write

BrianFagioli shares a report from NERDS.xyz: Sasha Levin, a respected developer and engineer at Nvidia, has proposed a patch series aimed at formally integrating AI coding assistants into the Linux kernel workflow. The proposal includes two major changes. First, it introduces configuration stubs for popular AI development tools like Claude, GitHub Copilot, Cursor, Codeium, Continue, Windsurf, and Aider. These are symlinked to a centralized documentation file to ensure consistency. Second, and more notably, it lays out official guidelines for how AI-generated contributions should be handled. According to the proposed documentation, AI assistants must identify themselves in commit messages using a Co-developed-by: tag, but they cannot use Signed-off-by:, which legally certifies the commit under the Developer Certificate of Origin. That responsibility remains solely with the human developer.

One example shared in the patch shows a simple fix to a typo in the kernel's OPP documentation. Claude, an AI assistant, corrects "dont" to "don't" and commits the patch with the proper attribution: "Co-developed-by: Claude claude-opus-4-20250514." Levin's patch also creates a new section under Documentation/AI/ where the expectations and limitations of using AI in kernel development are laid out. This includes reminders to follow kernel coding standards, respect the development process, and understand licensing requirements. These are areas where AI often struggles.
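
As an illustration of the tag convention described above, a patch prepared under the proposed guidelines might carry a commit message along the following lines. The subject line and the human contributor here are hypothetical; the Co-developed-by value is the one quoted from the patch, and the Signed-off-by stays with the human developer, as the proposal requires:

    Documentation: OPP: fix "dont" typo

    Correct "dont" to "don't" in the OPP documentation.

    Co-developed-by: Claude claude-opus-4-20250514
    Signed-off-by: Jane Developer <jane.developer@example.com>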

  • No. Stop. Just no. (Score:5, Insightful)

    by Anonymous Coward on Friday July 25, 2025 @07:12PM (#65546030)

    What is this fuckery? What is this god damn obsession with putting AI in everything? Seriously. It feels like I am taking crazy pills. Has everyone lost their minds? I have to be dreaming.

    • Sasha Levin is at Nvidia, that's what. He used to be at Microsoft. Of course, he's going to do work that Nvidia approves of.
    • by Gravis Zero ( 934156 ) on Friday July 25, 2025 @11:15PM (#65546326)

      What is this god damn obsession with putting AI in everything?

      It's not that AI is desired, it's that people are submitting patches partly made using AI and not telling anyone. This is the threat: not knowing. You can ban it, but developers will still use AI to make patches. What is happening now is merely damage control.

      • Re: (Score:3, Insightful)

        by PDXNerd ( 654900 )

        it's that people are submitting patches partly made using AI and not telling anyone.

        ABSOLUTELY THIS. An LLM can write good code, or it can hallucinate and fall down its own rabbit hole of generating garbage to fit garbage. It needs *extra* review, and this ensures a path to that review.

        The general reaction here these days is old-man-syndrome 'AI IS THE DEVIL', but Pandora's box has been opened. You can't get rid of coding models at this point.

        • by gweihir ( 88907 )

          You can't get rid of coding models at this point.

          I disagree. Sure, they may remain as toys, but it would take very little to kill their viability for real work, and there are already a lot of known problems.

          • by PDXNerd ( 654900 )
            You can disagree, but your viewpoint is the view of 'right now, this moment'. You cannot deny that people are seeing an increase in personal productivity from using an LLM even if you do not and all you see are the problems (of which there are many; I am not an apologist for LLMs). You cannot stop the billions of USD and euros being thrown into LLM research.

            You can stop using it in your own world, in your own life, and I completely sympathize with that point of view, but you cannot kill their via
          • You can't get rid of coding models at this point.

            I disagree.

            It doesn't matter if you disagree because they are already being used to generate code in kernel patches. Identifying potential threats is the goal.

            • by gweihir ( 88907 )

              That they are "already" used is not a reliable indicator that this will continue. Two braincells required to understand that though.

              • That they are "already" used is not a reliable indicator that this will continue.

                Whether or not they are used in the future is irrelevant to the current problem of patches being submitted that contain AI-generated code. The point is to identify potential threats, not to make a declaration about the future.

        • by evanh ( 627108 )

          The "old man syndrome" is mistaken. While some lazy comments are a "kill it with knives" mentality, most are "get real." The AI hype is all "we are turning out real intelligence" and "... super intelligence." That obviously is bullshit so it gets called out.

          • Still, as a person who originally dismissed AI and then tried it and continued to use it for everything, I think there are idiots on both sides. On one side you have people who think of AI as human. They say please and thank you to the AI and they expect it to act as if it cares about you and is infallible, like not deleting months of work. On the other side I suspect a lot of experienced developers can't figure out how to go from their old methods to new methods involving AI and so are dismissing it. As u
      • by _merlin ( 160982 )

        So how does this make the developer honestly tag the commit as being LLM-assisted? A dishonest developer could still use an LLM and say they didn't.

        • Why should nations bother to have laws in the first place if some people violate them?

        • by allo ( 1728082 )

          But why should they? As we see, kernel developers are open to new tech and no part of the crowd is shouting "AI bad!" so there's no reason not to say "created with AI assistance." I mean, professional open source projects are often like academia, where it's usual to cite all sources and tools you're using. I'd only say that you may not need to cite AI for a "dont" to "don't" patch, just as you wouldn't have cited your spellcheck program.

    • What is this fuckery? What is this god damn obsession with putting AI in everything?

      It's not in everything, just things that it could potentially be good at. What's this obsession with using human brains to do all the thinking in the world? I must be going crazy to think the same thing that comes up with AC messages on Slashdot is used to cure cancer and decide on world policy.

    • by allo ( 1728082 )

      Because kernel developers are not eternal boomers. When they see a technology that can help with development, they are open to it. And you can bet they won't merge "slop", as people say, but will know which kinds of AI code to merge and which not.

  • Hah! (Score:2, Insightful)

    by JamesTRexx ( 675890 )

    Linus would sooner allow C++ to touch the kernel than AI.

    I'm sure he doesn't mind "AI" supporting a good developer as a tool, but not an "AI" to actually "contribute" "code".

    • by PDXNerd ( 654900 )

      Linus would sooner allow C++ to touch the kernel than AI.

      Did he tell you this? Because recent statements by Linus Torvalds are pretty much, in a nutshell, 'AI is going to be cool, the hype sucks'. (You can find the interview easily enough on Google.) I mean, this whole process is to ensure that it's crystal clear what code was written by an LLM, so it can get *extra attention* during peer review.

      It's not like any LLM today has enough "intelligence" (whatever that means with an LLM) to be able to create a new VMM or IRQ manager or get rid of some locking thing, and it w

    • You have no fucking idea what you're talking about.
  • by PPH ( 736903 ) on Friday July 25, 2025 @08:55PM (#65546162)

    Replace spaces with tabs.

    • by Ken_g6 ( 775014 )

      <paperclip_machine>Now attempting to fill all of space with soda cans.</paperclip_machine>

  • for somewhat reaffirming my faith in this stupid species.
    More and more articles about "AI makes devs slower" or "workplaces insisting on using AI and workers lying about them using it". Or godshelpus idiots saying "Microsoft already uses AI to write 30% of their codebase" as if that were anything other than a reason NOT to use AI.
    There's an idiot at my workplace who has repeatedly said that last one. And at workplace townhalls, I've submitted questions with references about "what guardrails and testing p

  • One example shared in the patch shows a simple fix to a typo in the kernel's OPP documentation. Claude, an AI assistant, corrects "dont" to "don't" ...

    The error wasn't in the kernel code, just in the documentation.
    A simple spell check would have flagged or fixed that faster and with a smaller carbon footprint.

    So, AI not needed, just a word processor or spell checker.
    But the programmer is obviously ai himself, anti-intelligent: doesn't know how to use a spell checker, doesn't even know he needs it.
    So, ai needs AI. ...
    AI not needed ... AI needed.

    That's the high-tech Catch-22.

  • While some developers may see this as a helpful step toward transparency, others might argue that codifying AI usage in one of the most human-driven open-source projects sends the wrong message. Should kernel development really be assisted by tools that don’t fully grasp the consequences of their code?

    It is because AI doesn't grasp the consequences of what it's doing that it needs to be identified. Don't fool yourselves, AI will be used by some developers even if it is banned outright. Knowing which patches are the greatest threat will make it easier for people to identify what should be given the highest level of scrutiny.

    To augment the review system, I think there should be an outside effort to make an AI test system that evaluates patches in a compiled kernel. The purpose would be to identify if the c

    • by gweihir ( 88907 )

      To augment the review system, I think there should be an outside effort to make an AI test system that evaluates patches in a compiled kernel. The purpose would be to identify if the consequences of the patch description actually match what occurs when it's executed. I think this would be effective in reducing the amount of human review effort needed and quickly reject bad AI generated patches.

      That does sound good, but it amounts to solving the halting problem, and that one is unsolvable. I do not think this task is something that AI can sufficiently approximate to be useful. You would never know how much it missed, and it may also hallucinate differences that are not there, or that are there but behave differently than it claims.

      • it amounts to solving the halting problem, and that one is unsolvable.

        There is no clear basis for this assertion, especially when applied to real-world machines rather than hypothetical machines with an infinite number of bits and storage. Presently, the halting problem is one of many unsolved problems, but that doesn't mean it is an unsolvable problem.

        I do not think this task is something that AI can sufficiently approximate to be useful.

        It appears that you have conflated my support of identifying AI-assisted patches as being in support of people making AI-assisted patches, which is not the case. What I do recognize is that people will use it, so it's better tha

        • by gweihir ( 88907 )

          it amounts to solving the halting problem, and that one is unsolvable.

          There is no clear basis for this assertion, especially when applied to real-world machines rather than hypothetical machines with an infinite number of bits and storage. Presently, the halting problem is one of many unsolved problems, but that doesn't mean it is an unsolvable problem.

          And there you outed yourself as a completely clueless idiot. The halting problem is proven to be unsolvable.

          • The halting problem is proven to be unsolvable.

              The proofs are based on the false premise of using an infinite state machine and the exclusion of using a second, larger finite state machine.

            Details are important. Reread my post.

            • by gweihir ( 88907 )

              And more clueless bullshit. There are no "false premises" in the model used. In fact, if you change the model, it is not the halting problem anymore.

              Seriously, admit you are wrong and move on. Because you have absolutely nothing. You only show that you do not understand computability at all.

            • False premise? No, lol.
              They are, after all, modeling a Turing machine, which has arbitrarily large storage, which means there are arbitrarily many possible states.
              If you put an upper bound on that, then you can, indeed, solve the halting problem, but only technically. Even a small program has such a large list of possible states that it's utterly infeasible to decide whether or not it halts.

              I have no love for gweihir, but you're wrong here.
              This is CS 101.
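
For readers following the exchange above: the "CS 101" result being referenced is Turing's diagonalization argument, which shows that no total halting decider can exist for unbounded machines, while the bounded case is decidable only in an impractical sense. Below is a minimal sketch, written in Python purely as an illustration; the names halts and build_contrarian are hypothetical, not from any real library:

    # Sketch of the classic diagonalization argument: assume a total decider
    # halts(program, argument) -> bool exists, then build a program it gets wrong.

    def build_contrarian(halts):
        def contrarian(program):
            if halts(program, program):
                # The decider claims program(program) halts, so loop forever.
                while True:
                    pass
            # The decider claims program(program) loops, so halt immediately.
            return "halted"
        return contrarian

    # Ask whether contrarian(contrarian) halts:
    #   - If halts(contrarian, contrarian) is True, contrarian loops: decider wrong.
    #   - If it is False, contrarian halts: decider wrong again.
    # So no such total decider exists for unbounded (Turing-machine) programs.
    # For machines with a bounded state space the question is decidable in
    # principle, but the state count explodes, which is the infeasibility
    # point made in the comment above.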
  • In my view, the buck needs to stop with the code submitter. When they submit, they take full responsibility. If they use AI to generate code they don't understand and that code proves to be faulty, then they get banned. At the end of the day, bad code is bad code, no matter who wrote it.
    • by gweihir ( 88907 )

      I agree. And bad or irresponsible coders are bad or irresponsible coders and their code should not make it into anything critical.

  • I am pleased to report that my optic nerve is now auto-correcting "AI assistant" to "AI asshat".
  • Bye bye, Linux... Developers will get sloppy (just like those lawyers who submit briefs written by ChatGPT without checking them) and just sign off on whatever the tool generates. I know how useful ChatGPT is when I supervise it, particularly for busywork, but it can easily produce compilable code with broken semantics.
