Linus Torvalds on 'Hilarious' AI Hype (zdnet.com)

Linus Torvalds, discussing AI hype in a conversation with Dirk Hohndel, Verizon's head of the Open Source Program Office, snarked: "It's hilarious to watch. Maybe I'll be replaced by an AI model!" Hohndel, for his part, thinks most AI today is "autocorrect on steroids." Torvalds summed up his attitude as, "Let's wait 10 years and see where it actually goes before we make all these crazy announcements."

That's not to say the two men don't think AI will be helpful in the future. Indeed, Torvalds noted one good side effect already: "NVIDIA has gotten better at talking to Linux kernel developers and working with Linux memory management," because of its need for Linux to run AI's large language models (LLMs) efficiently.

Torvalds is also "looking forward to the tools actually to find bugs. We have a lot of tools, and we use them religiously, but making the tools smarter is not a bad thing. Using smarter tools is just the next inevitable step. We have tools that do kernel rewriting, with very complicated scripts, and pattern recognition. AI can be a huge help here because some of these tools are very hard to use because you have to specify things at a low enough level." Just be careful, Torvalds warns, of "AI BS." Hohndel quickly quipped, "He meant beautiful science. You know, 'Beautiful science in, beautiful science out.'"

  • Something that has to be worked on, mostly from a training standpoint, is the way LLMs fail at programming.
    It is not graceful to produce code that looks exactly like code that runs, but doesn't.
    The mistakes the network makes should be easy to spot, or flagged by the system itself: it should be able to test the code in action and, if it can't make it work well enough, fall back to pseudo-code explaining what needs to be done at that spot.
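
    Roughly, I mean a loop like the one below (a toy Python sketch of my own; generate_code is a stand-in for whatever model you use, not a real API):

        import os
        import subprocess
        import tempfile

        def generate_code(task: str) -> str:
            """Stand-in for the LLM call; returns candidate Python source."""
            raise NotImplementedError("hook up your model here")

        def runs_cleanly(source: str) -> bool:
            # Test the code in action: write it out and actually execute it.
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(source)
                path = f.name
            try:
                result = subprocess.run(["python", path], capture_output=True, timeout=60)
                return result.returncode == 0
            finally:
                os.unlink(path)

        def solve(task: str, attempts: int = 3) -> str:
            for _ in range(attempts):
                candidate = generate_code(task)
                if runs_cleanly(candidate):
                    return candidate
            # Couldn't make it work well enough: emit clearly marked
            # pseudo-code instead of code that merely looks runnable.
            return f"# TODO (model gave up): {task}"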

    • It takes decades to train a human, and we make mistakes constantly. I believe AI will get the job done as soon as we work out better ongoing feedback mechanisms and give the models the equivalent of emotions and instincts.

      But we really should be working on the hardware first. The amount of power these things require is ludicrous compared to the capacity and power draw of the human brain.

    • You need to set your expectations right. From personal experience, current LLMs can't always generate complete and accurate code on the first iteration, but they are improving.

      What they give you right now is a substantial boost in productivity. For instance, they will:

      - produce code that is mostly accurate; you will need to go through it to make it fully functional, but it is way better than coding from scratch.
      - analyze your code and give you helpful suggestions on how to improve it.
      - spot bugs in your code.

  • Pretty on point... (Score:5, Interesting)

    by Junta ( 36770 ) on Friday April 19, 2024 @04:16PM (#64408824)

    It's certainly categorically new and will have some applications, but there have been some rather persistent "oddities" that seem to limit the potential. Meanwhile, impossibly large amounts of money are being thrown around as if the age of artificial superintelligence were just a few months away.

    I fully expect this to end poorly for the big spenders in one of a few ways:
    - It turns out that our current approaches, with any vaguely feasible amount of resources, will not provide a qualitative experience significantly better than Copilot/ChatGPT today. It suddenly became an amazing demo from humble, hilarious beginnings, but has kind of plateaued despite the massive spend, so this scenario wouldn't surprise me.
    - A breakthrough happens that gets to "magic," but with a totally different sort of compute resource than folks have been pouring money into, making all the spending to date pointless.
    - The "ASI" breakthrough happens and completely upends the way the economy works, rendering all the big spending moot.

    • Even without a totally different sort of compute resource, I think algorithmic development (steady or sudden) may very well reduce the compute required by 99% or more. The fact that EVERY parameter, reflecting ALL knowledge about EVERYTHING on the internet, is used every time for generating each and every word (token)... that can't be necessary. Mixture-of-experts models (or something like them) will fix this.
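
      The gating idea is simple enough to sketch (toy NumPy of my own, not any real model's code); the point is that compute scales with the k experts you pick, not with the total parameter count:

          import numpy as np

          def moe_forward(x, gate_w, experts, k=2):
              # x: (d,) hidden state for one token
              # gate_w: (d, n_experts) gating weights
              # experts: list of n_experts matrices, each (d, d)
              scores = x @ gate_w                    # one score per expert
              top = np.argsort(scores)[-k:]          # pick the top-k experts
              weights = np.exp(scores[top])
              weights /= weights.sum()               # softmax over the chosen k
              # Only the k chosen experts' parameters are ever touched.
              return sum(w * (x @ experts[i]) for w, i in zip(weights, top))
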
      • That is not the mainstream belief in the industry. Read Sutton's bitter lesson [utexas.edu], and realize that Peter Norvig convinced everyone at Google that simple models and a lot of data always trump more elaborate models.

        The trend is not toward mixtures of experts, which people liked in the 1990s. The trend is to let the data magically solve the applied-math modelling problem. There will be a reckoning, but it likely won't be in your or my lifetime.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      > as if the age of the artificial super intelligence is now a few months away

      I think people are too obsessed with AGI. We don't actually need AGI for AI to change the world radically. But I think the best guess is somewhere between 2028 and 2040.

      > making all the spending to date pointless.

      This scenario became impossible when Google released the results from AlphaFold2. Even if that were the only thing AI research ever manages to create, it would still be enough to make all the spending worth it.

  • It seems to me that one of the following must be true: true, human-level artificial intelligence will be achieved

    1. within the next 20 years, or
    2. sometime between 20 and 100 years from now, or
    3. not until at least 100 years from now, or
    4. never.

    I doubt the last case is true, unless civilization destroys itself somehow before it happens. In the other cases, well, there's really no excuse for not planning for how we're going to safely integrate artificial intelligence into our civilization with a minimum of harm.

    • by dvice ( 6309704 )

      Surviving automation is actually quite simple: governments need to buy all of the companies for themselves. After that, a government has two options:
      - Collect the profits from the products and distribute the money to people without jobs.
      - Give the products away for free, like food, to the people.

  • by ihadafivedigituid ( 8391795 ) on Friday April 19, 2024 @04:50PM (#64408942)
    Linus totally misses the point, which is kind of unusual.

    GPT-4/5/6 might not replace him as a kernel architect, but it sure as hell is (and will increasingly be) making a ton of people in a lot of industries waaay more productive. There isn't an infinite supply of work, so a lot of jobs will go away--never to return.

    And no, this isn't some millennial/Zoomer potshot: I'm two years older than Linus ...
    • by RitchCraft ( 6454710 ) on Friday April 19, 2024 @05:03PM (#64408990)

      No, he gets it. There's way too much BS associated with this right now. "Autocorrect on steroids" is spot on.

      • by DarkVader ( 121278 ) on Friday April 19, 2024 @05:45PM (#64409100)

        They're both right.

        LLMs are garbage, completely incapable of replacing people, basically autocorrect on steroids. Lots of jobs will be lost. "Productivity," as measured by the amount of bullshit generated, will increase exponentially.

        And the enshittification of everything will continue unabated.

        • by ihadafivedigituid ( 8391795 ) on Friday April 19, 2024 @05:57PM (#64409118)
          Try using GPT-4 (not whatever free version you tried last year) for something like debugging, code review, or data extraction and then try to tell me this with a straight face.

          I thought it was bullshit too until GPT-4 came out and I used it for some non-trivial tasks.
        • AI has been in use in some form for 60 years; for the last 20 of that cycle, people sat around waiting for it to answer our browser queries or predict our choices, and never realized it was doing it the whole time.
        • by dvice ( 6309704 )

          I think people misunderstand the power of LLMs.

          An LLM is not good at giving you answers to arbitrary questions, because of hallucinations. That is what most people try to use it for, and that is why they feel it is not good for anything.

          An LLM is good at transforming the input you give it. For example, you can give it a book and ask it to write a summary of it. Or you can give it all the medical research papers in the world and ask it to create a summary of the papers related to X. Or you can give it some code and ask it to explain or review it.
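
          For a whole book, the usual trick is to chunk and fold (my own sketch; ask_llm is a placeholder for whatever model call you use, not a real API):

              def ask_llm(prompt: str) -> str:
                  """Placeholder for your model call (OpenAI, llama.cpp, whatever)."""
                  raise NotImplementedError

              def summarize(text: str, chunk_chars: int = 8000) -> str:
                  # Recursively fold a long document into one summary small
                  # enough to fit in the model's context window.
                  if len(text) <= chunk_chars:
                      return ask_llm(f"Summarize this text:\n\n{text}")
                  chunks = [text[i:i + chunk_chars]
                            for i in range(0, len(text), chunk_chars)]
                  partials = [ask_llm(f"Summarize this text:\n\n{c}") for c in chunks]
                  return summarize("\n\n".join(partials), chunk_chars)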

          • Yep, right on the money. It's great at playing devil's advocate, too, which is invaluable.
          • Your examples are not convincing.

            You talk about summaries, but you don't define what a summary *is*, and you have no way to measure how well an LLM produces a purported summary relative to some standard. You're just engaging in wishful thinking about its usefulness at this point in time.

            In actual fact, a summary, however reasonably defined, has certain properties, such as describing the content of some text in fewer words than the original. That makes it a kind of lossy compression algorithm. The problem is that you have no principled way of knowing what got lost in the compression.
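
            For what it's worth, the standard automatic measures are about this crude, which rather proves the point. Here is ROUGE-1 recall in a plain-Python sketch of my own:

                def rouge1_recall(candidate: str, reference: str) -> float:
                    # Fraction of the reference summary's words that also appear
                    # in the candidate. Rewards word overlap, not faithfulness.
                    cand = set(candidate.lower().split())
                    ref = reference.lower().split()
                    return sum(w in cand for w in ref) / len(ref) if ref else 0.0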

      • It's like you read my post and then answered someone else's.

        Once again, and a little louder: it doesn't have to be AGI to cause huge disruptions. My experience and the experience of many others is that current LLMs in the GPT-4 class can be huge productivity enhancers. Connect the dots.
          • Heh. Totally agree, though it might be more accurate to label it as autocomplete on steroids.
    • Linus is a smart person who has no trouble forming insights about anything in the field he is an expert in. Smart people who can pick up complex subjects from a few Google searches have no use for current AI.
    • As of right now, you can make pictures sing and change your voice; it's nowhere near taking over the world.
    • Won't this AI coding result in hacky code? Easy to make, very difficult to maintain, especially without AI?

      Right now, I think LLMs are not nearly efficient enough to run on relatively modest hardware. When I was playing around with TaskWeaver some six months ago, I couldn't help but enjoy the things I could do with it. Today I made a new TaskWeaver setup; its own code base appears three times as large, and it won't work without Docker anymore.

      Docker on Windows... that is a special kind of hell, so I set up a fresh Linux environment instead.

      • by dvice ( 6309704 )

        > Won't this AI coding result in hacky code? Easy to make, very difficult to maintain, especially without AI?

        As Torvalds said, humans have no problem creating bad code without AI, so there is nothing new here.

    • Linus totally misses the point, which is kind of unusual.

      GPT-4/5/6 might not replace him as a kernel architect, but it sure as hell is (and will increasingly be) making a ton of people in a lot of industries waaay more productive. There isn't an infinite supply of work, so a lot of jobs will go away--never to return.

      And no, this isn't some millennial/Zoomer potshot: I'm two years older than Linus ...

      I'm not sure your intuition is correct. Sure, the supply of work isn't infinite, but it does increase when productivity goes up.

      Look at website designers: in the early '90s you were writing HTML and CSS by hand, drawing icons with crappy editors, etc., etc.

      Now you've got crazy libraries and full-fledged website builders; I'm guessing a modern web designer is MUCH more productive.

      The result? There are waaay more website designer jobs out there. That's partially because the Internet is bigger, but also because you get so much more website for the same effort now.

  • Pattern of change (Score:5, Interesting)

    by Tablizer ( 95088 ) on Friday April 19, 2024 @05:17PM (#64409016) Journal

    > he thinks most AI today is "autocorrect on steroids."

    That is a step up, Jetsons or not.

    A historian on NPR noticed that past technical booms share a common pattern: investors overestimate the short-term impact but underestimate the medium- and longer-term impact.

    Most of the first batch of prominent dot-com companies folded, but the next generation completely changed commerce, social interaction, and news.

    The first railroad companies overbuilt track, and most failed or had to merge to survive. But the vast network of tracks eventually saw heavy use and revolutionized commerce.

    If AI follows that pattern, most of the initial ideas and companies will flop but will plant the seeds for the next generation to thrive on.

    However, that may be survivorship bias. There was an AI boom in the late '80s that failed to make a significant dent. We don't know whether the current batch will similarly stall for a few decades.

  • You know how new non-tech users struggle with Linux and quickly run back to Windows/iOS? Well, an LLM interface could help with that: have it translate user requests (with an optional voice interface) into actions in Linux, either to run automatically or to display for the user to run themselves. CoPilot for Linux - I'd be surprised if someone is not already working on it.
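
    A minimal version is easy to sketch (my own Python toy; ask_llm is a stand-in for whatever model you'd wire up, and nothing runs without the user confirming):

        import subprocess

        def ask_llm(request: str) -> str:
            """Stand-in for the model call that maps plain English
            to a single shell command."""
            raise NotImplementedError

        def assist() -> None:
            while True:
                request = input("what do you want to do? ")
                if not request:
                    break
                command = ask_llm(request)
                print(f"suggested: {command}")
                # Never run automatically; require explicit confirmation.
                if input("run it? [y/N] ").lower() == "y":
                    subprocess.run(command, shell=True)
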
    • You are probably talking about WarpAI.

      It is currently available in free and paid versions. It installs and runs its own terminal application inside Linux. It didn't run when I tried to install it on a Linux server edition without a GUI; you may have more luck with that, but I don't expect it. However, if you run Linux with a GUI together with WarpAI, the concept of the 'Linux terminal' becomes a lot less daunting. Just tell it what you wish to accomplish and it will either do it for you and/or create the scripts for you.

    • by dvice ( 6309704 )

      > Well, an LLM interface could help with that: have it translate user requests (with an optional voice interface) into actions in Linux

      User: Computer, remove my files...
      Computer: Done!
      User: ... in the trash bin.

  • The Hype (Score:3, Insightful)

    by La Onza ( 7334544 ) on Friday April 19, 2024 @07:57PM (#64409344)

    The hype was inserted when somebody cleverly but callously put “Intelligence” in the brand name Artificial Intelligence. Nobody can even describe what regular, natural intelligence is, let alone describe it sufficiently to distinguish between natural and artificial intelligence. If they just claimed that it’s a significant beneficial improvement over existing algorithms, that would be fine, but claiming we can create a conscious intelligence - or that we are even moving in that direction - has not been justified. I say take the word “intelligence” out of the name, calm down, and make the best of this advancement.

    • by Bumbul ( 7920730 )

      The hype was inserted when somebody cleverly but callously put “Intelligence” in the brand name Artificial Intelligence. Nobody can even describe what regular, natural intelligence is, let alone describe it sufficiently to distinguish between natural and artificial intelligence. If they just claimed that it’s a significant beneficial improvement over existing algorithms, that would be fine, but claiming we can create a conscious intelligence - or that we are even moving in that direction - has not been justified. I say take the word “intelligence” out of the name, calm down, and make the best of this advancement.

      No, the word "intelligence" is not the problem. The problem is the ambiguity of the word "artificial". Many people read it as in "artificial light", which basically means REAL light from MAN-MADE sources. It should instead be read as in "artificial smile" - i.e., the intelligence is not genuine and real; it is just faking it.

  • Linus had better talk about things he understands. A spinlock, maybe.
    Machine learning is not within the grasp of someone who has only done systems programming his whole life.
