
Linus Torvalds Growing Frustrated By Buggy Hardware, Theoretical CPU Attacks (phoronix.com)

jd writes: Linus Torvalds is not a happy camper and is condemning hardware vendors for poor security and the plethora of actual and theoretical attacks, especially as some of the new features being added impact the workarounds. These workarounds are now getting very expensive, CPU-wise.

TFA quotes Linus Torvalds:

"Honestly, I'm pretty damn fed up with buggy hardware and completely theoretical attacks that have never actually shown themselves to be used in practice.

"So I think this time we push back on the hardware people and tell them it's *THEIR* damn problem, and if they can't even be bothered to say yay-or-nay, we just sit tight.

Because dammit, let's put the onus on where the blame lies, and not just take any random shit from bad hardware and say 'oh, but it *might* be a problem.'"

Comments:
  • It's easy (and true) to say it is the fault of the hardware vendors.

    However, once the product ships, it's unfixable. The only solution is a software fix.

    And theoretical attacks are acknowledged vulnerabilities. They only remain theoretical until some hacker needs it to bypass or access data and they implement it. Maybe we'll know about that transition, maybe we won't.

    • by SchroedingersCat ( 583063 ) on Monday October 21, 2024 @01:03PM (#64881529)
Going out on a limb here, but won't picking a different vendor with fewer hardware problems fix the issue? Let free-market forces weed out the weak vendors.
      • by Zangief ( 461457 )

That way lies madness; the most popular CPU will be the most studied for vulnerabilities.

You may as well decide to let the market decide and then... choose the second or third most popular platform.

What's wrong with any of that?
        • The most commonly stolen car was the Civic. It wasn't the most stolen because it was the most vulnerable, but because it had the largest market for underground resale of vehicles or parts. Similarly, is Linux less vulnerable to malware than Windows, or is Windows just the biggest target because it's by far the most common OS to attack?

Now in some ways, I think x86_64 architectures have a big problem in that they are so extremely complex, with such long architectural lives. A CISC processor adapted to do...

      • Yeah fuck me for buying from the only two x64 vendors.

        There's always IBM POWER. https://en.wikipedia.org/wiki/... [wikipedia.org]

You also have a few ARM desktop and server processors to choose from now, but it's only a matter of time before they find security vulnerabilities in those.

        • by neuro88 ( 674248 )

          Yeah fuck me for buying from the only two x64 vendors.

          There's always IBM POWER. https://en.wikipedia.org/wiki/... [wikipedia.org]

I love IBM POWER! I own two POWER9 systems from https://raptorcs.com/ [raptorcs.com]: a Talos II-based system and a Blackbird-based system. I also intend to buy one of their new systems in 2025 if I can scrounge up the money.

However, at least POWER9 is vulnerable to Meltdown (and I think aarch64 is as well). As far as speculative execution vulnerabilities go, everything is vulnerable except, like, Itanium.

Maybe with AI-guided compilers, VLIW could actually be viable.

          • AI-guided compilers?

So the program might run and do what it's supposed to do this compile? Maybe not?

Yeah, let's not do that.

How about an AI-guided ISA and architecture design instead? It would most certainly be something where memory isn't flat at all, making it quite different from current systems, and it will probably demand well-matured programming language support.
          • by Mirddes ( 798147 )

Apart from being at the bottom of the ocean, why is the Itanic immune to speculative execution vulnerabilities?

Right, but Torvalds' point is still important even if the work still has to be done. Are there changes the hardware vendors could make to do a better job of catching some of these things up front? Do they need to reconsider some aspects of their architecture with a security-first mindset? If it's purely theoretical, does the code get written but not implemented until it's needed? Because it's also true that if we have to implement software fixes that in some cases cripple the hardware after release, you're not real...

    • Re: (Score:3, Informative)

      by Anonymous Coward

      It's easy (and true) to say it is the fault of the hardware vendors.
      However, once the product ships, it's unfixable. The only solution is a software fix.

      There's another level between hardware and OS: microcode updates shipped by the CPU manufacturers including both Intel and AMD.

      These are loaded into CPUs at boot to patch problems found after manufacturing. Fixes made entirely in microcode don't require OS changes.
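As a minimal illustration of where that lands in userspace, here is a sketch that reads the microcode revision the kernel reports for cpu0. The sysfs path is an assumption: it is present on x86 Linux systems with the microcode driver enabled, and the same value also appears as the "microcode" field in /proc/cpuinfo:

```c
/* Sketch: print the microcode revision the kernel reports for cpu0.
 * Assumes an x86 Linux system with the microcode driver enabled. */
#include <stdio.h>

int main(void) {
    const char *path = "/sys/devices/system/cpu/cpu0/microcode/version";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    char buf[64];
    if (fgets(buf, sizeof buf, f) != NULL)
        printf("cpu0 microcode revision: %s", buf); /* e.g. "0xf4" */
    fclose(f);
    return 0;
}
```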

    • by Pimpy ( 143938 )

      In some cases, yes, but many hardware issues can also be patched around through microcode/firmware updates.

      The issue is more how much time/effort you want to spend hardening yourself against theoretical problems. Until something can be demonstrated practically, I certainly wouldn't be spending much time on it. This has always been a point of tension between security researchers and kernel/SW developers, long before HW-based attack vectors started to become more common.

    • by jd ( 1658 )

      True, but AMD and Intel firing large numbers of QA staff from their chip lines and trying to accelerate development to stay ahead of the game isn't helping matters.

This was a gamble that was always doomed to lead to spectacular crashes; it was merely a question of how many, and when.

    • by gweihir ( 88907 )

      Yes. But I think the hardware vendors should provide developer resources here to mitigate the problem they created.

Linus has a big point. Take the speculative execution attacks everyone shat their pants about. At the time, on desktop Windows systems (and a lot of Linux systems), every app had access to APIs to inspect (and in many cases, modify) the memory of every other app running as the same user by default. That means an attacker didn't exactly have to abuse processor bugs to steal confidential information surreptitiously; they could just have at just about everything without much trouble anyway through a myriad of...
  • by serviscope_minor ( 664417 ) on Monday October 21, 2024 @12:50PM (#64881503) Journal

I'm pretty sure the cache timing side-channel attacks were a theoretical curiosity until suddenly, very suddenly, they weren't, and someone demonstrated grabbing private keys between isolated VMs running on one machine.

The trouble with theoretical attacks is you never know when they become non-theoretical, but they can turn into very nasty 0-days very quickly when they do.
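For readers unfamiliar with how a cache-timing side channel works at the lowest level, here is a minimal sketch (assuming x86-64 and GCC/Clang intrinsics; not an exploit, just the measurement step) of the primitive behind attacks like Flush+Reload: an access to a cached line is measurably faster than one to a line that was just flushed, and that difference is the channel.

```c
/* Sketch: measure the timing difference between a cached and a flushed
 * memory access. Assumes x86-64 with GCC/Clang intrinsics available. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>  /* __rdtscp, _mm_clflush, _mm_mfence */

static uint64_t time_access(volatile uint8_t *p) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                        /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    static uint8_t target[64];

    time_access(target);             /* first touch: bring line into cache */
    uint64_t hot = time_access(target);

    _mm_clflush(target);             /* evict the line from the cache */
    _mm_mfence();                    /* ensure the flush completes first */
    uint64_t cold = time_access(target);

    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```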

    • by sjames ( 1099 )

The thing to watch for is that sometimes the set-up to demonstrate the exploit is so specific, and so designed to be exploitable, that it becomes the moral equivalent of placing a plate of cookies in a kindergarten, loudly announcing "Gee, I hope nobody eats these delicious cookies when I leave the room for half an hour!", then declaring that the "hungry kiddee" attack is a threat level 25 out of 10 and claiming that it would be irresponsible not to lock all cookies in a vault with at least a 10-disc lock.

And there are some very obscure attacks that can be made. A well-designed processor by 2023 standards may have a very insecure design as soon as a new attack is discovered. E.g., some things from the past: pay attention to how long an operation takes and how much current it used, and that gives you a slight edge in determining whether a branch was taken or not, which gives you a step up in figuring out the decryption key or what the protected data looks like.
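As a concrete illustration of that branch-timing leak, here is a minimal sketch (illustrative, not a vetted crypto primitive) contrasting an early-exit byte comparison, whose running time depends on where the first mismatch occurs, with the standard constant-time version that always touches every byte:

```c
/* Sketch: why data-dependent timing leaks secrets. memcmp-style code
 * returns at the first mismatching byte, so response time reveals how
 * many leading bytes of a guess are correct. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Leaky: the early exit makes run time depend on the mismatch position. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;   /* an attacker can time this early return */
    return 1;
}

/* Constant-time: accumulate differences, decide only at the very end. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

int main(void) {
    const uint8_t secret[4] = {1, 2, 3, 4}, guess[4] = {1, 9, 9, 9};
    printf("leaky: %d, constant-time: %d\n",
           leaky_equal(secret, guess, 4), ct_equal(secret, guess, 4));
    return 0;
}
```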

  • by ctilsie242 ( 4841247 ) on Monday October 21, 2024 @12:52PM (#64881509)

It would be interesting to see a hardware "minimum standard", perhaps. Worst case, if someone has cruddy hardware, emulate their crap, Bochs-style, and then point the finger at them for having something that cannot be done in software, so the entire segment has to be emulated at a low level, just to ensure that it won't cause issues with the rest of the system.

It would be interesting to see a hardware "minimum standard", perhaps. Worst case, if someone has cruddy hardware, emulate their crap, Bochs-style, and then point the finger at them for having something that cannot be done in software, so the entire segment has to be emulated at a low level, just to ensure that it won't cause issues with the rest of the system.

The problem is that the current generation of hardware flaws that Linux is being asked to work around are for a fairly new class of security vulnerabilities, vulnerabilities that exploit modern CPUs' branch prediction and speculative execution. We can't define a list of all of the things hardware must do, and how it must do it, to be safe from these vulnerabilities because we don't yet fully understand them. We do know that we could get rid of them all by eliminating speculative execution, but the performance cost would be severe, so people are understandably reluctant to take that step. (A sketch of the vulnerable pattern appears below.)

        Responding to myself; there's a little more context I should have included:

Torvalds' concern about theoretical attacks is both a valid point and highly debatable. The problem is that these vulnerabilities are identified and demonstrated in laboratory conditions that are often highly unrealistic, because the researchers' job isn't to produce a usable, practical exploit chain; it's to demonstrate that an attack is possible. So as soon as they have something solid, they publish and get a CVE assigned. Does...
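For concreteness, here is a hedged sketch in C of the Spectre v1 "bounds check bypass" pattern referred to above, together with a branchless index-masking fix in the spirit of the Linux kernel's array_index_nospec(). The array names and sizes are illustrative; this is the shape of the bug, not a working exploit:

```c
/* Sketch of the Spectre v1 pattern: during speculation the CPU may execute
 * the body before the bounds check resolves, leaving a secret-dependent
 * cache footprint. Names and sizes are hypothetical. */
#include <stddef.h>
#include <stdint.h>

enum { A1_SIZE = 16 };
static uint8_t array1[A1_SIZE];
static uint8_t array2[256 * 512];

/* Vulnerable: if the branch predictor has been trained to expect "in
 * bounds", the CPU can speculatively read array1[x] with x out of bounds
 * and encode the byte into the cache via which line of array2 is loaded. */
uint8_t victim(size_t x) {
    if (x < A1_SIZE)
        return array2[array1[x] * 512];
    return 0;
}

/* Branchless mask: ~0 when index < size, 0 otherwise. Because there is no
 * branch, the clamp holds even under speculation (generic fallback in the
 * style of the kernel's array_index_mask_nospec; relies on arithmetic
 * right shift of a negative value, which GCC/Clang provide). */
static size_t mask_nospec(size_t index, size_t size) {
    return (size_t)(~(long)(index | (size - 1 - index)) >>
                    (sizeof(long) * 8 - 1));
}

/* Mitigated: any speculative overshoot sees the index forced to zero. */
uint8_t victim_masked(size_t x) {
    if (x < A1_SIZE) {
        x &= mask_nospec(x, A1_SIZE);
        return array2[array1[x] * 512];
    }
    return 0;
}

int main(void) {
    return victim(3) + victim_masked(3);  /* exercise both paths */
}
```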

        • by tlhIngan ( 30335 )

          Many of the speculative execution vulns have turned out to be much ado about nothing in practice. At least so far. And the mitigations have gotten increasingly complex and impactful. So Torvalds is (sensibly, IMO) asking whether we should continue, or whether we should wait for more evidence that the vulns are real before we accept the complexity and performance cost of mitigating them. The problem is that no one can possibly know what the right answer to that question is.

Why not both? Have mitigations available...

  • Who's going to maintain the list of buggy hardware we need to avoid?

    • Ask ARM to sponsor the Intel bad list and ask Intel to sponsor the ARM bad list.
    • by UnknownSoldier ( 67820 ) on Monday October 21, 2024 @01:25PM (#64881569)

      A list of zero vendors is trivial to maintain. /s

      EVERY platform has bugs. Who is going to prioritize what is "critical" versus "mostly harmless"?

Well, it's the hardware vendor's responsibility. That seems to be the context here (which is a little thin in the summary). A vendor has to take an official position on whether or not a particular mitigation is needed, and then, if that slows their chip down, the slowed-down performance is the real performance. The controversy appears to stem from vendors not wanting to take a clear position: they want to still claim full performance and blame the kernel developers for applying the fix overly broadly...

  • by Anonymous Coward

    Screw you guys, I'm going home!

Simple, really: just refuse to run on crap machines.
  • by darkain ( 749283 ) on Monday October 21, 2024 @02:16PM (#64881693) Homepage

    So Linux is perfect and free from bugs or theoretical attacks and will never need to be updated ever again?

Linus has always had a poor outlook on systems security; it's been a constant struggle for a very VERY long time to get him to understand the implications of various attacks, even against the Linux kernel itself. This is nothing new. But now he wants his lax attitude to be more pervasive in other parts of the system too.

That is not at all what he said, and your second point is complete BS. That whole discussion is that Linus does not see any difference between a bug that affects the kernel in some way and a "security" bug in the kernel; as he sees it, they are both bugs, and the one can very easily be turned into the other. That is _all_ that discussion was about. Which pissed off a bunch of security researchers who want to collect fame and bounties.
  • "So I think this time we push back on the hardware people and tell them it's *THEIR* damn problem, and if they can't even be bothered to say yay-or-nay, we just sit tight.

How about getting back to our roots? If your hardware product does not use fully open firmware, we will ban it from being used on Linux. Right now, this is happening because the Linux Foundation is owned by Fortune 500 companies. So we get what we deserve by allowing this to happen.

    • Unfortunately (or not?), the days of Linux being maintained and controlled by pure-minded volunteers are long gone. Trying to shift back to that a) is not practical; and b) would likely result in a buggier, more vulnerable ecosystem.

      If you want highly-skilled devs, you likely need to pay them - and pay them well. How many stories have we seen on Slashdot about FOSS projects languishing specifically because they *didn't* have financial support?

The struggle is real, says the person who maintains JPEG code for (most of) the world. I can barely make the equivalent of a starting teacher's salary (which in the U.S. is rhetoric for "not much money", because education is underfunded here just like OSS is underfunded in general) by working full-time (much more than full-time during some weeks) as the maintainer and principal developer of three prominent OSS projects. I have managed to barely make that a viable career since 2009, but...
    • ```
      How about getting back to our roots ? If your hardware product does not use fully open firmware, we will ban it from being used on Linux.
      ```

Linux started on the i386 with the IBM PC BIOS.

      What roots are we talking about?

You're going to disable everything but RISC-V and POWER on an open source kernel?

      Please lay out your strategy.

I greatly prefer the software workaround unless the hardware solution can be proven to have zero performance impact. That results in the best outcome for everyone: those people who have high security requirements can enable it, and those who don't, don't have to.

I get it, it's annoying to maintain, and if anything the onus is on the hardware manufacturers to support the development of the software, but having it in software makes it customisable.
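That "let the user choose" model is in fact how Linux surfaces these mitigations today: each known CPU vulnerability and its active mitigation is reported under /sys/devices/system/cpu/vulnerabilities/, and most mitigations can be disabled wholesale with the mitigations=off boot parameter. Here is a minimal sketch in C that just prints that report (assuming a Linux kernel new enough to have that sysfs directory):

```c
/* Sketch: list each CPU vulnerability the kernel knows about and the
 * mitigation currently in effect, as reported under sysfs. */
#include <dirent.h>
#include <stdio.h>

int main(void) {
    const char *dirpath = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dirpath);
    if (!d) {
        perror(dirpath);
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;               /* skip "." and ".." */
        char path[512], line[256];
        snprintf(path, sizeof path, "%s/%s", dirpath, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(line, sizeof line, f) != NULL)
            printf("%-20s %s", e->d_name, line); /* e.g. "meltdown: Mitigation: PTI" */
        fclose(f);
    }
    closedir(d);
    return 0;
}
```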

  • by nomadic ( 141991 )

Dude complains a lot, doesn't he?

    • He speaks plainly about the problem that hardware vendors do not take responsibility for bugs in the hardware. It's a very real problem with very real consequences, and one of those consequences is unfair blame placed on Linux, and its chief maintainer.

If you don't want to read his legitimate complaints, effect change in how the hardware vendors handle the situation. Or just don't read it.

  • x86 hardware has had speculative execution vulnerabilities built into it since the Pentium. There's nothing new here. He should probably be glad that at least one company has fixed their problems in hardware.

They developed along with "new" ideas of the time, like preprocessing doing a database operation to check whether an instruction was assembled from a language that allows overloading or not.
