Two Linux Kernels Revert Performance-Killing Spectre Patches (phoronix.com)

On Friday Greg Kroah-Hartman released stable point releases of the Linux kernel: 4.19.4, 4.14.83, and 4.9.139. While they are basic maintenance updates, the 4.19.4 and 4.14.83 releases are significant because they also reverted the performance-killing Spectre patches (involving "Single Thread Indirect Branch Predictors," or STIBP) that had been back-ported from Linux 4.20, according to Phoronix:

There is improved STIBP code on the way for Linux 4.20 that applies STIBP only to SECCOMP threads and to processes requesting it via prctl(), but is otherwise off by default (that behavior can also be changed via kernel parameters). Once that code is ready to go for Linux 4.20, we may see it then back-ported to these stable trees.
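
For concreteness, here is a minimal sketch of what "requesting it via prctl()" could look like from userspace. This assumes the Linux 4.20 speculation-control interface (PR_SPEC_INDIRECT_BRANCH, documented in the kernel's spec_ctrl.rst) and headers new enough to define the constants; note that "disabling" indirect-branch speculation is what turns the STIBP mitigation on for a task:

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
        /* Opt this task (and its future children) in to STIBP by
         * disabling indirect-branch speculation for it. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                  PR_SPEC_DISABLE, 0, 0) != 0) {
            perror("PR_SET_SPECULATION_CTRL");
            return 1;
        }

        /* Read back the current state as a sanity check. */
        long state = prctl(PR_GET_SPECULATION_CTRL,
                           PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
        printf("indirect-branch speculation state: 0x%lx\n", state);
        return 0;
    }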

Aside from reverting STIBP, these point releases contain only various routine fixes, as noted in the changelogs for 4.19.4, 4.14.83, and 4.9.139.

Last Sunday Linus Torvalds complained that the performance impact of the STIBP code "was clearly way more expensive than people were told," according to ZDNet: "When performance goes down by 50 percent on some loads, people need to start asking themselves whether it was worth it. It's apparently better to just disable SMT entirely, which is what security-conscious people do anyway," wrote Torvalds. "So why do that STIBP slow-down by default when the people who *really* care already disabled SMT?"
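
As an aside, kernels of this era report the active Spectre v2 mitigation, including STIBP when it is in effect, through sysfs, so readers can check what their own machine is doing. A small sketch, assuming the standard /sys/devices/system/cpu/vulnerabilities interface:

    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/cpu/vulnerabilities/spectre_v2";
        char line[256];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return 1;
        }
        if (fgets(line, sizeof line, f))
            printf("spectre_v2: %s", line);  /* e.g. "Mitigation: ..." */
        fclose(f);
        return 0;
    }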
  • Consider (Score:4, Insightful)

    by 3seas ( 184403 ) on Sunday November 25, 2018 @10:42AM (#57696536) Homepage Journal

Sometimes I feel the responsiveness of my Windows system running on an i7 is slower than a Commodore 64. It should make people wonder, with all the advances in chip manufacturing, speed and..... Oh wait, Moore's law doesn't apply to user experience.

    • Re:Consider (Score:5, Insightful)

      by 110010001000 ( 697113 ) on Sunday November 25, 2018 @10:45AM (#57696544) Homepage Journal
Moore's Law has been dead for many years now. We can only expect single-digit improvements in CPU performance from now on. Of course, someone will reply with "what about quantum computers?" but those people don't even understand what quantum computers are.
      • Oh Christ you again. Moore's law is about transistor count, not speed.

      • Re:Consider (Score:5, Insightful)

        by gweihir ( 88907 ) on Sunday November 25, 2018 @03:02PM (#57697584)

        Indeed. And eventually even those single digit improvements will go away. Maybe we can start writing better software now?

        • by HiThere ( 15173 )

The only real way forward appears to be parallel processing. Unfortunately, many workloads have limiting serial components, and others appear to, because designing good parallel algorithms is really difficult. And, FWIW, there's suggestive evidence that correct, rather than usually correct, algorithms are going to be really hard to do. I don't think it's been proven impossible for most useful cases, but I wouldn't bet money.

          It may well be that neural net type applications are the optimal approach.
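
          The "limiting serial components" point is Amdahl's law: with a parallel fraction p of the work and n cores, speedup is capped at 1 / ((1 - p) + p / n). A quick back-of-envelope sketch in C, with hypothetical numbers:

              #include <stdio.h>

              /* Amdahl's law: speedup with n cores when a fraction p of the
               * work parallelizes and the rest stays serial. */
              static double amdahl(double p, int n)
              {
                  return 1.0 / ((1.0 - p) + p / n);
              }

              int main(void)
              {
                  /* Even at 95% parallel, the ceiling is 1/0.05 = 20x. */
                  int cores[] = { 2, 8, 64, 1024 };
                  for (int i = 0; i < 4; i++)
                      printf("p=0.95, n=%4d -> speedup %.1fx\n",
                             cores[i], amdahl(0.95, cores[i]));
                  return 0;
              }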

          • by gweihir ( 88907 )

Since parallel architectures have been studied for a long, long time (anybody remember Transputers?), we do know that most things do not really benefit from more cores. Where they do best is server loads with lots of independent tasks being run. But most standard stuff benefits little or not at all. In addition, most coders cannot write multi-threaded software, as it is a lot harder than it looks. Deadlocks, races, etc. are not fun, and testing loses effectiveness fast.
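
            The races-and-testing point is easy to demonstrate with a hypothetical sketch: two threads increment a shared counter, and removing the mutex usually (but not always) loses updates, which is exactly why testing loses effectiveness:

                #include <stdio.h>
                #include <pthread.h>

                static long counter = 0;
                static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

                static void *worker(void *arg)
                {
                    (void)arg;
                    for (int i = 0; i < 1000000; i++) {
                        pthread_mutex_lock(&lock);   /* without this lock, the  */
                        counter++;                   /* increments race and the */
                        pthread_mutex_unlock(&lock); /* total is usually short  */
                    }
                    return NULL;
                }

                int main(void)  /* build with: cc -pthread example.c */
                {
                    pthread_t a, b;
                    pthread_create(&a, NULL, worker, NULL);
                    pthread_create(&b, NULL, worker, NULL);
                    pthread_join(a, NULL);
                    pthread_join(b, NULL);
                    printf("counter = %ld (expected 2000000)\n", counter);
                    return 0;
                }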

            • by HiThere ( 15173 )

No. What we know is that, using our current algorithms, most things don't really benefit from more parallelism. You can't reasonably use the same algorithm for parallel computation as for serial computation. And designing good parallel algorithms is *HARD*. So we've only got a few.

              There's also suggestive evidence that there's often not a good algorithm, but only quite good heuristics, and often you can test the answer faster than you can solve it. When you can't, you gamble that multiple heuri

              • by gweihir ( 88907 )

We have been designing parallel algorithms for 40 years now, because it was always clear that multi-core would eventually be the only way to scale, and because we have had massively parallel systems for about that long. What we have is hence the result of intensive study. It is not much. It is likely close to what we will have long-term.

                Hence our "current" algorithms very much include parallel ones. If you think that we do not have more good parallel algorithms is a result from too little research, you are

              • by Bengie ( 1121981 )
I personally think the biggest block to parallelism is everyone being used to pre-made algorithms or libraries and attempting to shoe-horn their problem domain into a limited collection of general-purpose parallel algorithms. I find that fit-for-purpose parallel algorithms and data structures are quite abundant. I've been rolling my own for a long time. I started parallel programming around the age of 11. I found it quite intuitive. I see race conditions as a form of edge case. All you need to do is design
          • by Bengie ( 1121981 )
            I've seen a lot of designs that were made single threaded because contention was too high to be useful concurrently, but it was a self-fulfilling prophecy because the engineers never realized that perfect is the enemy of good. There are many cases where certain data types can be lossy and not hurt anything, and in these cases, contention could be virtually eliminated if the engineers could think outside the box.

            I see engineers designing locks with the idea of preserving ordering, even when ordering does n
      • by tlhIngan ( 30335 )

Moore's Law has been dead for many years now. We can only expect single-digit improvements in CPU performance from now on. Of course, someone will reply with "what about quantum computers?" but those people don't even understand what quantum computers are.

        Moore's Law doesn't say anything about performance. It only applies to transistor density. And transistor density has nothing to do with performance - other than being able to cram more cache into a processor. (The vast majority of transistors in a process

    • by aliquis ( 678370 )

      Reminds me of these clips:
      https://www.youtube.com/watch?... [youtube.com]

I've seen others too where they, say, boot the machine, launch a word processor, type something, save or print, and then turn it off again, and the much older machine did it quicker.
      https://www.youtube.com/watch?... [youtube.com]

Sometimes I feel the responsiveness of my Windows system running on an i7 is slower than a Commodore 64.

For some operations, it is, because the C64 was doing nothing in the background and could respond immediately. On the other hand, most of your peripherals have more processing power in them than the C64 had, and your computer is capable of feats that could not reasonably be achieved with a million C64s wired together. User experience, indeed.

on a base model (64 KB) C64? You kids today and your "slow" computers...

Seriously though, just pop up a command window or the text version of emacs (which is more or less the experience on your C64) and you'll find it plenty responsive. OTOH, pull up 20+ browser tabs, run a compile and a video encoder in the background, and maybe throw in some cryptocurrency mining and a BitTorrent client for fun, and yeah, you might occasionally see some lag on a window change.

      I think we're asking for a bit much from
If you think that your i7 is less responsive than a C64... then you might actually be correct.
Have a look at this guy's research on latency and input lag. Granted, some of the machines he has tested are fairly dated by now, but the general concept remains: newer machines, while essentially infinitely faster than a C64 or an Apple IIe, generally take longer to put a character on the screen once you've hit a key on the keyboard. Granted, they're also doing a lot more to put that charac

Dear companion in old age, are you sure you remember the responsiveness of your C64, or lack thereof? Let's not talk in hyperbole to the point that words no longer have meaning.
  • Distributions should handle this for everyday users, so they don't have to care. People who accidentally install a kernel should get safe defaults. Defaulting to insecure is wrong.

If you accidentally install a kernel, then I doubt you care about security. If you care about security, don't run Intel, or disable Intel's insecure SMT implementation.
If you care about security, don't run Intel

        I don't, but this isn't about me. Or at least, it's not about me at home. It doesn't really matter anyway, because basically nobody will ever install this kernel, but the principle stands.

        • Most companies will use this kernel. Although Spectre is bad, it isn't an issue in many workloads. The vast majority of Linux servers wouldn't be affected by Spectre in reality.
          • Most companies will use this kernel.

I don't think they will. There is supposedly another fix coming soon, so I think they'll just skip it and either wait or revert until the next kernel comes along.

      • by jmccue ( 834797 )

If you care about security, don't run Intel, or disable Intel's insecure SMT implementation.

Correct. OpenBSD defaults to disabling SMT, and they stated that on some hardware there is no option to disable SMT. So having an option in Linux to disable SMT is probably all that is needed. It seems Linux is jumping through hoops to fix hardware issues that never seem to end.

        • "So having an option in Linux to disabled SMT is probably all that is needed. "

You do know that option (booting with the noht flag) has existed since hyperthreading support landed, right? Or you can compile a kernel without hyperthreading support.

          • by jmccue ( 834797 )
No, I did not know about 'noht'. Thanks. I have not been paying attention to kernel parameter flags since the 1.x days; never needed to compile a kernel since then. :)
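
            For what it's worth, kernels of the 4.19 era also grew a runtime SMT switch alongside the 2018 mitigation work, in addition to the old boot flag discussed above: a "nosmt" boot parameter and a sysfs control file. A hypothetical sketch of flipping the latter (requires root; the shell equivalent is writing "off" to the same file):

                #include <stdio.h>

                int main(void)
                {
                    /* Writing "off" takes sibling threads offline at runtime;
                     * "on" brings them back. Assumes the control file present
                     * on 4.19-era kernels. */
                    const char *path = "/sys/devices/system/cpu/smt/control";
                    FILE *f = fopen(path, "w");

                    if (!f) {
                        perror(path);
                        return 1;
                    }
                    fputs("off\n", f);
                    fclose(f);
                    return 0;
                }
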
    • by aliquis ( 678370 )

I agree one shouldn't ship in an insecure state (possibly excepting local things: if granting video and audio access requires extra permissions / could be locked down, then I'm fine with that being usable from the start), but I'm well aware most distributions and operating systems don't do so.

I used to use the BSDs quite often, and then installed Fedora or something, and for the root password I used "ok" because, whatever; except that the machine ran SSH and allowed remote root logins by default. I guess someone may find that "c

Intel paid them off to not really slow their chips down

    • by thegarbz ( 1787294 ) on Sunday November 25, 2018 @01:05PM (#57697110)

      People who accidentally install a kernel should get safe defaults. Defaulting to insecure is wrong.

      People need to take an axe to their cable modem and only go on the internet once they have a degree in Computer Science majoring in security and risk assessment.

I mean, I assume you're okay with booting the entire world off that dangerous internet. You've said you're happy with crippling performance for an incredibly low risk that hasn't been shown to be exploited anywhere in public, so clearly you support throwing out the baby with the bathwater on security, right?

      • Re: (Score:2, Flamebait)

        by drinkypoo ( 153816 )

        People who accidentally install a kernel should get safe defaults. Defaulting to insecure is wrong.

        People need to take an axe to their cable modem and only go on the internet once they have a degree in Computer Science majoring in security and risk assessment.

        What? Put down the crack pipe, me laddo.

        I mean I assume you're okay with booting the entire world off that dangerous internet.

        You, and umption.

        You've said you're happy with crippling performance

        at boot time, and by default, but can be disabled

        for an incredibly low risk that hasn't been shown to be exploited anywhere in the public

        Oh, how cute. You're planning only for today. Go ahead, karma, show him what he's won.

        • at boot time, and by default, but can be disabled

Exactly as I said. People by default should not be allowed on the internet, right? I mean, it's a security risk that is many orders of magnitude worse than what you are proposing as a default.

          Oh, how cute. You're planning only for today.

Nope. We're analysing the risk that is presented and the possible ways it could be exploited, with a very clear answer: it won't be, not on a desktop computer, because while you're kicking yourself to close security holes, I'm sending emails to your mother claiming I'm Microsoft and that due to a problem on her computer she sho

    • by gweihir ( 88907 )

      I agree. But Linus is the big-picture guy here, so I can accept that he has overriding concerns. Personally, I do not even have a CPU that does SMT, as it is a pretty bad technology anyways.

  • by Anonymous Coward

The 9th-gen Core i7 is losing hyperthreading anyway; Intel realizes that the only way to be safe is to eliminate hyperthreading. The real solution has always been to turn off hyperthreading. So many fixes have already proven to be flawed and exploited, which is why some are implementing just disabling hyperthreading through the OS.

  • by Anonymous Coward

Linus made a point here; however, I think even he is wrong.

Would people be happy working on a 486 architecture that is 100% safe? (Let's pretend 100% safe exists for the sake of the argument.) I'd rather get things done quickly than spend part of the afternoon completing just one of the jobs I'm meant to do. Would you spend, say, 10 times longer to complete your jobs on a totally safe system, as opposed to completing them much faster on a system that is quite safe, but not 100%?

    And now where Linus gets it wrong

Um, Linus never said that SMT didn't improve performance. He just said that people who care about that level of security have already disabled it, so why bother ruining the performance for people who don't care?
      • by gweihir ( 88907 ) on Sunday November 25, 2018 @03:07PM (#57697610)

Also take into account that this comment is mostly targeted at virtualized server loads, not desktop computing. While there are these scenarios of "JavaScript in the browser steals your keys," the actual threat is "VM1 steals the keys of VM2," for various reasons. My prediction is that SMT will basically be dropped by CPU manufacturers. AMD was never very keen on it, and Intel is currently reversing its propaganda on how great it is.
