
Top Linux Developer On Intel Chip Security Problems: 'They're Not Going Away.' (zdnet.com)

During his Open Source Summit Europe keynote speech, Greg Kroah-Hartman, the stable Linux kernel maintainer, said Intel's CPU security problems "are going to be with us for a very long time" and are "not going away." He added: "They're all CPU bugs, in some ways they're all the same problem," but each has to be solved in its own way. "MDS, RIDL, Fallout, ZombieLoad: They're all variants of the same basic problem." ZDNet reports: And they're all potentially deadly for your security: "RIDL and ZombieLoad, for example, can steal data across applications, virtual machines, even secure enclaves. The last is really funny, because [Intel Software Guard Extensions (SGX)] is what's supposed to be secure inside Intel chips" [but, it turns out, it's] "really porous. You can see right through this thing." To fix each problem as it pops up, you must patch both your Linux kernel and your CPU's BIOS and microcode. This is not a Linux problem; any operating system faces the same issue.

OpenBSD, a BSD Unix devoted to security first and foremost, was, as Kroah-Hartman freely admits, the first to come up with what's currently the best answer for this class of security holes: turn Intel's simultaneous multithreading (SMT) off and deal with the performance hit. Linux has adopted this method. But it's not enough. You must also secure the operating system as each new way to exploit hyper-threading appears. For Linux, that means flushing the CPU buffers every time there's a context switch (e.g., when the CPU stops running one VM and starts another). You can probably guess what the trouble is. Each buffer flush takes a lot of time, and the more VMs, containers, whatever, you're running, the more time you lose.
"The bad part of this is that you now must choose: Performance or security. And that is not a good option," Kroah-Hartman said. He added: "If you are not using a supported Linux distribution kernel or a stable/long term kernel, you have an insecure system."

Comments Filter:
  • by rlwinm ( 6158720 ) on Monday October 28, 2019 @09:34PM (#59356792)
    These are mostly timing attacks, which may be an issue if something is virtualized and not all components can be trusted. But for a more traditional deployment, where a server runs a set of processes dedicated to an application, it's less vital. Sure, it's still a risk - but I tend to live under the assumption that any code executed on a box by an attacker compromises the whole box (and in some cases even taints the hardware).

    So it's best to stick to the same kinds of practices the OpenBSD folk have been doing for years: good controls on code quality, audits, turning off anything unnecessary, limiting feature sets (sorry, managers), etc.
    • Is there a proof-of-concept attack somewhere? And if there is, can it be performed from JavaScript? Any attack without a PoC is not an attack.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Spectre, Meltdown, et al are all bullshit. The chances of any attacker not only getting into a hardened network and hardened computers on that network, but then somehow gleaning important information at 10 bytes per second in a huge, encrypted memory environment, are basically nil. This is an excuse for the hardware and software makers to get people to give them more money.

      Technically, they aren't even bugs; they are the way that modern microprocessors are INTENDED to work. For the tinfoil hatters, maybe they sho

      • Well, if your conspiracy theory is that Intel is doing this intentionally, then I think it backfired on them. I've bought AMD since my first homebuilt, but a few years ago I finally succumbed to the Intel hype and AMD's shit performance and switched over. Not even a year later this all drops, and they start furiously patching my OS and neutering the performance I spent that extra cash on. This starts happening right as AMD finally starts offering decent chips again. Intel was already on thin ice with ARM disrupting
        • I'm glad I waited the extra couple years to do my new build. I was so close to buying Intel after dealing with shitty Phenoms for years. But now my Ryzen 3800x is running smooth as butter, and the 3000 series are immune to all the speculative execution attacks. I made sure to buy ECC memory to take advantage of that as well. I still have avoided buying any Intel CPUs. I did break down and get an Intel SSD once, but I'm using Samsungs in my new build.

          My only complaint is a lot of the Ryzen 3000-compatible mo

          • Dude. Asus AMD boards are the worst of the breed. Only 4 VRMs, which get 100C hot! It won't last.

            For the 3800X and 3900X you need the expensive top-tier X570 boards, not the B-series. The Gigabyte Master is top rated but has the most horrible BIOS. The MSI Ace is in 2nd place, and I own the Intel version of that board. Don't get the Godlike, as it is $$$$ and the cheaper MEG Ace has the same VRMs - just no liquid-nitrogen support for overclocking or 3 NVMe cooler covers for RAID 0. You can still do RAID, but only one has a metal plat

          • First, don't skimp on the motherboard, unless you are already skimping hard on the CPU.

            Second, low end ASUS for AMD is particularly bad. If you are spending under $300 then MSI is a great choice with ASRock as the runner up.
        • When did this happen? AMD chips are already faster than Intel. AMD shit-canned the horrible Bulldozer architecture and hired an Apple/Alpha chip designer to create Ryzen.

          AMD demolishes the i9 https://www.trustedreviews.com... [trustedreviews.com] even on single core benchmarks

        • Unfortunately I don't think one series of successful AMD processors is going to convince corporations to change right away. That includes the IT department I currently head. There are a few reasons for that:
          - The past reputation of AMD processors is meh. Nobody was going to hang their reputation on a $100-200 saving per workstation.
          - Because Intel was the go-to, there was/is a larger hardware offering to satisfy a wider spectrum of needs.
          - Lots of software makers build their applications around Intel processors, h

      • by Megol ( 3135005 )

        Modern processors aren't designed to leak information - that they do is unavoidable* but not something designed for. Thinking they are is a clear indication of either not knowing shit or being a tinfoil hatter...

        (* Realistic shared resources always leak some information; however, most leaks are so noisy they aren't a problem. The new generation of attacks isn't noisy enough.)

      • by fintux ( 798480 ) on Tuesday October 29, 2019 @07:39AM (#59357766)

        A huge portion of services is running on cloud-based virtual computers (like Amazon Web Services or Microsoft Azure). You don't need to hack your way onto a server; you can simply buy a slice of computing power from said service providers. And because these flaws make it possible to get data across virtual servers, this is a problem. At 10 bytes per second, you can get 315 MB per year from a single instance, and one can easily replicate that over a huge number of servers. That's a pretty good chance of getting something of significance, like cryptographic keys - especially since the important pieces of data are often such that they're loaded in memory, and often in multiple copies.

        The issue with this kind of side-channel attack is that it often doesn't need any other vulnerability to exploit, and it is able to cross so many security boundaries. You don't need root access or anything like that; it is enough to be able to run some code in some sort of environment (even JavaScript that has been sandboxed) to exploit these vulnerabilities. Sure, there are situations where these don't matter - like a non-virtual server only running trusted code - but there are even more situations where they do.

    • by K. S. Kyosuke ( 729550 ) on Tuesday October 29, 2019 @01:39AM (#59357318)
      You mean it's yet another reason to buy your own hardware and *not* run in the cloud, in addition to all the reasons we've already had.
    • by rgmoore ( 133276 )

      These types of attacks are about more than just servers running in the cloud. They're also potential attacks against web browsers running multiple tabs. Malicious JavaScript in one tab could steal your banking information from another tab, even if the JavaScript is run in a sandbox and each tab runs in a separate process.

      • by vyvepe ( 809573 )
        JavaScript attacks are easily and cheaply defeated by lowering the precision and increasing the jitter of the timers available from the JavaScript API. JavaScript is not a problem with a good browser.
        • by fintux ( 798480 )
          Increasing the jitter doesn't defeat the attacks, it just makes them slower. It adds noise to the measurements, but noise can be dealt with by just taking more samples and filtering the data (like median stacking or averaging).
          • by vyvepe ( 809573 )
            Agreed. You can filter out jitter. But you need more and more measurements if you have more jitter at more and more frequencies. With enough noise the attack will not be practical. The point is that you have only the limited resources of the target computer. You cannot arbitrarily increase the number of samples/measurements the way LIGO measures subatomic distance variations by ramping up the power (among other things).
          • by gweihir ( 88907 )

            Indeed. The goal here would be to, say, make a 128-bit key take > 100 years to sniff. That would probably defeat this attack. But getting the numbers right is really hard and would probably run into issues with the attack itself getting better.

        • by gweihir ( 88907 )

          This is actually very hard to get right. You need to make sure the original signal is far enough below the noise floor, and your jitter must be crypto-quality. Even then, the attack just takes longer. It may be possible to make the attack take a few hundred years that way, but this is very hard to assure.
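
          To make the back-and-forth above concrete, here is a minimal, purely synthetic C simulation (no real side channel is measured; the numbers are invented for illustration): an 80-"cycle" difference is buried under uniform jitter roughly 25 times larger, yet the gap between the two medians re-emerges once enough samples are taken. How many samples count as "enough" is exactly what the defender is trying to push out of reach.

              /* jitter_demo.c - a constant timing difference survives large random
               * jitter once you take the median of many samples.  Synthetic only. */
              #include <stdio.h>
              #include <stdlib.h>

              static int cmp(const void *a, const void *b)
              {
                  int x = *(const int *)a, y = *(const int *)b;
                  return (x > y) - (x < y);
              }

              /* one "measurement": base latency plus uniform jitter in [0, 2000) */
              static int measure(int base) { return base + rand() % 2000; }

              static int median(int *v, int n)
              {
                  qsort(v, n, sizeof *v, cmp);
                  return v[n / 2];
              }

              int main(void)
              {
                  enum { MAX = 100000 };
                  static int slow[MAX], fast[MAX];
                  const int sizes[] = { 10, 100, 1000, 10000, 100000 };

                  srand(42);
                  for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
                      int n = sizes[i];
                      for (int k = 0; k < n; k++) {
                          fast[k] = measure(100);   /* e.g. cache hit        */
                          slow[k] = measure(180);   /* e.g. cache miss (+80) */
                      }
                      printf("n=%6d  median difference = %d  (true signal: 80)\n",
                             n, median(slow, n) - median(fast, n));
                  }
                  return 0;
              }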

      • Then disable JavaScript. If you do not permit other people's untrustworthy code to execute on your computer, there is no problem.

        • by fintux ( 798480 )

          Have you actually tried browsing (or rather, using - a lot of it is interaction, not just looking at stuff) the Internet without JavaScript recently?

    • by gweihir ( 88907 )

      Not overblown at all. As soon as somebody publishes a reliable exploit, anybody can attack these vulnerabilities on the cheap. Your set-up may still be secure (mine likely is too), but anything in the cloud is at risk, and that is a lot.

  • by AHuxley ( 892839 ) on Monday October 28, 2019 @09:39PM (#59356798) Journal
    Test and see if that's a better CPU.
    If yes, buy it and support that product :)
    Choose a different brand.
    • Yeah, golly, it sounded like their "two options" were leaving out something!

    • by AmiMoJo ( 196126 ) on Tuesday October 29, 2019 @05:29AM (#59357588) Homepage Journal

      AMD CPUs are the best choice for most people right now. A few Intel models can get a few percent better performance on the odd game or benchmark, but also cost a lot more. Unless your primary interest is topping benchmark league tables you should get a Ryzen.

    • by GuB-42 ( 2483988 )

      I'd like to hear from AMD about these issues. They keep saying they are secure but are they?

      The real big thing is that AMD CPUs are not vulnerable to Meltdown, the worst of these attacks, but ultimately that's just one of many, and an easier one to fix. And indeed Intel fixed it in its latest CPUs.

      But AMD doesn't do magic: they also use speculative execution and caches, and are therefore susceptible to timing attacks. So, generally, everything that applies to Intel applies to AMD (and ARM) too, even if some sp

      • AMD uses secure instructions for their SMT. Internally each thread has an ID and is partitioned in hardware on the chip. Intel has no such concept, as its SMT was designed in the 1990s. AMD-V has security built in as well, which Intel VT does not.

        Intel will need to redo these from scratch, which is difficult as it would break compatibility.

        Intel never updated their shit from when Windows 98 was king, where everything was local admin, and it shows

      • by gweihir ( 88907 )

        AMD has different branch-prediction. The trick in the Spectre attacks is to train the branch predictor to make the wrong jump at a specific point so the CPU speculatively executes to some place it is not supposed to. For the Intel branch-predictor, this is apparently pretty easy and the resulting attack is still hard to do. For the AMD branch-predictor, this is apparently pretty hard and may just make the attack overall infeasible on AMD.

        Hence it is unknown whether this works in practice on AMD. It is known
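
        For readers who have not seen what "training the branch predictor" looks like in practice, below is the shape of the canonical Spectre-v1 (bounds-check bypass) gadget in C. It is a sketch of the vulnerable code pattern, not a working exploit: the measurement side - recovering which line of probe[] became cached - is omitted, and whether the mistraining step succeeds is exactly the Intel-vs-AMD difference discussed above.

            /* Shape of a Spectre v1 gadget.  The attacker first calls victim()
             * many times with in-bounds x to train the branch predictor that
             * the "if" is taken, then once with an out-of-bounds x.  The CPU
             * speculatively executes the body anyway, and the dependent load
             * leaves a secret-dependent cache line of probe[] warm. */
            #include <stdint.h>
            #include <stddef.h>

            uint8_t array1[16];
            size_t  array1_size = 16;
            uint8_t probe[256 * 4096];  /* one page per possible byte value */
            uint8_t sink;               /* keeps the probe access from being optimized out */

            void victim(size_t x)
            {
                if (x < array1_size) {                    /* predicted taken...          */
                    uint8_t secret = array1[x];           /* ...so this may read past    */
                                                          /* the end of array1           */
                    sink &= probe[(size_t)secret * 4096]; /* cache side effect encodes   */
                                                          /* the speculatively read byte */
                }
            }

            int main(void)
            {
                for (int i = 0; i < 1000; i++)
                    victim(i % array1_size);   /* training: in-bounds, branch taken   */
                victim(100000);                /* malicious call: out-of-bounds index */
                return 0;
            }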

  • Comment removed based on user account deletion
    • by Rockoon ( 1252108 ) on Monday October 28, 2019 @10:40PM (#59356958)
      ...and for those unaware:

      In the fabrication world it takes about 5 years between initial design and production. For instance, AMD's chiplet designs started ~8 years ago. Being generous to Intel, let's suppose they began a redesign to fix their security nightmare early last year; then they are still about 3 years away from producing a chip without the flaws.

      Dropping a bit of the generosity, Intel also has to contend with skipping 10nm and coming up with their own chiplet designs (they can't go back to the Pentium 4 "dual core" chiplet design, because that also brings back everything else from their horribly performing NetBurst era). The only time Intel has been able to race a design to production (~2 years) was the Core 2 series, because that was just reverting back to the Pentium 3 design and upgrading their hyperthreading to a full second core.

      I've been saying this for years here: Sell your Intel stock. Make sure your 401(k)s and Roths aren't anywhere near Intel. Intel is completely fucked, and Intel has known it at least since their first round of massive layoffs and their announcement of a new "cloud strategy."

      This isn't like a retail store that can go under overnight. It's slow motion. The killing blow has already been dealt by the rent-a-fabs. Intel is ragdolling into the gutter.
      • Cloud strategy is exactly where these bugs bite them, too.

      • Sadly, enterprises are still buying Intel. They are probably getting a discount, but Intel has inertia. They have long been shady, and a worse deal than AMD, but that hasn't stopped them before, so why should it stop them now? Lots of people around here are willing to make excuses for Intel about how these exploits don't matter because they are difficult to execute. Intel can sell that same story to probably the majority of high-dollar decision makers. If they were intelligent, discerning, or scrupulous, they

        • I can't help but wonder if these bugs were perhaps intentionally created for the NSA. Intel's chips have long been considered a key part of national security and the company has always had a close relationship with the US government. This is probably part of why Intel has always been so arrogant (I once heard an Intel vice president respond to a question after giving a talk, saying "We don't give a fuck, we're Intel.") There has been concern that the NSA may be working with US manufacturers to include ba
          • The fact that some ARM chips have similar bugs means that the NSA probably has nothing to do with this.
            More likely, the easy solution to the problem of speculative memory access is just wrong from a side-channel standpoint.
          • The TPM chips - Intel Management Engine, and AMD's version fTPM - are where the backdoors are installed. Notice that both manufacturers started putting it on all their boards at the same time. Someone went to both companies, and made them an offer they couldn't refuse.

            In the late 90s, Congress tried passing a law mandating that all computers include a TPM. That proved to be unpopular. When it couldn't be done legitimately - by vote - they went to the manufacturers in secret. By 2005 you could not buy a des

        • >"Sadly, enterprises are still buying Intel. "

          They don't have a whole lot of choice, because the high-end "enterprise" equipment they need has all been based on Intel by the big players that can support it (HP, Dell, etc). But that is likely to change now. They can't afford to keep ignoring AMD, and customers are likely to start demanding additional options.

        • Lots of VPS hosts, data centers, etc. are still buying Intel. And that's where you'd think AMD would shine - excessive amounts of cores, and not much gaming or multimedia workload. Intel's advantage in the cloud/virtualization space disappeared as soon as Spectre and Meltdown came out. Whether it catches up to them remains to be seen.

        • Intel just works. AMD is less reliable: chipsets, internal QA, support from vendors, etc. AMD is starting to work on this, but no one wants a server that has reliability problems or weird chipset quirks; AMD outsourced chipset design to ASMedia to cut costs.

          It will be years before AMD can make inroads in the data center again.

      • I wonder how long Intel has known about this problem.
        • by geek ( 5680 )

          I wonder how long Intel has known about this problem.

          They've known it was possible from the design stage. They just hid their heads in the sand. Intel has been taking risks like this for performance reasons; it's how they've maintained their advantage over AMD, who was doing things right.

      • by gweihir ( 88907 )

        Pretty much this. For these 3 years they have nothing, and when they do have something it will be slow and have other issues, as it was designed in haste. At the same time AMD will have its 3rd generation of a much better design out.

        Intel has gigantic financial reserves, so they may survive this, but this is in no way assured.

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday October 28, 2019 @10:43PM (#59356972) Homepage Journal

      Wrong: POWER actually does have the same problem. It's often overlooked that IBM pulled exactly the same shady shit as Intel. At least POWER7 through POWER9 are vulnerable to MELTDOWN.

      AMD has come out of this looking like a goddamned hero.

      • the same shady shit as Intel.

        The fact that so many chip makers have this problem makes me think that it's less about shady dealings and more the common issue that engineers and developers don't consider security. Putting security first is hard, and it is a new paradigm. For tech professionals who have spent their entire careers working through different models, it can be hard to adapt to. I think this just shows how pervasive the mindset is in the industry, and hardware design will require the same overhauls that software design did to account f

        • Security is part of the job. The Intel chips checking security only after the access is not by accident; that decision can only have been made consciously. Whoever made the decision probably thought it would be good enough, but pretending that the CPU's security features provided guarantees they knowingly did not provide was fraud.

          • by Agripa ( 139780 )

            Security is part of the job. The Intel chips checking security only after the access is not by accident; that decision can only have been made consciously. Whoever made the decision probably thought it would be good enough, but pretending that the CPU's security features provided guarantees they knowingly did not provide was fraud.

            Intel's vulnerability is a side effect of their design, which checks for exceptions at instruction retirement and thereby simplifies exception handling. Unfortunately, this means the state of caches and buffers is altered by bad speculation.
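
            That altered cache state is what the published attacks then read back with a timing probe. A minimal Flush+Reload primitive in C (x86-64 only, GCC/Clang intrinsics; the timing threshold is machine-dependent) looks roughly like this:

                /* flush_reload.c - the timing primitive behind most of these attacks.
                 * We flush a line, let the code under test run, then time a reload:
                 * a fast reload means somebody touched that line in the meantime. */
                #include <stdio.h>
                #include <stdint.h>
                #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, _mm_lfence, __rdtscp */

                /* Cycles taken to load *addr; fences keep the timed load from being
                 * reordered around the rdtscp reads. */
                static uint64_t time_load(const volatile uint8_t *addr)
                {
                    unsigned aux;
                    uint64_t start, end;

                    _mm_mfence();
                    start = __rdtscp(&aux);
                    (void)*addr;
                    end = __rdtscp(&aux);
                    _mm_lfence();
                    return end - start;
                }

                int main(void)
                {
                    static volatile uint8_t line[64];

                    _mm_clflush((const void *)line);   /* evict the line */
                    _mm_mfence();
                    printf("cold load: %llu cycles\n", (unsigned long long)time_load(line));
                    printf("warm load: %llu cycles\n", (unsigned long long)time_load(line));
                    return 0;
                }

            In an actual attack the warm-vs-cold check is run over an array of probe lines, one per possible secret byte value, after the speculative window has executed.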

  • System Sharing (Score:5, Insightful)

    by TechyImmigrant ( 175943 ) on Monday October 28, 2019 @10:49PM (#59356996) Homepage Journal

    All these problems arise from code running for different users sharing the same hardware.
    Own or rent the whole box and run only the software that serves your interests and cross core/cross thread/cross VM attacks are moot. Your adversary is on a different box, unless you really messed up.

    • You hope your adversary is on a different box. If you are renting servers from Amazon, how will you know who is sharing your CPU?
    • by piojo ( 995934 )

      What about running an untrusted application in a sandbox? Is that not valid?

      • by rgmoore ( 133276 )

        No, sandboxing is not enough; a major problem with these vulnerabilities is that they allow sandboxed applications to extract data from other processes running on the same machine. The basic problem is that these are hardware flaws, so they're very hard to mitigate at the software level. You can probably do it by doing things like flushing the cache when switching between programs, but that undermines a lot of the performance advantages the clever hardware was supposed to achieve.

        • by gweihir ( 88907 )

          Indeed. And that is the whole problem here, and that is also why Intel is so desperate to gloss over this thing. You really only have a choice between security and performance on Intel, and there will be stretches where you do not get that security even if you chose it.

        • by piojo ( 995934 )

          I didn't mean to say sandboxing would fix the problem. I was replying to TechyImmigrant's claim that running untrusted processes on the same system is somehow wrong, and the implication that these problems are our fault when they occur. That's nonsense.

    • Re:System Sharing (Score:4, Insightful)

      by fph il quozientatore ( 971015 ) on Tuesday October 29, 2019 @03:37AM (#59357444)
      You know some of these bugs can be exploited by malicious javascript, right?
      • Re:System Sharing (Score:4, Interesting)

        by vyvepe ( 809573 ) on Tuesday October 29, 2019 @06:31AM (#59357664)
        No, they cannot if you have a recent enough browser (JavaScript interpreter). These attacks can be easily mitigated by lowering the precision and adding some jitter to the timing functions available in JavaScript. If you are way too paranoid, you can detect and kill scripts which are trying to measure time on their own (e.g. tight loops incrementing a counter), but it is enough to introduce random jitter into the JavaScript interpreter's timers to mitigate that. That being said, you should still use something like uMatrix to block JavaScript selectively. There can still be other errors in APIs available from JavaScript which can be used by an attacker.
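
        Conceptually the mitigation is tiny. Below is a sketch in POSIX C of a coarsened, jittered clock; the 100-microsecond granularity and the mixing constant are invented for illustration, and real browsers pick their own parameters and implement this inside the JS engine. Note the jitter is a deterministic function of the coarse bucket, so repeated queries inside the same window return the same value and averaging cannot recover the discarded precision (the filtering objection raised elsewhere in this thread).

            /* coarse_clock.c - the "lower the precision, add jitter" timer idea. */
            #include <stdio.h>
            #include <time.h>

            #define GRANULARITY_NS 100000LL   /* clamp timestamps to 100 us */

            static long long raw_now_ns(void)
            {
                struct timespec ts;
                clock_gettime(CLOCK_MONOTONIC, &ts);
                return ts.tv_sec * 1000000000LL + ts.tv_nsec;
            }

            /* pseudo-random but repeatable offset derived from the bucket index */
            static long long jitter_for(long long bucket)
            {
                unsigned long long x = (unsigned long long)bucket * 0x9E3779B97F4A7C15ULL;
                x ^= x >> 31;
                return (long long)(x % (unsigned long long)GRANULARITY_NS);
            }

            /* what a script would be allowed to see */
            static long long coarse_now_ns(void)
            {
                long long bucket = raw_now_ns() / GRANULARITY_NS;
                return bucket * GRANULARITY_NS + jitter_for(bucket);
            }

            int main(void)
            {
                for (int i = 0; i < 5; i++)
                    printf("raw %lld ns   coarse %lld ns\n", raw_now_ns(), coarse_now_ns());
                return 0;
            }
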
    • by gweihir ( 88907 )

      Unfortunately, web servers are running code on your machine too these days, and do so in what looks like pretty ordinary web surfing. The only way to be really secure is to use a dedicated machine for web surfing on which you do no crypto and have no sensitive information. Also, single-tab surfing whenever you log in to something may become necessary eventually, depending on how reliable and powerful the exploit code that eventually becomes available turns out to be.

      All this is so much hassle that moving to AMD (not immune, b

      • by vyvepe ( 809573 )

        The critical difference is that web-servers will run only interpreted script on your machine by default. You can mitigate the attacks by adding jitter to the time measurements available from javascript.

        Using ad-block, noscript or uMatrix is a good idea. Getting an AMD processor is a good idea too.

        • by gweihir ( 88907 )

          The critical difference is that web-servers will run only interpreted script on your machine by default. You can mitigate the attacks by adding jitter to the time measurements available from javascript.

          That is tricky to get right. It would be nice if that solved the issue. But if it only reduces the signal, the attacker just needs a bit longer. Hence, given a thorough mathematical analysis and a secure implementation ("crypto-quality" jitter), I would welcome such an approach.

          Agreed on the rest of your statement.

          • GPS SA gets this right. The offset is random but varies at a much lower frequency than the signal of interest, and the mean takes a long time to discern and tells you nothing about the current offset. To get around it you need a fixed reference (like differential GPS does).

            For crypto purposes, this is harder. Trying to suppress a signal with noise is a loser's game.
            Better still to use modes of operation that make iterative inference impossible, such as changing the key and data every iteration through a bl

    • by AmiMoJo ( 196126 )

      Problem is, renting a box doesn't scale. If your website suddenly gets Slashdotted, then your box melts and the site goes down.

      That's why everything is moving to VM instances in the cloud. When traffic goes up the system automatically spins up some new instances on its pool of servers and keeps the service responsive. You don't have to pay to run all those servers 24/7 just waiting for the next hit.

    • by SIGBUS ( 8236 )

      Just pray that you don't fall victim to some zero-day RCE in a service you run.

      • >Just pray that you don't fall victim to some zero-day RCE in a service you run.

        What makes you think everyone uses cloud computing to run services?

        A lot of it is to do compute tasks on large numbers of machines simultaneously. I've done it. A month to compute on a fast PC; $100 gets it done in a few hours. A bargain when you have a product to design. Write your code to cope with 2+ machines and you can easily dump it on 1000 cloud machines if you need to.

  • How many VMs/containers do people typically run on a single server? In my (admittedly limited) experience, it generally is somewhere around the number of processor cores.

    If this is the case, then one solution would be to dedicate a CPU to each VM or container. Excess CPUs can be allocated dynamically (with performance hits). This obviously eliminates a lot of flexibility, but would eliminate nearly all Intel security issues.
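
    The building block for dedicating cores exists today: CPU pinning. Below is a minimal sketch of the Linux-side primitive, sched_setaffinity(2); hypervisors expose the same idea per vCPU (e.g. libvirt's vcpupin). As the replies below point out, though, tying every VM to fixed physical cores gives up much of the flexibility that makes consolidation attractive.

        /* pin_self.c - pin the calling process to one CPU core.  Linux-only;
         * VMs get the equivalent via hypervisor vCPU pinning. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            int cpu = (argc > 1) ? atoi(argv[1]) : 0;
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(0, sizeof set, &set) != 0) {  /* pid 0 = self */
                perror("sched_setaffinity");
                return 1;
            }
            printf("pinned to CPU %d\n", cpu);
            return 0;
        }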

    • Hahaha. More like 16 VMs to a CPU for windows desktops, and 8-20 per CPU for servers. More at cheaper levels.

    • You do realize what it would do to performance to force-assign a physical CPU to each VM, right?

      In all the hypervisors, the virtual CPU and physical CPU are completely untethered. None of the virtual machine systems are designed to tether a VM to a physical CPU - though in some you can - and the performance hit to do so would be pretty significant. And it's not just the CPUs, it's the memory too. Tethering a VM to a physical CPU and a specific memory space would be devastating to performance; you'd be better

  • So if you alter the OS to flush buffers every time you change processes, much of the style that has zillions of processes at a time gets disfavored. It might make sense to have many fewer processes, and perhaps more applications that use threads, where everything in them is, in theory at least, inside one security boundary. However, many pieces of code pull in functions from Lord knows where (and He isn't telling us, let alone the users!) which sometimes do nefarious things. Having them live in one address s
