Security Linux

Proof-of-Concept Linux Rootkit Leverages GPUs For Stealth

itwbennett writes: A team of developers has created a rootkit for Linux systems that uses the processing power and memory of graphics cards instead of CPUs in order to remain hidden. The rootkit, called Jellyfish, is a proof of concept designed to demonstrate that running malware entirely on GPUs is a viable option. According to the Jellyfish developers, such threats could be more sinister than traditional malware programs, in part because no tools exist to analyze GPU malware.
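To make the idea concrete, here is a minimal, hypothetical sketch of the mechanism (not Jellyfish's actual code): an ordinary OpenCL host program copies data into GPU memory once and then transforms it there, so the readable form never has to sit in system RAM where conventional forensic tools look. The buffer names, the key, and the trivial XOR "cipher" are invented for illustration.

```c
/* Hypothetical sketch: park a string in GPU memory and XOR it there with
 * OpenCL, so the readable form does not persist in system RAM. Names and
 * the toy XOR step are invented for illustration; this is not Jellyfish
 * code. Build with something like: cc jellyfish_sketch.c -lOpenCL        */
#include <CL/cl.h>
#include <stdio.h>

static const char *kernel_src =
    "__kernel void xor_buf(__global uchar *data,\n"
    "                      __global const uchar *key, uint keylen) {\n"
    "    size_t i = get_global_id(0);\n"
    "    data[i] ^= key[i % keylen];   /* obfuscate in place, on the GPU */\n"
    "}\n";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    if (clGetPlatformIDs(1, &plat, NULL) != CL_SUCCESS ||
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no usable OpenCL GPU found\n");
        return 1;
    }
    cl_context ctx     = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    unsigned char secret[] = "pretend this is a hidden log entry";
    unsigned char key[]    = "k3y";
    size_t n = sizeof(secret) - 1, keylen = sizeof(key) - 1;

    /* Copy the data to GPU memory once; from here on it lives there. */
    cl_mem dbuf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 n, secret, NULL);
    cl_mem kbuf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 keylen, key, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "xor_buf", NULL);

    cl_uint klen = (cl_uint)keylen;
    clSetKernelArg(k, 0, sizeof(cl_mem), &dbuf);
    clSetKernelArg(k, 1, sizeof(cl_mem), &kbuf);
    clSetKernelArg(k, 2, sizeof(cl_uint), &klen);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* Read back only to show the buffer is now obfuscated; a rootkit would
     * simply leave it on the GPU and decode it on demand.                 */
    unsigned char back[64] = {0};
    clEnqueueReadBuffer(q, dbuf, CL_TRUE, 0, n, back, 0, NULL, NULL);
    for (size_t i = 0; i < n; i++)
        printf("%02x ", back[i]);
    printf("\n");
    return 0;
}
```

The developers' point is that nothing in the usual host-side toolbox (process listings, scans of system RAM) sees the contents of that GPU buffer while it sits in video memory; only GPU-aware analysis tools would, and those barely exist.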
  • by TWX ( 665546 ) on Friday May 08, 2015 @01:55PM (#49648733)
    ...my Rage XL in my otherwise-headless server, will it?
    • The malware uses OpenCL. Nothing using a fixed graphics pipeline should be infectable, since such hardware isn't capable of general-purpose computation.
  • by __aabppq7737 ( 3995233 ) on Friday May 08, 2015 @01:59PM (#49648773)
    Recently it was discovered that certain GPUs can be manipulated to create a radio antenna via internal circuitry [securityweek.com]. Combine this with a relatively unmanaged kernel on the GPU to create silent malware and a peer-to-peer, radio-communicating botnet.
    • by Anonymous Coward on Friday May 08, 2015 @03:45PM (#49649681)

      Here's a post from 10 years ago [hackaday.com] about a program that can turn your display into a radio transmitter. I think the provided code plays a MIDI version of Beethoven's Für Elise, but there are variations that play any MP3.

      Just to be clear: this works by calculating the pixel clock, then using high/low (white/black) swings to generate electromagnetic signals at the corresponding frequencies. I remember running this on an old 300MHz PII Thinkpad. It transmitted through the VGA port and could easily be picked up on a portable radio 5 feet away without any antenna.

      This was a bit of a reverse proof of concept for the NSA's TEMPEST project [wikipedia.org]. TEMPEST listened to EM noise and tried to reconstruct what was on the screen (or in the chip, or whatever). This demo crafted a display resolution/image such that tones were the EM byproduct (the frequency math is sketched below).
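To put rough numbers on the trick described in the comment above, a small illustrative sketch: alternating runs of white and black pixels form a square wave whose fundamental lands at pixel_clock / (2 * run_length). The 25.175 MHz VGA dot clock and the 1 MHz target are example values, not taken from the original program.

```c
/* Illustrative only: given a known pixel clock, how long must each run of
 * white (then black) pixels be to put the square wave's fundamental at a
 * chosen radio frequency? The example numbers are invented.              */
#include <stdio.h>

int main(void)
{
    double pixel_clock_hz = 25.175e6; /* classic 640x480@60Hz VGA dot clock */
    double target_hz      = 1.0e6;    /* desired fundamental, ~1 MHz (AM)   */

    /* Pixels per half-cycle: length of each white (or black) run. */
    double run_length = pixel_clock_hz / (2.0 * target_hz);

    printf("run length  : about %.1f pixels white, then %.1f pixels black\n",
           run_length, run_length);
    printf("fundamental : %.3f MHz (plus odd harmonics at 3x, 5x, ...)\n",
           pixel_clock_hz / (2.0 * run_length) / 1e6);
    return 0;
}
```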

    • recently? http://bellard.org/dvbt/ [bellard.org]

  • More implications (Score:4, Interesting)

    by halivar ( 535827 ) <bfelger&gmail,com> on Friday May 08, 2015 @02:00PM (#49648785)

    If malware can do it, so can legitimate software, perhaps? Emergency tasks could keep running on CPU-pegged systems, maybe something like Windows Task Manager, if they were designed to run on the GPU instead of the CPU.

  • Linux rootkit (Score:3, Insightful)

    by Anonymous Coward on Friday May 08, 2015 @02:05PM (#49648835)
    Really, there's no reason to single this out as Linux. It could be done to any OS. I would imagine the first ones we see in the wild will target Windows.
    • Probably because of the lack of Linux GPU drivers. Might as well use it for something.

    • Really, there's no reason to single this out as Linux. It could be done to any OS. I would imagine the first ones we see in the wild will target Windows.

      Are you sure? Maybe it's easier to make rootkits for Linux.

      • Are you sure? Maybe it's easier to make rootkits for Linux.

        ...wouldn't you need a GPU driver that's worth a damn first?

    • Maybe because it's a yawner of a story if Windows gets infected? Everyone sort of expects Windows to be infected with whatever new malware comes along first. Or maybe it's just that the researchers use Linux, so it's more of a "what you know" thing.

      You're right though... No criminal would be stupid enough to release malware for an OS that commands 1.5% of the desktop market share. Real attacks against Linux are much more likely to target the LAMP stack or other server-side services.

      • by Delwin ( 599872 )
        Except for Android...
        • Well, when people talk about "Android" they tend to call it "Android", not "Linux". I'm pretty sure we're talking about PCs in this context, and since we need GPUs, that typically means desktop systems, not headless servers.

          • Unfortunately, due to the confusion about this, I have to call Linux GNU/Linux. It really sucks, because I hate to give RMS any credit.

            • Is anyone actually confused that you might be talking about Android when you say "Linux"? I really doubt it. I'm not going to call Linux "GNU/Linux", and neither are most people, because it's just too damned awkward. There's already enough confusion among many non-tech people about the distinction between Linux the kernel and the myriad Linux-based distributions.

              Anyhow, I tend to disagree with Stallman 9 times out of 10, but I don't have any problems giving him props for what he's done and what he believ

        • No criminal would be stupid enough to release malware for an OS that commands 1.5% of the desktop market share.

          Except for Android...

          Heh. Ever notice that Linux and Android are two distinct entities until somebody mentions market share?

          • They may as well be in this case. Unless it's a phablet, a tablet, one of a few high-end phone models, or suchlike, the percentage of Android phones in use that have a capable GPU is going to be fairly slim pickings (at least for now), and the GPUs that are out there aren't exactly computing powerhouses.

            On the latter I mean this: I could theoretically haul a full-sized sectional couch from point A to point B by strapping it atop someone's Mini Cooper, but I think they're gonna notice once they get on the freeway.

      • Maybe because it's a yawner of a story if Windows gets infected?

        More like it being easier to quantify everything on an open system than on one with obfuscated black boxes, a need for researchers to sign NDAs, or a worry that any criticism of the security of a closed platform is going to put you into deep shit via the DMCA (e.g. Adobe vs. Dmitry S. - no bail!).
        They can follow the thread in Linux, know what is going on at every step, and tell everyone. With MS there's at least going to

    • Really, there's no reason to single this out as Linux. It could be done to any OS. I would imagine the first ones we see in the wild will target Windows.

      Exactly - any Linux rootkit could easily be done on any OS, while rootkits for other OSs can't be done for Linux.

  • by Anonymous Coward on Friday May 08, 2015 @02:05PM (#49648837)

    Everyone knows there are no working video drivers on Linux!

  • IOMMU (Score:5, Interesting)

    by Anonymous Coward on Friday May 08, 2015 @02:21PM (#49648997)

    There's no mention of IOMMU devices in the article. An IOMMU is like an MMU for I/O; it remaps the memory accesses of any DMA device to a different area of physical memory, so that:
    * The DMA device can't misbehave, as in the article
    * A virtual machine can work directly with that DMA hardware device
    * The I/O device can be remapped to a memory region it might not otherwise support (e.g. a 6GB offset, from a 32-bit PCI card)

    But, the article doesn't say anything about IOMMUs. Does an IOMMU help at all against this vector? Does it completely block it, or only make the attacks slightly harder? Do modern computers, which mostly have IOMMUs available, make use of their IOMMUs to mitigate this at all?

    I'd be grateful if anyone who knows more about this could chime in (a quick sysfs check for active IOMMU groups is sketched below).
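Not a full answer, but a quick way to check the last part of the question above, whether your own machine is using its IOMMU at all, is to look for populated IOMMU groups under sysfs. The /sys/kernel/iommu_groups path is standard on Linux; the rest of this sketch is illustrative, and populated groups only tell you DMA remapping is active, not how well the GPU driver actually uses it.

```c
/* Minimal sketch: count the IOMMU groups the Linux kernel exposes. An
 * absent or empty directory means DMA remapping is not active (IOMMU
 * missing, disabled in firmware, or not enabled on the kernel command
 * line, e.g. intel_iommu=on on many Intel systems).                    */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path = "/sys/kernel/iommu_groups";
    DIR *d = opendir(path);
    if (!d) {
        printf("%s not present: kernel exposes no IOMMU groups\n", path);
        return 1;
    }

    int groups = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL)
        if (strcmp(e->d_name, ".") != 0 && strcmp(e->d_name, "..") != 0)
            groups++;
    closedir(d);

    if (groups == 0)
        printf("IOMMU groups directory is empty: DMA remapping inactive\n");
    else
        printf("%d IOMMU group(s) found: DMA remapping is in use\n", groups);
    return 0;
}
```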

    • I wish I had the points to mod this up since I am also very interested in this question.

      Exploits through hardware with too much access get discussed from time to time, but I never hear what realistic impact they can have relative to whatever the current state of IOMMU support is. I remember this same discussion, with the same missing data, regarding at least FireWire and Thunderbolt over the past several years, and those don't even require physical access to the machine's internals.

    • by Anonymous Coward

      This so-called exploit is vaporware. It requires a special kernel module to be loaded, i.e. you need root privileges in the first place to be able to map kernel memory to the GPU. On all modern GPUs it is impossible for a GPU program to alter the GPU page tables, and thus impossible to map random memory from a GPU program. Assuming a proper video driver.

      Now add the IOMMU on top of that and it becomes impossible for the device to access anything that has not been mapped inside the IOMMU for the GPU, and only th

      • Assuming a proper video driver.

        I think "assuming" is the keyword here.

        Now add the IOMMU on top of that and it becomes impossible for the device to access anything that has not been mapped

        IOMMU? You mean that thing Nvidia's engineers don't give a damn about because it won't bring them an extra 2 fps in Crysis 4 benchmarks?

        Yup, in a theoretically beautiful world, such an exploit shouldn't be doable.
        (And indeed, such protection has been added against other, similar attacks: DMA attacks aren't possible over FireWire anymore, for example.)

        But, in practice:
        - Currently, Nvidia only produces closed-source drivers for Linux (except for helping Nouveau a bit for Tegra

    • Sadly, the reality is that, unlike other similar situations (it's not possible to do a DMA attack over FireWire anymore), IOMMUs aren't yet much used by the binary proprietary drivers.
      Nvidia is mostly interested in having good performance. They won't waste resources on things that won't directly influence benchmark results.
      AMD is completely understaffed, so don't count on them either (at least not before they finish the transition to the AMDGPU kernel driver, and until the open-source community adds proper IOMMU sup

    • by LoRdTAW ( 99712 )

      The IOMMU does take care of the DMA problem, but I am betting this has something to do with how the GPU kernel talks to the OS kernel. The OS controls memory, so perhaps some driver exploit fools the kernel into reading the wrong memory. The GPU says "hey, I need this memory," and since the GPU driver lives in kernel space, it's possible it could randomly read protected memory.

  • My understanding of the GPU coding environment (not as a programmer, thank God; I just listened to job applicants and their presentations) is that it is quite limited, almost all interpreted code. Some strange combination of C-like code is written and then passed to renderers and shaders. It gets kind of "compiled" in place and executed. The binary code obfuscation that could be done on a plain, regular chipset is generally not possible on a GPU. So the source code of the malware itself might be visible
    • My understanding of the GPU coding environment (not as a programmer, thank God; I just listened to job applicants and their presentations) is that it is quite limited, almost all interpreted code. Some strange combination of C-like code is written and then passed to renderers and shaders. It gets kind of "compiled" in place and executed.

      Yep. Usually at application startup, shaders are compiled. Later in your renderer code you bind that shader just before you want to draw something using it.

      On the other hand, these GPU computing codes are so damned complex they might not need additional obfuscation.

      Not necessarily. Shaders can be very simple as well. They usually range from some tens to some hundreds of lines. Check out Shadertoy [shadertoy.com] to see some.

    • As I understand it (which is also rather little), the "interpretation" you are talking about is just for portability. When the application starts up, it asks the driver to compile the code for whatever the hardware is, and then the application copies that to video memory for execution. There are limitations on what that code can do, but only because of limitations of the GPU instruction set (not sure of the current state, but they historically avoided things like loops, etc., since they never wanted it to be po

    • by DrYak ( 748999 )

      Some strange combination of C-like code is written and then passed to renderers and shaders. It gets kind of "compiled" in place and executed.

      The thing is, OpenCL also uses a subset of C.
      That subset does include pointers (just like CUDA, btw).
      Which means that a piece of code can read any random memory location. (And that is how the hardware works under the hood.)
      You need to do extensive code analysis and tracking to check whether the memory location being read is a legitimate one.
      (Valgrind, AddressSanitizer, etc.)

      So without proper memory protection management (the IOMMU others are discussing), even in the simple C-like language accessible to Ope
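To illustrate the pointer point from the comment above, a hypothetical OpenCL C kernel fragment (it would be compiled and launched by a host program like the sketch near the top of the page; the names are invented). The language happily accepts pointer arithmetic past the end of the buffer it was handed; whether the read returns data or faults is decided by the driver's GPU page tables and the IOMMU, not by the compiler.

```c
/* Hypothetical OpenCL C kernel fragment, for illustration only. "haystack"
 * is a legitimately mapped buffer; "offset" is chosen by the caller. The
 * compiler does not stop the out-of-range access; only the GPU page
 * tables / IOMMU decide whether it succeeds.                             */
__kernel void peek(__global const uchar *haystack,
                   ulong offset,
                   __global uchar *out)
{
    size_t i = get_global_id(0);
    /* Pointer arithmetic walking past the buffer we were actually given. */
    out[i] = haystack[offset + i];
}
```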

  • Pointless (Score:2, Informative)

    by Anonymous Coward

    This is newsworthy? All it does is hide the original syscall pointers in the GPU. The hook code still needs to be visible to the CPU. Pointless/10. 1995 called and wants its rootkits back.

  • You have to have OpenCL drivers installed and up and running for this to affect you, so this will only affect a fraction of Linux installations (a quick check is sketched below).
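A quick way to see whether your box is in that fraction is to enumerate OpenCL platforms and GPU devices. This sketch uses only standard OpenCL API calls; if it prints no GPU devices, a GPU-resident rootkit of this kind has nothing to run on.

```c
/* Minimal sketch: list OpenCL platforms and their GPU devices. No output,
 * or no GPU devices, means no usable OpenCL GPU stack is installed.      */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, plats, &nplat) != CL_SUCCESS || nplat == 0) {
        printf("no OpenCL platforms found\n");
        return 1;
    }

    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256] = "";
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);

        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
            ndev = 0;
        printf("platform: %s (%u GPU device(s))\n", pname, ndev);

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256] = "";
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            printf("  gpu: %s\n", dname);
        }
    }
    return 0;
}
```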
  • They get really pissy when you mess with their GPU backdoors

  • by LightningTH ( 151451 ) on Saturday May 09, 2015 @12:45AM (#49651949)

    Please correct me if I am wrong.

    I looked through the code; it is not doing any form of hooking of system calls into the GPU so that they can be called directly without being detected. What it does do is call out to the GPU to XOR specific strings into a buffer stored on the GPU, to hide its log file. As far as I can tell, syscall[x].syscall_func is simply a pointer to a GPU function to call, which only has access to GPU memory. This is why each of the "hooked" functions has to transfer the data to be handled into the GPU. I don't see anywhere that the GPU is directly accessing CPU memory without there being a transfer from CPU to GPU.
