


Kernel.org Attackers Didn't Know What They Had

Trailrunner7 writes "The attack that compromised some high-value servers belonging to kernel.org — but not the Linux kernel source code — may have been the work of hackers who simply got lucky and didn't realize the value of the servers that they had gotten their hands on. The attackers made a couple of mistakes that enabled the administrators at kernel.org to discover the breach and stop it before any major damage occurred. First, they used a known Linux rootkit called Phalanx that the admins were able to detect. And second, the attackers set up SSH backdoors on the compromised servers, which the admins also discovered. Had the hackers been specifically targeting the kernel.org servers, the attack probably would've looked quite different." A few blog posts in the wake of the attack have agreed with the initial announcement; while it was embarrassing, the integrity of the kernel source is not in question.
  • by Samantha Wright ( 1324923 ) on Saturday September 03, 2011 @03:27PM (#37297690) Homepage Journal
    Here [bell-labs.com] is what it's referring to. CS graduates are expected to recognize instances of it instinctively.
  • Re:Wishful thinking? (Score:5, Informative)

    by MimeticLie ( 1866406 ) on Saturday September 03, 2011 @03:43PM (#37297802)
    Did you bother to read either of the two [linux-foundation.org] links [blogspot.com] in the summary about that very topic?

    Basically, the nature of Git makes it very unlikely that someone could insert malicious code into the kernel via kernel.org without someone noticing.
  • by Anonymous Coward on Saturday September 03, 2011 @03:43PM (#37297806)

    Because in git, everything has a hash checksum that is not forgeable (try producing something that compiles, does something specifically malicious, and still has the same checksum and size as before).

    And then there are thousands of copies of the entire thing around. I also have one. A simple comparison for equality will work.
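    The integrity property the two comments above describe comes from how git names its objects: a file's ID is the SHA-1 of a small header plus the file's exact bytes, and trees and commits hash the IDs below them, so a single changed byte ripples all the way up to the commit ID. A minimal sketch of the blob-hashing step (the file contents here are invented examples):

    ```python
    import hashlib

    def git_blob_hash(content: bytes) -> str:
        """Compute the object ID git assigns to a file's contents.

        Git hashes the header "blob <size>\\0" followed by the raw bytes,
        so any change to the file -- even one byte -- yields a different
        ID, and that ID is folded into every tree and commit above it.
        """
        header = b"blob %d\x00" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    original = b"int main(void) { return 0; }\n"
    tampered = b"int main(void) { return 1; }\n"

    print(git_blob_hash(original))
    print(git_blob_hash(original) != git_blob_hash(tampered))  # True -- any edit changes the ID
    ```

    This is the same value `git hash-object` reports, which is why comparing a local clone against kernel.org is a simple equality check over commit IDs.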

  • by Anonymous Coward on Saturday September 03, 2011 @03:45PM (#37297814)

    Any random machine with SSH enabled gets around 2-3 brute force attempts per day from automated zombies. Most likely kernel.org was breached in an automated attack. I've protected my server with DenyHosts, which blocks your IP address (via /etc/hosts.deny) after 5 failed passwords... indefinitely. kernel.org should either install something like it or switch to public-key authentication. Leaving password authentication exposed to the Internet without any protection is really stupid.
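    The DenyHosts approach the comment describes boils down to counting failed logins per source address in the sshd log and banning repeat offenders. A toy sketch of that decision logic (the IPs and log lines are invented examples; the real tool writes offenders to /etc/hosts.deny rather than returning a set):

    ```python
    from collections import defaultdict

    def scan_auth_log(lines, limit=5):
        """Toy DenyHosts-style scanner: count failed SSH logins per
        source IP and return the addresses that hit the limit."""
        failures = defaultdict(int)
        blocked = set()
        for line in lines:
            if "Failed password" in line:
                # sshd logs "... Failed password for <user> from <ip> port ..."
                ip = line.rsplit("from ", 1)[1].split()[0]
                failures[ip] += 1
                if failures[ip] >= limit:
                    blocked.add(ip)
        return blocked

    sample = ["sshd[4242]: Failed password for root from 203.0.113.7 port 50000 ssh2"] * 6
    sample += ["sshd[4243]: Failed password for invalid user admin from 198.51.100.9 port 2222 ssh2"] * 2
    print(sorted(scan_auth_log(sample)))  # ['203.0.113.7']
    ```

    Note that this only stops online password guessing; it does nothing against the credential-theft path described further down the thread, which is one argument for public-key authentication instead.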

  • by Dahamma ( 304068 ) on Saturday September 03, 2011 @04:12PM (#37297940)

    2-3? Mine sometimes gets hundreds. It's pretty ridiculous these days.

  • by jimicus ( 737525 ) on Saturday September 03, 2011 @05:35PM (#37298428)

    I'm not a programmer, but I'm not entirely ignorant either... which leaves me with a question... Assuming that the Kernel was compromised, and the scenario you describe came into being. Isn't it just a matter of examining the Kernel code until you find the naughty bits and expunge them? Or are you basing your nightmare on this infiltration not being detected?

    Not at all - the paper describes how one could write a custom C compiler which would automagically insert malicious code when it saw a particular pattern.

    With a properly crafted attack you couldn't even compile your own "known clean" compiler - the attack takes advantage of the fact that most modern C compilers can't be compiled without an existing C compiler of some sort. Provided the existing compiler is the malicious one, all it needs to do is insert its own malicious payload as part of the compiler compilation process and every subsequent compiler on that system is equally malicious even though the source code is perfectly clean.
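    The self-propagation step described above can be modeled in a few lines. This is a toy Python sketch of Thompson's "trusting trust" mechanism, not real compiler code; every name here is invented for illustration. "Compiling" is reduced to copying source into a "binary" string, with the compiler quietly rewriting two patterns it recognizes:

    ```python
    BACKDOOR = "backdoor(); "

    def evil_compile(source: str) -> str:
        """Toy trusting-trust compiler: emits a faithful 'binary' except
        for two patterns it silently rewrites."""
        binary = source
        # Pattern 1: the login program -- splice in a backdoor call.
        if "check_password" in source:
            binary = binary.replace("check_password", BACKDOOR + "check_password")
        # Pattern 2: a clean compiler -- re-insert this whole attack, so
        # the freshly built compiler is just as malicious as this one,
        # even though its source is perfectly clean.
        if "def compile" in source:
            binary += "\n# [attack payload re-inserted by evil_compile]"
        return binary

    login_src = "authenticated = check_password(user, password)"
    clean_compiler_src = "def compile(src):\n    return src"

    print(BACKDOOR in evil_compile(login_src))                        # True
    print("payload re-inserted" in evil_compile(clean_compiler_src))  # True
    ```

    The second pattern is the crux: auditing `clean_compiler_src` finds nothing, yet every compiler built with the infected one carries the payload forward.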

  • by beuges ( 613130 ) on Saturday September 03, 2011 @05:36PM (#37298434)

    From what I read, one of the guys with a kernel.org login (HPA, I believe) had his personal machine infected by a trojan. The attackers were then able to login to kernel.org impersonating him. They then used a local-only exploit to get root.

    This is why a local-only exploit is just as bad as a remote exploit. If your machine connects to a network, it has the potential to be compromised by a local-only exploit, by first exploiting a flaw in a completely unrelated program which is accessible remotely. In this case, the "flaw" was the compromised user account. It could have been a buffer overflow in an ftp or web server, which doesn't allow for privilege escalation on its own, but allows arbitrary code to be run as the current user... all the attacker has to do is make that arbitrary code trigger the local-only exploit, and your local-only exploit is now a remote one.

    It's sad that so many people on slashdot keep playing local exploits down, or keep saying things like 'well it doesn't matter if my linux mail program has a flaw - the worst that can happen if I open a dodgy attachment is they wipe out my user directory, the rest of the system is safe'. Nothing could be further from the truth. It's harder, yes, but not impossible to chain a bunch of vulnerabilities together so that your local-only exploit becomes remotely accessible.

    This is why Linus doesn't like to classify bugs as security bugs vs other bugs. All bugs are potentially security bugs.

  • by John Hasler ( 414242 ) on Saturday September 03, 2011 @06:25PM (#37298758) Homepage

    Because in git, everything has got a hash checksum that is not forgeable...

    And every commit is signed. When the Debian kernel maintainers fetch a new kernel (from Git, not a tarball) they verify the signature on every new commit. The integrity of the Linux kernel does not depend on anything as brittle as a sacred master copy.

  • by RobbieThe1st ( 1977364 ) on Saturday September 03, 2011 @07:11PM (#37299034)

    He's being funny, in case you can't tell.

  • by dutchwhizzman ( 817898 ) on Sunday September 04, 2011 @01:24AM (#37300838)
    It is bad, but there is a mitigation. It requires two steps instead of just one to get root access. Given that you usually layer your security and have logging/accounting and tripwire-style alarms set up, you have a better chance of catching intruders before they get access to anything really dangerous.

    If you admin thousands of systems, used by many more users, you will get compromised accounts on a fairly regular basis. Those accounts will generally be used to try to get root access. By setting up logging, accounting and various other tools, you tend to get a lot of the compromised accounts to trigger an alarm before they get root, or run their code as that user. With a remote root vulnerability, you get none of those chances.

    Any privilege escalation is something to take seriously, but crying wolf that local exploits are just as bad as remote ones will make fewer people take you seriously.
