Linux Kernel Back-Door Hack Attempt Discovered 687
An anonymous reader writes "The BitKeeper to CVS gateway was apparently hacked in an attempt to add a root exploit back door to the Linux kernel, according to the linux-kernel archive. The change was in the file kernel/exit.c and changed the user ID of a process to root under the guise of checking the validity of some flags. The core Linux BitKeeper kernel repository was not at risk, and in fact it was the BitKeeper CVS export scripts that detected the unauthorized modifications to CVS. The changes were falsely attributed in CVS to long-time Linux developer davem (David Miller). Users of the BKCVS repository should resync their trees to remove the offending code if they had replicated it since yesterday."
Well well (Score:5, Insightful)
Re:Well well (Score:1, Insightful)
Most companies use pretty good CVS-like tools.
Yet another reason to use open source software (Score:3, Insightful)
more reason to sign patches? (Score:5, Insightful)
That way people who hack in won't be able to send in signed patches to the system [e.g. even if they physicially update the tree others can trivially spot the unsigned patches].
That would, of course, require people to actually think about security, rather than reason in terms of "oh sure, people won't hack it, because it hasn't been done... much... before."
Tom
I'm impressed (Score:0, Insightful)
Re:Well well (Score:4, Insightful)
Re:more reason to sign patches? (Score:1, Insightful)
Re:Yet another reason to use open source software (Score:3, Insightful)
Had this code come in through proper channels, I wouldn't be so sure that it would've been spotted. Most of the source code trojans people have found in the past were not well hidden, and turned up relatively quickly. The cases I'm referring to are the trojaned configure scripts that hit, I believe, irssi and dsniff, or was it fragroute... (it was definitely something by Dug Song)
If you would like to tout peer review, could you provide a valid example? They are probably out there, but I can't recall any, and that is not what happened here.
So how do we know that there is only one? (Score:3, Insightful)
What if a backdoor was installed last week, or last month, but was not caught?
The fact that this was possible once should really make people think about the possibility that it has happened ALREADY, and consider whether a systematic hunt through the code is necessary.
Instead, all we get is Microsoft Bashing...
Ugh
yet another reason for (CONSTANT == var) (Score:5, Insightful)
if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
In this case, it would make an attempted root hole more visible, as (0 = current->uid) would not compile.
Re:Well well (Score:3, Insightful)
Re:Microsoft (Score:5, Insightful)
The intent was probably that a CVS user get the bad version, work on other stuff, and send the diff (including the bad lines) to a maintainer in an otherwise good patch. However, the BKCVS gateway got confused by someone other than it changing the CVS, and complained, and Larry McVoy pointed out the issue, someone asked what the lines were, and other people figured out what they'd do. Now, of course, if someone had gotten that bit accidentally and submitted it to a maintainer, they'd notice, so the attempt seems to have failed.
Linus pointed out a benefit to using BK: even if the official BK repository were changed, he doesn't pull from it (because his local copy has all of his changes), and he would get an error the next time he pushed to it. The repository that would have to be attacked is actually his local disk, behind a firewall and not set up for anyone else to access at all.
If RMS wants to rant about revision control systems, he'll need to say that CVS needs to be replaced with a more functional alternative (Subversion, perhaps), not BK.
Re:Bad News (Score:5, Insightful)
Yes, everyone who's upset about exploits they haven't heard about, raise their hands...
Re:First of Many? (Score:5, Insightful)
Isn't the pertinent question... was this the first?
Re:yet another reason for (CONSTANT == var) (Score:1, Insightful)
Re:Well well (Score:5, Insightful)
And what if we just haven't discovered the code that got through yet...
You've got to ask - assume nothing.
+5, Tin-foil hat.
No one is mentioning this (Score:0, Insightful)
It was Bitkeeper, the closed source, unfree, anti-community product that caught the problem.
This isn't a triumph of 'many eyes' seeing this bad code in Linux, it was a failure of 'many eyes' not catching the problem in CVS.
Re:Well well (Score:4, Insightful)
Re:more reason to sign patches? (Score:1, Insightful)
All signatures would do is raise the bar a tiny bit and provide a false sense of security. Whoever pulled this off wouldn't be hindered in the least if the bar had been a little higher. At best you would be able to point a finger at the developer responsible for the cert, but why would the perpetrator care about that?
There is no magic bullet for this kind of thing. It's Open Source and the operative word is "open." Only because it's open was this caught. Closed source is even worse.
Re:Daaaammmmmnnnn.. (Score:4, Insightful)
What's the penalty under the law for putting a backdoor in an open-sourced software project?
None.
That's it. That's the list.
Re:So how do we know that there is only one? (Score:2, Insightful)
You're saying it's bad that a hack got caught before spreading into the wild, and that Microsoft is better because it at least lets backdoors get out where nobody can audit for them?
I don't get it...
You are the kind of person that makes accidents happen.
You don't point out the fact that the system worked by preventing a hack. You just say that the fact that there was an attempt means the system is unsafe...
That's kind of biased, I think...
Comment removed (Score:5, Insightful)
Re:Curious abot the hack, was it remote? (Score:5, Insightful)
I'm not so sure about that. Personally I would have put the brackets there even in the case of a normal test. They might not be necessary, but I trust brackets more than I trust my own ability to remember the precedence of every operator in C.
Re:more reason to sign patches? (Score:4, Insightful)
And if you have access to the key, remember that it's encrypted with a passphrase. Assuming it's 40 letters or longer (Something like "This is a passphrase that is long but easy to remember. I would just like to tell you, Mister Password Prompt, that nobody will guess this!"), you would have to try about 100^40 different passphrases. That's hard.
So basically, it's really hard to forge a digital signature. Harder than breaking into the BK server, anyway.
Re:No one is mentioning this (Score:5, Insightful)
The code was injected into a CVS tree, the box could have been compromised in another fashion, such as a wu-ftp hole or some such thing.
So please, don't throw the word exploit around as if you have 1/2 a clue about security. It just makes you look silly to those of us who do.
Re:Daaaammmmmnnnn.. (Score:5, Insightful)
Seriously, though - there are probably many laws by which it would be illegal. The cracker gained unauthorized access to a system and he vandalized data. And the obvious intent was to create a backdoor in many more systems. If they find this guy, he'll be in serious trouble. The guy he pretended to be could probably also sue him for something.
Re:more reason to sign patches? (Score:3, Insightful)
If a developer's machine is compromised (as I imagined in my post), getting the passphrase is trivial. How many different ways can you imagine discovering the keyring passphrase on a machine you have the ability to discreetly administer?
Just off the top of my head (and no astronomical exponents required):
Man-in-the-middle the keyring editor
Scarf it from memory
Monitor keystrokes
Harder than breaking into the BK server, anyway.
No one broke into a BK server. The BK content is routinely exported to a CVS server so that free software zealots have something to pull from that doesn't involve using BK. The backdoor was inserted directly into that CVS export (probably by compromising the CVS pserver) in the hope that someone who actually uses the CVS server would pick it up and submit the altered file as a patch into the BK tree. Several more things would have had to occur for the backdoor to have "worked": some developer would have had to pick up the altered file, submit that file to Linus, and have Linus (and ultimately everyone else) miss the backdoor and commit the change to BK. That's not easy, because using the assignment operator in a condition, as this backdoor did with if (foo = 0), is exactly the kind of thing reviewers are trained to spot.
Why on God's earth... (Score:2, Insightful)
Re:guide to getting rid of slashdot ads (Score:2, Insightful)
How do you think
Re:Well well (Score:4, Insightful)
They'd find their customers wanting timely patches and accountability.
Re:Why on God's earth... (Score:5, Insightful)
It's separate so they can screen CVS commits carefully.
Must Read: Ken Thompson's Turing Award Lecture (Score:3, Insightful)
Re:yet another reason for (CONSTANT == var) (Score:2, Insightful)
Re:Trusting Trust (Score:2, Insightful)
Thompson's hack worked because he was the only provider of both Unix and C. Nowadays, that's simply not the case.
Re:Well well (Score:5, Insightful)
And really, it's just more evidence that the Open Source model works. There is really nothing wrong with making a mistake, as long as you learn something from it and share what you learned with other people so they don't have to make the same mistake. Pretending you never make mistakes is another matter entirely.
Re:Well well (Score:5, Insightful)
Have you audited your motherboard BIOS? What about your network card - how do you know it doesn't have an IP stack on the ROM that dials home and dumps your network activity to someone? Hubs? Switches? Routers?
Do you really know what lives in your hard drive controller?
Re:yet another reason for (CONSTANT == var) (Score:3, Insightful)
Please tell me any plausible reason why the mistake is easier to see in the second case than in the first:
if (foo = 0) {...}
if (foo = x) {...}
Re:Well well (Score:3, Insightful)
Re:Well well (Score:4, Insightful)
1. We know that SCO have been looking very closely at the Linux source code.
2. We also know that none of the Linux boxes which serve major anti-SCO websites have been hacked into.
3. We can deduce therefore that SCO have not found any backdoors in the Linux source code.
While given their general level of (in)competence this doesn't amount to proof that there aren't any, it's probably a fairly safe bet.
And this, dear reader (Score:5, Insightful)
if(variable == CONSTANT) { }
Or the safe version that's so much harder to screw up and which turns out to be just as easy to read with practice:
if(CONSTANT == variable) { }
Do we all understand the real world significance of this now?
If you still want to advocate (variable == CONSTANT), then please feel free to prove that no accidental or abusive (variable = CONSTANT) exists in the kernel.
The list is good reading (Score:2, Insightful)
Re:Well well (Score:3, Insightful)
No, he's a low level heckler ... "Shut up! It's GNU linux." "Shut up! It's GNU linux." (repeat ad nauseum)
Re:Why on God's earth... (Score:1, Insightful)
The public BitMover version is just a published replica of that one; it isn't the primary.
There are derivatives for people who, for some (political) reason, don't want to use BK. One of those is BK2CVS, which gets overwritten daily from the primary data source.
That CVS tree is served publicly via a standard CVS pserver, which has a number of problems all of its own.
Nobody willing to publish source code from CVS should do it with the actual primary repository in r/w mode. That way is a clear invitation for trouble!
Re:Well well (Score:3, Insightful)
1) "Boi de piranha":
When you want to make a cattle herd cross a piranha-infested river, you will probably lose many cows (or oxen) -- the trick is to send a first ox into the river as a sacrifice; while the piranhas devour it, the rest can cross unharmed.
2) "Porteira que passa um boi, passa uma boiada":
If a gate is wide enough for passing an ox, it's wide enough for an entire herd.
-//-
Have a nice day!
Re:more reason to sign patches? (Score:3, Insightful)
As much as people like to bash it, it would be useful in the case you described.
NO.
While you are right that Palladium-like-hardware or trusted-whatever-like-hardware could be used as you suggest, there is still absolutely NO justification to forbid the owner of the machine from knowing his own master key. And that is the central design feature of Palladium/NGSCB and TCPA and Trusted-Computing.
There is NO POSSIBLE security benefit you can get by NOT KNOWING YOUR OWN KEY. The only possible purpose to forbid people to know their own key is for DRM and user lock-in.
New hardware is fine. Forbidding the owner of the machine from knowing his own master key is purely malicious. Palladium/whatever forbids the owner from knowing his own master key; therefore Palladium and the others are malicious.
-
Defensive coder's bullet-proofing technique (Score:2, Insightful)
An old trick, from my days writing with C compilers under MS-DOS: to force the compiler to catch this problem, put the constant (the zero, in this case) on the left-hand side of the 'equals' sign. C doesn't allow assignments to constants, and even a crude compiler will catch this.
In other words, when you meant
if (0 == x)
and typed a single 'equals' sign instead, you'll get a compile error rather than a silently compiled assignment.
Re:Well well (Score:2, Insightful)
My experience with proprietary software development is that in the beginning, there is a deadline. Then requirements get added to that deadline and code gets written until the deadline is reached. At which point the code is either shipped as is or the deadline is extended and the cycle begins again.
Show me a company that has the money to pay for developers to sit around and double-check every commit to a source tree. Show me a code review at a company where anyone besides the author has read the code before the meeting. Then I'll believe that THAT company COULD detect a hack like this.
Open source is not perfect, there's probably exploits in the kernel, intentional or otherwise, that have never been found and there always will be. But compared to deadline-driven "not-my-department" development, it's pretty darn impressive. Heck, I'm impressed that anyone ever discovered this.
P.S. Pay attention to the fellow who suggested (in another post) that we check other software for a companion hack for this exploit. Someone clever enough for this back-door undoubtedly has another somewhere to set those flags remotely. Maybe in the next version of CVS or another tool he used to insert the hack that was found.
Re:Wowzers (Score:1, Insightful)
Re:I wonder why not a remote root hack (Score:2, Insightful)
Re:I wonder why not a remote root hack (Score:2, Insightful)
Re:more reason to sign patches? (Score:3, Insightful)
You're really grasping at straws here, but ok...
Absolutely not. I design and build high-security systems for a living, and these sorts of issues are very, very real. When millions of dollars depend on a key's integrity, this stuff *matters*.
For that matter, if there's a way to compromise keys in bulk, the individual keys don't even have to be very valuable to make the attack worthwhile.
Say the paper-key comes sealed from the chip manufacturer.
Now there are a dozen points during the manufacturing process where the key could have been compromised, not to mention many, many hands that could have opened the paper en route, copied the key, and generated a new "sealed" copy to pass along. The processes used to generate, print and mail credit card PINs demonstrate just how hard it is to maintain security in a process like this, and those numbers aren't all that sensitive.
Even if we stipulate that the manufacturer is able to produce sealed documents without any possibility of compromise, how will you, the end user, be able to distinguish between the real thing and a well-executed forgery?
And if the paper-printed key was "stolen" back within the TCPA chip manufacturing plant then it could equally have been stolen within the chip manufacturing plant no matter what.
Now we're getting to the meat of it. Here's the crux of the matter: In a well-designed security system, the crucial keys *never* exist in cleartext outside of secure hardware. In the case of TCPA this is really easy because the chip has the ability to generate its own keys. This means that they can't be stolen at the manufacturing plant without essentially destroying the chip (well, barring side-channel attacks, but it's fairly easy for the manufacturer to protect against those. If you're not familiar with the concept, look it up. If you like security stuff, it's fascinating).
All high-security tokens utilize this same concept of keys that never leave the device, because it's really the only convenient way to maintain security of the keys. For the times when keys *must* leave the device, and cannot be encrypted (for example, during initial key exchange of symmetric keys), good systems still don't let them out in the clear. Instead, we use "secure key splitting" to break the key in to n parts, such that all n are required to reconstruct any information about the key, and those parts are then transferred separately, via separate routes to different people and kept apart until they can be entered into secure hardware, and reassembled there.
At the end of the day, the real bottom line, however, is that it is not only potentially bad to know your key, it's completely unnecessary. Banks use 3DES symmetric keys (called Zone Master Keys) to transport messages worth billions of dollars every day, and I guarantee you that no human has *ever* seen a single bit of any one of those keys.
No, what you want, as the owner of a key, is to be sure that you can use it and no one else can. Having it stored in a secure device that will never reveal it, but will employ it on your behalf, at your request is the best way to do that.
The problem at hand (that TCPA makes DRM possible), actually has nothing to do with whether or not the owner has a copy of the endorsement key. Your claim that you could use the endorsement key to decrypt DRM-protected data is incorrect, because the endorsement key will never be used to decrypt any data (to use it for decryption would be to destroy its security as an endorsement key. Think about why that is; it's interesting).
Here's how a TCPA chip might be used to implement a DRM system: