TCPA Support in Linux 501
kempokaraterulz writes "Linux Journal is reporting that 'The Trusted Computing Platform Alliance has published open specifications for a security chip and related software interfaces.' In the latest Gentoo Newsletter they talk about a possible 'Trusted Gentoo' and possible uses for hardware-level security."
Lacking One Thing (Score:5, Interesting)
Physical access to machines is always a big issue in security, and one that is often overlooked. And while it's probably not a big deal for your home machine, consider large companies whose machines could conceivably be targeted for a physical attack to recover the keys directly from the TPM (Trusted Platform Module).
Stajano's "Ubiquitous Computing" book has excellent coverage of the rationale, issues, and complexity of attempting to prevent physical access to chips and devices which store sensitive information. It's an easy read, and well worth it: http://www-lce.eng.cam.ac.uk/~fms27/secubicomp/in
Hardware Security (Score:3, Interesting)
Trusted Linux is ILLEGAL (Score:3, Interesting)
1. Linux is distributed under the GPL (and other licenses).
2. To comply with the GPL, end-users must be able to acquire the source code (which means everything they need to reproduce the binary executable, with or without modifications).
3. If you don't comply with the GPL, you are committing copyright infringement, a federal offense.
But from the other direction:
4. Trusted computing means that all binaries are signed with a secret key.
5. The Trusted CPU will not execute binaries that weren't signed with that key.
6. In this way, it is impossible for end-users to create modified binaries to add/remove features from the software.
The GPL is too much in conflict with Trusted Computing to ever allow them to work correctly together. To obey the GPL, end-users must have access to everything needed to rebuild working binaries, which includes the secret key. But for Trusted Computing to work, it must be impossible for end-users to get the key; otherwise there's no point.
So, Linux or Trusted Computing. Choose one, because you can't have both.
Software DRM (Score:3, Interesting)
The software DRM implementation would be 100% transparent to the application and no one would be the wiser.
It should also be workable with an x86 emulator running a closed source 'trusted' application along with its closed source OS, with the emulator doing the DRM instructions a little differently than normal.
Re:what is it good for? (Score:3, Interesting)
While I have a problem with the uses of this platform that Microsoft no doubt intends, TCPA can be quite useful for making secure systems based on open standards.
One part of these modules is the ability to send keys to the hardware module in a way that cannot be read back out (but with encryption performed using this write-only data). This allows public-key encryption with the private key stored in a very secure way.
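A toy Python sketch of that write-only property (names and the HMAC stand-in are my own; real TPMs hold RSA private keys and expose sign/decrypt commands, not Python methods):

```python
import hashlib
import hmac

class WriteOnlyKeySlot:
    """Toy model of a TPM-style key slot: a key can be loaded but
    never read back out; only operations that *use* the key are
    exposed. (HMAC stands in for the real chip's RSA operations.)"""

    def __init__(self):
        self._key = None  # private; no getter is ever provided

    def load_key(self, key: bytes) -> None:
        # Write-only from the caller's perspective.
        self._key = key

    def sign(self, message: bytes) -> bytes:
        # The key is used internally but never leaves the slot.
        if self._key is None:
            raise RuntimeError("no key loaded")
        return hmac.new(self._key, message, hashlib.sha1).digest()

slot = WriteOnlyKeySlot()
slot.load_key(b"secret key material")
sig = slot.sign(b"hello")
# The slot exposes no way to read the key back out,
# yet signatures can still be produced (and verified elsewhere).
```

The point of the design is exactly this asymmetry: anything on the host can ask the chip to encrypt or sign, but nothing on the host can exfiltrate the key itself.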
Here comes the flood?? (Score:5, Interesting)
The "trusted" boot functions provide the ability to store in Platform Configuration Registers (PCR), hashes of configuration information throughout the boot sequence. Once booted, data (such as symmetric keys for encrypted files) can be "sealed" under a PCR. The sealed data can only be unsealed if the PCR has the same value as at the time of sealing. Thus, if an attempt is made to boot an alternative system, or a virus has backdoored the operating system, the PCR value will not match, and the unseal will fail, thus protecting the data.
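The extend/seal/unseal mechanics quoted above can be sketched in a few lines of Python. This is a toy model, not the TPM command set: a real TPM encrypts sealed data against the PCR state, whereas this version just compares PCR values.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new PCR = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

def boot(stages):
    """Hash each boot stage into the PCR, in order."""
    pcr = b"\x00" * 20  # PCRs reset to zero at power-on
    for stage in stages:
        pcr = extend(pcr, hashlib.sha1(stage).digest())
    return pcr

def seal(pcr, data):
    # Toy seal: bind the data to the current PCR value.
    return (pcr, data)

def unseal(blob, pcr):
    # Release the data only if the PCR matches the sealing-time value.
    sealed_pcr, data = blob
    if pcr != sealed_pcr:
        raise PermissionError("PCR mismatch: unseal fails")
    return data

trusted = boot([b"BIOS v1", b"bootloader v2", b"kernel v3"])
blob = seal(trusted, b"disk encryption key")

# Same boot chain: unseal succeeds.
assert unseal(blob, boot([b"BIOS v1", b"bootloader v2", b"kernel v3"])) == b"disk encryption key"

# Booting a different kernel yields a different PCR, so unseal fails.
try:
    unseal(blob, boot([b"BIOS v1", b"bootloader v2", b"evil kernel"]))
except PermissionError:
    pass  # the data stays sealed
```

Because extend is a chained hash, any change anywhere in the boot sequence changes the final PCR, which is what makes the multi-boot worry below concrete: a second OS produces a different measurement chain by definition.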
At the very least, that sounds like "bye-bye multi-boot systems".
IBM also has a rebuttal to TCPA's detractors [PDF] [ibm.com]. This one talks more about how the TCPA chip as currently designed has "not been designed to resist local hardware attack, such as power analysis, RF analysis, or timing analysis." That's all well and good for the moment, and the chip is (per the PDF) mounted on a presumably-removable daughterboard, but how about the future? Is this how TCPA will stay, or is it the beginning of our worst fears??
At least these two whitepapers agree with most of us here on one thing -- DRM itself is stupid, for a variety of reasons.
Re:Here comes the flood?? (Score:1, Interesting)
Perhaps Trusted Longhorn SP3 will lock you out due to a different PCR after you install it over Trusted Longhorn SP2.
Re:Here comes the flood?? (Score:4, Interesting)
And as you point out (re SP3 over SP2) -- what's to stop the OS from refusing to play nice if it doesn't encounter the PCR that it expects to see?? Might you have to provide your PCR when the OS is activated, and then you only get updates if the PCR still matches??
Can, meet worms.
Official Gentoo Dev Response (Score:1, Interesting)
Re:Finally ready for the main stream (Score:4, Interesting)
It has been my understanding that trusted computing does not automatically equal DRM. Trusted computing is initially a neutral technology: the barriers are built up only after someone chooses a side for the chip. You can let Microsoft turn your PC into a DRM environment using TCPA's technology, but that's the Microsoftish / {MP,RI,??}AA'ish approach. You can also use TCPA to turn your Linux box into a hardware-reinforced installation of your choice. If TCPA were widespread, you could for example control how the bastard big co. digitally uses, views and copies your personal information when you buy something on their website.
Re:Here comes the flood?? (Score:5, Interesting)
This means that the entire security of the boot process hangs on whatever data the CPU feels like sending to the chip for hashing. I could just as well make a patch for GRUB that sends the measurement of the "secure" version of GRUB down the SMBus and actually executes whatever nastiness I have in store.
In the case of DRM this lets me run whatever OS I want. The only thing I have to do is to feed a copy of whatever OS Hollywood trusts to the chip and voila the chip will say I'm legit and Hollywood will give me access to their movies for me to pirate at my leisure. :)
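Using the same extend-is-just-a-hash model as above (a toy sketch, with made-up image names), the attack in this comment is trivially expressible: the chip only ever sees what it is fed, so a lying loader can measure one image and execute another.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new PCR = SHA-1(old PCR || measurement)
    return hashlib.sha1(pcr + measurement).digest()

ZERO = b"\x00" * 20          # PCR value at power-on
trusted_os = b"OS Hollywood trusts"
evil_os = b"my patched OS"

# Honest boot loader: measures the code it actually jumps to.
honest_pcr = extend(ZERO, hashlib.sha1(trusted_os).digest())

# Dishonest loader: sends the *trusted* image's hash to the TPM
# over the bus, then executes the patched image instead.
lying_pcr = extend(ZERO, hashlib.sha1(trusted_os).digest())
actually_running = evil_os

# The chip cannot tell the difference: the PCRs are identical, so
# any attestation based on them would vouch for the evil OS too.
assert lying_pcr == honest_pcr
```

This is exactly why the poster argues the measurement has to happen somewhere the attacker can't interpose, i.e. inside the CPU itself.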
As I see it, the only way to get this to work for real is if Intel steps up and builds TCPA support into the CPU itself such that the PCR register is continuously updated as each instruction is executed. And all existing external chips have to be blacklisted, of course.
Or does the TCPA system have some other trick up their sleeve that makes this work even though it's implemented externally to the CPU?
/greger
Re:Here comes the flood?? (Score:3, Interesting)
And it does make one wonder if a VM that's wise to the TCPA chip might be a solution to the "handcuffed" machine that Alsee (http://slashdot.org/~Alsee [slashdot.org]) often predicts as the end result of TCPA. If the CPU gets involved, perhaps the "freed" OS could run on a second non-TC CPU on an add-on card, sort of like the old way to run Windows on a Mac??
Just throwing out ideas, some of them possibly cracked. Feel free to add glue as needed.
Re:Do we really need it ? (Score:4, Interesting)
Oh, sure. TCPA can protect against OS bugs? Keep on dreaming, must be nice.
TCPA means that signed software can run with full permission. It only stops intentional exploits (programs specifically designed to infringe copyright), not accidental ones (buffer overflows or cross-site scripting).
To block such things, there are many well-known techniques that can be applied: privilege separation, data tainting, external error trapping, etc. But all of those can be implemented in software alone, without help from TCPA or any other hardware. Conversely, TCPA without those significant software changes gives zero benefit.
The only people TCPA might protect are those who put themselves at risk by running slapdash amateur software like Linux and OpenBSD, instead of staying with known quality brands like Microsoft, where security is job N!
PS. Incidentally, the flaws in your argument are directly analogous to those in George W. Bush's Social Security plan. In both cases, to prevent a vague danger, he suggests doing 2 different activities, when really only one of them goes towards solving the difficulty at all; the other just serves his ideological agenda (and is more elaborate and expensive, to boot).
NO SIGNED CODE (Score:3, Interesting)
The truth is that the TCG spec says nothing about signed code. There are no limitations in TCG that keep you from running unsigned code. There is no distinction between "secure" and "insecure" code. You can run anything you like. Signing is a complete red herring in this discussion.
I am not trying to gloss over problems or paint a false picture. The truth is that TCG does have features whose effects are somewhat like what people are worried about with signed code. The result is that TCG could be helpful for DRM, and it might make it impossible to download music from an online store without running a special application, for example. But this would not be because "you can only run signed code". Rather, it is the server that decides whether it wants to talk to you, not your computer deciding what you can and cannot run.
What's the difference? Well, if your main concern is being able to run hacked clients that will allow you to violate your user agreements, then there is no difference. You would be right to oppose Trusted Computing. It will make it harder to pretend to honor an agreement and then break your word.
But if your main concern is about the GPL and what software you run, there is a big difference. There are no limits on the software you can run. You can hack your Linux kernel to do whatever you want. You can disable "secure" features in the software you run. These privileges don't go away when there is a TPM chip. That should put to rest the concerns about the GPL and hopefully end the discussion about who signs what code.
If you're wondering how these two points of view can be compatible, you need to learn more about the TCG spec and the TPM chip. In a nutshell, the TPM chip, with the cooperation of the BIOS and OS software, takes a hash or fingerprint of the software configuration as the computer boots. It can then report this fingerprint to remote servers, if client software requests it. These reports are signed with an on-chip TPM key that never leaves the chip; and this chip has a certificate from the computer manufacturer, so no emulator can fake these reports (called remote attestations).
That's how it works. It's a lot more complicated than refusing to run unsigned code. What it comes down to is that software can report its configuration in a believable and, yes, trustable way. That's the real reason this is called Trusted Computing, not the lie made up by Ross Anderson. It's Trusted because you can Trust the reports from a remote system about what software it is running, and therefore what it will do.
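The quote-and-verify handshake described in this comment might be sketched like so. This is a toy model under loudly stated assumptions: HMAC with a shared `TPM_KEY` stands in for the real chip's RSA attestation key, where the verifier would hold only the public half plus the manufacturer's certificate; all names here are invented.

```python
import hashlib
import hmac
import os

TPM_KEY = b"never leaves the chip"  # stand-in for the on-chip attestation key

def quote(pcr: bytes, nonce: bytes) -> bytes:
    """TPM 'quote': sign the PCR value plus a server-supplied nonce.
    (Real TPMs sign with RSA; HMAC stands in for the signature here.)"""
    return hmac.new(TPM_KEY, pcr + nonce, hashlib.sha1).digest()

def server_verify(pcr: bytes, nonce: bytes, sig: bytes,
                  expected_pcr: bytes) -> bool:
    """The *server* decides whether the reported configuration is one
    it trusts; nothing on the client is prevented from running."""
    genuine = hmac.compare_digest(
        sig, hmac.new(TPM_KEY, pcr + nonce, hashlib.sha1).digest())
    return genuine and pcr == expected_pcr

nonce = os.urandom(20)                       # fresh challenge, prevents replay
my_pcr = hashlib.sha1(b"stock kernel").digest()
sig = quote(my_pcr, nonce)

# A hacked client can *report* any PCR it likes, but without the
# on-chip key it cannot forge a signature the server will accept.
assert server_verify(my_pcr, nonce, sig, expected_pcr=my_pcr)
assert not server_verify(my_pcr, nonce, b"\x00" * 20, expected_pcr=my_pcr)
```

Note where the gate sits in this sketch: entirely in `server_verify`, on the remote end, matching the poster's point that it's the server refusing to talk, not your computer refusing to run code.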
Re:Do we really need it ? (Score:4, Interesting)
I might want only a limited set of applications accessing a certain storage area.
Imagine a trusted P2P application that will only interconnect with the same trusted application? The trust works both ways. Just like the RIAA thinks they can "trust" their software running on my computer not to be of my own creation, or a tampered version of their software, I can "trust" that MY software running on the RIAA's computer is similarly my original code, not tampered with or substituted.
Comment removed (Score:3, Interesting)
You can't do that. (Score:2, Interesting)
Probably an easier way is to have a hacked memory module that lets you change the contents with some kind of hardware interface.
If the memory and all buses in the computer are encrypted, then you're out of luck, but this is not currently in the spec.
Re:As sad as it is (Score:2, Interesting)
It's like what Aragorn said to the soldiers outside of the Black Gates, "I see in your eyes the same fear that would take the heart of me. There will be a day when the courage of men fails, we forsake our friends and break all bonds of fellowship, but it is not this day! An hour of wolves and shattered shields, when the age of men comes crashing down, but it is not this day! On this day we fight! By all that you hold dear on this good earth, I bid you stand, men of the West."