
TCPA Support in Linux

kempokaraterulz writes "Linux Journal is reporting that 'The Trusted Computing Platform Alliance has published open specifications for a security chip and related software interfaces.' In the latest Gentoo Newsletter they talk about a possible 'Trusted Gentoo' and possible uses for hardware-level security."
  • Lacking One Thing (Score:5, Interesting)

    by SpottedKuh ( 855161 ) on Wednesday February 02, 2005 @01:51PM (#11552286)
    Though the specifications detailed in the article are definitely a Good Thing, they lack (at least as far as I could tell) any way of preventing unauthorized physical access to the chip.

    Physical access to machines is always a big issue in security, and one that is often overlooked. And while it's probably not a big deal for your home machine, consider large companies whose machines could conceivably be targeted for a physical attack to recover the keys directly from the TPM (Trusted Platform Module).

    Stajano's "Ubiquitous Computing" book has excellent coverage of the rationale, issues, and complexity of attempting to prevent physical access to chips and devices which store sensitive information. It's an easy read, and well worth it: http://www-lce.eng.cam.ac.uk/~fms27/secubicomp/index.html [cam.ac.uk]
  • Hardware Security (Score:3, Interesting)

    by quadra23 ( 786171 ) on Wednesday February 02, 2005 @01:52PM (#11552296) Journal
    This is indeed good news! Security based solely on software is far easier to compromise than security rooted in hardware (provided that the hardware can't be tampered with by malicious software). Far better to have the security co-ordinated between the two. I'd be interested to see how widely accepted this open specification will be.
  • by Minna Kirai ( 624281 ) on Wednesday February 02, 2005 @02:00PM (#11552393)
    It's very simple:
    1. Linux is distributed under the GPL (and other licenses).
    2. To comply with the GPL, end-users must be able to acquire the source code (which means everything they need to reproduce the binary executable, with or without modifications).
    3. If you don't comply with the GPL, you are committing copyright infringement, a federal offense.

    But from the other direction:
    4. Trusted computing means that all binaries are signed with a secret key.
    5. The Trusted CPU will not execute binaries that weren't signed with that key.
    6. In this way, it is impossible for end-users to create modified binaries to add/remove features from the software.

    The GPL is too much in conflict with Trusted Computing to ever allow them to work correctly together. To obey the GPL, end-users must have access to everything needed to rebuild working binaries -- which includes the secret key. But for Trusted Computing to work, it must be impossible for end-users to get the key -- otherwise there's no point.

    So, Linux or Trusted Computing. Choose one, because you can't have both.
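
    A minimal sketch of the model the parent describes, assuming a hypothetical vendor keypair: the verifying (public) half could ship in hardware, but the signing (private) half never leaves the vendor, which is exactly the asymmetry the GPL argument turns on. Nothing here is taken from the TCG spec; the names are illustrative.

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Hypothetical vendor keypair; only the public half would be burned
    # into the "trusted" CPU.
    vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    cpu_pubkey = vendor_key.public_key()

    binary = b"\x7fELF...program image..."
    signature = vendor_key.sign(binary, padding.PKCS1v15(), hashes.SHA256())

    def trusted_exec(image, sig):
        """Refuse to run anything the vendor didn't sign."""
        try:
            cpu_pubkey.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            raise PermissionError("unsigned or modified binary")
        print("executing signed binary")

    trusted_exec(binary, signature)             # vendor build: runs
    try:
        trusted_exec(binary + b"!", signature)  # user-modified build: refused
    except PermissionError as e:
        print(e)
    ```

    Under this model a user can rebuild the source all day long, but without vendor_key the result never passes trusted_exec -- which is the parent's point.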
  • Software DRM (Score:3, Interesting)

    by Yartrebo ( 690383 ) on Wednesday February 02, 2005 @02:03PM (#11552439)
    Since the source is available for Linux, what would stop someone from sandboxing 'trusted' software by having the OS validate code before it's executed (slow, though a bit faster than emulation and without all the bugs), and then implementing the DRM hardware (or BIOS) instructions in software in a way that stores the keys (or the plaintext, if the keys aren't recoverable) and hands them out to any software that asks?

    The software DRM implementation would be 100% transparent to the application, and no one would be the wiser.

    It should also be workable with an x86 emulator running a closed-source 'trusted' application along with its closed-source OS, with the emulator doing the DRM instructions a little differently than normal.
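
    A minimal sketch of that interception idea: an emulator's handler for a hypothetical "unseal" operation that behaves exactly like the real hardware from the application's point of view, but quietly exports the recovered key material. The instruction name and the toy XOR "sealing" scheme are invented for illustration.

    ```python
    leaked_secrets = []

    def emulated_unseal(sealed_blob, real_unseal):
        """Emulator hook standing in for a hardware DRM/TPM unseal instruction."""
        plaintext = real_unseal(sealed_blob)  # do exactly what the chip would do
        leaked_secrets.append(plaintext)      # ...but keep a copy for ourselves
        return plaintext                      # the caller can't tell the difference

    # Toy "sealing" scheme (XOR is its own inverse), purely for demonstration:
    seal = lambda blob: bytes(b ^ 0x42 for b in blob)

    print(emulated_unseal(seal(b"content key"), seal))  # b'content key'
    print(leaked_secrets)                               # [b'content key']
    ```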
  • by Jennifer E. Elaan ( 463827 ) on Wednesday February 02, 2005 @02:19PM (#11552639) Homepage
    Actually, it *does* include many enhanced cryptography features that cannot be designed entirely in software.

    While I have a problem with the uses of this platform that Microsoft no doubt intends, TCPA can be quite useful for making secure systems based on open standards.

    One part of these modules is the ability to send keys to the hardware module in a way that cannot be read back out (but with encryption performed using this write-only data). This allows public-key encryption with the private key stored in a very secure way.
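
    A sketch of that interface shape, with a software stand-in: a slot that keys can be written into and used through, with no read path. Python can only gesture at this (a determined caller can still reach the attribute); the point of the TPM is that silicon enforces it.

    ```python
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    class KeySlot:
        """Write-only key storage: load and use the key, never read it back."""
        def __init__(self):
            self.__key = None            # deliberately no getter

        def load(self, private_key):     # the "send the key in" half
            self.__key = private_key

        def sign(self, message):         # the operation happens inside the slot
            return self.__key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    slot = KeySlot()
    slot.load(rsa.generate_private_key(public_exponent=65537, key_size=2048))
    sig = slot.sign(b"attest this")      # usable, yet the key never leaves
    ```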

  • by Reziac ( 43301 ) * on Wednesday February 02, 2005 @02:35PM (#11552910) Homepage Journal
    From TF WhitePaper [PDF] [ibm.com] on IBM's site:

    The "trusted" boot functions provide the ability to store in Platform Configuration Registers (PCR), hashes of configuration information throughout the boot sequence. Once booted, data (such as symmetric keys for encrypted files) can be "sealed" under a PCR. The sealed data can only be unsealed if the PCR has the same value as at the time of sealing. Thus, if an attempt is made to boot an alternative system, or a virus has backdoored the operating system, the PCR value will not match, and the unseal will fail, thus protecting the data.

    At the very least, that sounds like "bye-bye multi-boot systems".

    IBM also has a rebuttal to TCPA's detractors [PDF] [ibm.com]. This one talks more about how the TCPA chip as currently designed has "not been designed to resist local hardware attack, such as power analysis, RF analysis, or timing analysis." That's all well and good for the moment, and the chip is (per the PDF) mounted on a presumably-removable daughterboard, but how about the future? Is this how TCPA will stay, or is it the beginning of our worst fears?

    At least these two whitepapers agree with most of us here on one thing -- DRM itself is stupid, for a variety of reasons.
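
    A sketch of the seal/unseal behaviour the whitepaper describes, using the TPM 1.2 extend rule (PCR = SHA1(PCR || SHA1(measurement))). The seal() here only models the comparison, not the encryption a real TPM performs.

    ```python
    import hashlib

    def extend(pcr, measurement):
        # TPM-1.2-style extend: fold each boot stage into the register
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

    def seal(pcr, secret):
        return (pcr, secret)             # real TPMs encrypt; this models the check

    def unseal(blob, current_pcr):
        expected, secret = blob
        if current_pcr != expected:
            raise PermissionError("PCR mismatch: unseal refused")
        return secret

    pcr = b"\x00" * 20
    for stage in (b"BIOS", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)
    blob = seal(pcr, b"disk encryption key")

    # Boot anything else -- a second OS, a patched kernel -- and the chain
    # yields a different value, so the unseal fails:
    other = b"\x00" * 20
    for stage in (b"BIOS", b"bootloader", b"other kernel"):
        other = extend(other, stage)
    unseal(blob, other)                  # raises PermissionError
    ```

    Which is exactly why both multi-boot and routine upgrades (see the next comment) look like attacks to it.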

  • by Anonymous Coward on Wednesday February 02, 2005 @02:42PM (#11553014)
    Wonder how well the PCR handles kernel upgrades.
    Perhaps Trusted Longhorn SP3 will lock you out due to a different PCR after you install it over Trusted Longhorn SP2.
  • by Reziac ( 43301 ) * on Wednesday February 02, 2005 @02:52PM (#11553122) Homepage Journal
    That's a good point. What's the difference between a "different" OS and an "upgraded" OS? A: Nothing -- either way, it still won't match the "original" OS.

    And as you point out (re SP3 over SP2) -- what's to stop the OS from refusing to play nice if it doesn't encounter the PCR that it expects to see?? Might you have to provide your PCR when the OS is activated, and then you only get updates if the PCR still matches??

    Can, meet worms.

  • by codergeek42 ( 792304 ) <peter@thecodergeek.com> on Wednesday February 02, 2005 @02:58PM (#11553205) Homepage Journal
    link to forums post [gentoo.org]

    There are no formal plans to support this in any shape or form at this time.
  • by yason ( 249474 ) on Wednesday February 02, 2005 @03:21PM (#11553553) Journal
    It really makes me happy to see that Linux distributors are finally seeing the light and providing the community with things we need in an Operating System. Hopefully this will lead to other advances in the wonderful world of DRM.

    It has been my understanding that trusted computing does not automatically equal DRM. Trusted computing is initially neutral technology: the barriers go up only after the chip gets to choose a side. You can let Microsoft turn your PC into a DRM environment using TCPA's technology, but that's the Microsoftish / {MP,RI,??}AA'ish approach. You can also use TCPA to turn your Linux box into a hardware-reinforced installation of your choice. If TCPA were widespread, you could, for example, control how the bastard big co. digitally uses, views and copies your personal information when you buy something on their website.

  • by Greger47 ( 516305 ) on Wednesday February 02, 2005 @03:27PM (#11553648)
    This is the thing that I don't get. The supposedly secure boot process seems to be broken from start to finish.
    The "trusted" boot functions provide the ability to store in Platform Configuration Registers (PCR), hashes of configuration information throughout the boot sequence. Once booted, data (such as symmetric keys for encrypted files) can be "sealed" under a PCR. The sealed data can only be unsealed if the PCR has the same value as at the time of sealing. Thus, if an attempt is made to boot an alternative system, or a virus has backdoored the operating system, the PCR value will not match, and the unseal will fail, thus protecting the data.
    The whitepaper also mentions that in IBM's implementation the chip is connected to the SMBus.

    This means that the entire security of the boot process hangs on whatever data the CPU feels like sending to the chip for hashing. I could just as well make a patch for GRUB that sends the "secure" version of GRUB down the SMBus and actually executes whatever nastiness I have in store.

    In the case of DRM this lets me run whatever OS I want. The only thing I have to do is feed a copy of whatever OS Hollywood trusts to the chip, and voila, the chip will say I'm legit and Hollywood will give me access to their movies for me to pirate at my leisure. :)
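
    A sketch of that attack under the same assumptions: because the TPM only hashes whatever the CPU sends down the bus, a patched loader can measure the pristine image while executing a different one. execute() is a stand-in for handing over control.

    ```python
    import hashlib

    def extend(pcr, data):
        # TPM-1.2-style extend: PCR = SHA1(PCR || SHA1(data))
        return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

    def execute(image):
        pass                                # stand-in: jump into the image

    PRISTINE_OS = b"the OS Hollywood trusts"
    EVIL_OS = b"the OS I actually want to run"

    def honest_loader(pcr):
        pcr = extend(pcr, PRISTINE_OS)      # measure what we boot
        execute(PRISTINE_OS)
        return pcr

    def patched_grub(pcr):
        pcr = extend(pcr, PRISTINE_OS)      # tell the TPM the "right" story...
        execute(EVIL_OS)                    # ...while booting something else
        return pcr

    zero = b"\x00" * 20
    assert honest_loader(zero) == patched_grub(zero)  # the chip can't tell
    ```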

    As I see it, the only way to get this to work for real is if Intel steps up and builds TCPA support into the CPU itself such that the PCR register is continuously updated as each instruction is executed. And all existing external chips would have to be blacklisted, of course.

    Or does the TCPA system have some other trick up their sleeve that makes this work even though it's implemented externally to the CPU?

    /greger

  • by Reziac ( 43301 ) * on Wednesday February 02, 2005 @03:40PM (#11553817) Homepage Journal
    Good points. Sort of like running VMware in reverse, eh?

    And it does make one wonder if a VM that's wise to the TCPA chip might be a solution to the "handcuffed" machine that Alsee (http://slashdot.org/~Alsee [slashdot.org]) often predicts as the end result of TCPA. If the CPU gets involved, perhaps the "freed" OS could run on a second non-TC CPU on an add-on card, sort of like the old way to run Windows on a Mac?

    Just throwing out ideas, some of them possibly cracked. Feel free to add glue as needed. :)

  • by Minna Kirai ( 624281 ) on Wednesday February 02, 2005 @04:43PM (#11554610)
    Oh, sure. Linux is perfectly secure, right? Keep on dreaming, it must be nice.

    Oh, sure. TCPA can protect against OS bugs? Keep on dreaming, must be nice.

    TCPA means that signed software can run with full permission. It only stops intentional exploits (programs specifically designed to infringe copyright), not accidental ones (buffer overflows or cross-site scripting).

    To block such things, there are many well-known techniques that can be applied -- privilege separation, data-tainting, external-error trapping, etc. But all of those can be implemented in software alone, without help from TCPA or any other hardware. Conversely, TCPA without those significant software changes gives zero benefit.
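
    For instance, privilege separation is a few lines of plain Unix, no TPM required. A minimal sketch (assumes the process starts as root on a Unix-like system; 65534 is the conventional nobody uid):

    ```python
    import os
    import socket

    def serve():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("0.0.0.0", 80))   # the only step that needs root
        s.listen(5)

        os.setgid(65534)          # drop to nobody/nogroup *before* touching
        os.setuid(65534)          # any untrusted network input

        conn, _ = s.accept()
        data = conn.recv(4096)    # a bug from here on runs unprivileged
        conn.close()
    ```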

    The only people TCPA might protect are those who put themselves at risk by running slapdash amateur software like Linux and OpenBSD, instead of staying with known quality brands like Microsoft, where security is job N!

    PS. Incidentally, the flaws in your argument are directly analogous to those in George W. Bush's Social Security plan. In both cases, to prevent a vague danger, he suggests doing two different activities, when really only one of them goes toward solving the difficulty at all -- the other just serves his ideological agenda (and is more elaborate and expensive, to boot).
  • NO SIGNED CODE (Score:3, Interesting)

    by SiliconEntity ( 448450 ) * on Wednesday February 02, 2005 @04:47PM (#11554651)
    I want to try to correct one of the most common and universal misconceptions about Trusted Computing: that it will only allow signed code to run. This is causing enormous confusion here, with people arguing about how that works with the GPL, who would get to sign the code, would users get to sign their own code, etc., etc.

    The truth is that the TCG spec says nothing about signed code. There are no limitations in TCG that keep you from running unsigned code. There is no distinction between "secure" and "insecure" code. You can run anything you like. Signing is a complete red herring in this discussion.

    I am not trying to gloss over problems or paint a false picture. The truth is that TCG does have features whose effects are somewhat like what people are worried about with signed code. The result is that TCG could be helpful for DRM, and it might make it impossible to download music from an online store without running a special application, for example. But this would not be because "you can only run signed code". Rather, it is the server that decides whether it wants to talk to you, not your computer deciding what you can and cannot run.

    What's the difference? Well, if your main concern is being able to run hacked clients that will allow you to violate your user agreements, then there is no difference. You would be right to oppose Trusted Computing. It will make it harder to pretend to honor an agreement and then break your word.

    But if your main concern is about the GPL and what software you run, there is a big difference. There are no limits on the software you can run. You can hack your Linux kernel to do whatever you want. You can disable "secure" features in the software you run. These privileges don't go away when there is a TPM chip. That should put to rest the concerns about the GPL and hopefully end the discussion about who signs what code.

    If you're wondering how these two points of view can be compatible, you need to learn more about the TCG spec and the TPM chip. In a nutshell, the TPM chip, with the cooperation of the BIOS and OS software, takes a hash or fingerprint of the software configuration as the computer boots. It can then report this fingerprint to remote servers, if client software requests it. These reports are signed with an on-chip TPM key that never leaves the chip; and this chip has a certificate from the computer manufacturer, so no emulator can fake these reports (called remote attestations).
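
    A sketch of that flow with software stand-ins: in real hardware the attestation key never leaves the TPM, and its public half carries a manufacturer certificate, so this only illustrates the message flow, not the security.

    ```python
    import hashlib
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    aik = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    aik_pub = aik.public_key()        # manufacturer-certified in real hardware

    boot_pcr = hashlib.sha1(b"known-good boot chain").digest()

    def tpm_quote(pcr, nonce):
        """What the chip reports: a signature over the PCR and a fresh nonce."""
        return aik.sign(pcr + nonce, padding.PKCS1v15(), hashes.SHA256())

    # Server side: challenge with a fresh nonce (prevents replay), then check
    # the signed report against the PCR value the server expects.
    nonce = os.urandom(20)
    report = tpm_quote(boot_pcr, nonce)
    aik_pub.verify(report, boot_pcr + nonce, padding.PKCS1v15(), hashes.SHA256())
    print("attestation verified")
    ```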

    That's how it works. It's a lot more complicated than refusing to run unsigned code. What it comes down to is that software can report its configuration in a believable and, yes, trustable way. That's the real reason this is called Trusted Computing, not the lie made up by Ross Anderson. It's Trusted because you can Trust the reports from a remote system about what software it is running, and therefore what it will do.
  • by Anonym0us Cow Herd ( 231084 ) on Wednesday February 02, 2005 @05:09PM (#11554871)
    And how exactly is this useful to the user? Why would I want to run an application that has its own private storage which can't be accessed by other applications or the OS?

    I might want only a limited set of applications accessing a certain storage area.
    • P2P application
    • XMMS
    Then in a different secured storage area, I only want a limited set of applications accessing....
    • Usenet downloader
    • Pr0n Viewer
    Since I can trust the software within each group, I know that no evil RIAA people will be accessing my sacred secured storage. (Of course, torture may be allowed in the US -- after all -- think of all the poor record executives.)

    Imagine a trusted P2P application that will only interconnect with the same trusted application. The trust works both ways. Just like the RIAA thinks they can "trust" their software running on my computer not to be of my own creation, or a tampered version of their software, I can "trust" that MY software running on the RIAA's computer is similarly my original code, not tampered with or substituted.
  • You can't do that. (Score:2, Interesting)

    by Kickasso ( 210195 ) on Wednesday February 02, 2005 @05:51PM (#11555333)
    Without hacking hardware, at any rate. The TPM verifies the BIOS before it starts booting and only enables itself if the BIOS is OK. It won't enable itself *after* the boot sequence, only before. You need to whip out your soldering iron to convince the TPM to do what you want, and even then it's not easy.

    Probably an easier way is to have a hacked memory module that lets you change the contents with some kind of hardware interface.

    If the memory and all buses in the computer are encrypted, then you're out of luck, but this is not currently in the spec.

  • Re:As sad as it is (Score:2, Interesting)

    by randallpowell ( 842587 ) on Wednesday February 02, 2005 @07:33PM (#11556554)
    We can't stop it, but we can fight to have it optional.

    It's like what Aragorn said to the soldiers outside the Black Gate: "I see in your eyes the same fear that would take the heart of me. A day may come when the courage of men fails, when we forsake our friends and break all bonds of fellowship, but it is not this day! An hour of wolves and shattered shields, when the age of men comes crashing down, but it is not this day! This day we fight! By all that you hold dear on this good earth, I bid you stand, Men of the West!"

"I don't believe in sweeping social change being manifested by one person, unless he has an atomic weapon." -- Howard Chaykin

Working...