
TCPA Support in Linux

Posted by CmdrTaco
from the trust-no-1-or-something dept.
kempokaraterulz writes "Linux Journal is reporting that 'the Trusted Computing Platform Alliance has published open specifications for a security chip and related software interfaces.' The latest Gentoo Newsletter talks about a possible 'Trusted Gentoo' and possible uses for hardware-level security."


  • by kaustik (574490) on Wednesday February 02, 2005 @01:41PM (#11552171) Homepage
    It really makes me happy to see that Linux distributors are finally seeing the light and providing the community with things we need in an Operating System. Hopefully this will lead to other advances in the wonderful world of DRM.
    sigh
    • As sad as it is (Score:5, Informative)

      by Anonymous Coward on Wednesday February 02, 2005 @02:05PM (#11552460)
      To have to burst your bubble of uninformed zealotry, there are plenty of good uses for trusted computing and DRM that do not interfere with your quest to get 'fr33 musicz 4 life' or whatever. Not all of this technology is for companies like the RIAA to protect copyrights, despite what Slashbots would have everyone think.
    • by Reziac (43301) * on Wednesday February 02, 2005 @02:35PM (#11552910) Homepage Journal
      From TF WhitePaper [PDF] [ibm.com] on IBM's site:

      The "trusted" boot functions provide the ability to store in Platform Configuration Registers (PCR), hashes of configuration information throughout the boot sequence. Once booted, data (such as symmetric keys for encrypted files) can be "sealed" under a PCR. The sealed data can only be unsealed if the PCR has the same value as at the time of sealing. Thus, if an attempt is made to boot an alternative system, or a virus has backdoored the operating system, the PCR value will not match, and the unseal will fail, thus protecting the data.

      At the very least, that sounds like "bye-bye multi-boot systems".

      IBM also has a rebuttal to TCPA's detractors [PDF] [ibm.com]. This one talks more about how the TCPA chip as currently designed has "not been designed to resist local hardware attack, such as power analysis, RF analysis, or timing analysis." That's all well and good for the moment, and the chip is (per the PDF) mounted on a presumably-removable daughterboard, but how about the future? Is this how TCPA will stay, or is it the beginning of our worst fears?

      At least these two whitepapers agree with most of us here on one thing -- DRM itself is stupid, for a variety of reasons.

      • Once booted, data (such as symmetric keys for encrypted files) can be "sealed" under a PCR. The sealed data can only be unsealed if the PCR has the same value as at the time of sealing. Thus, if an attempt is made to boot an alternative system, or a virus has backdoored the operating system, the PCR value will not match, and the unseal will fail, thus protecting the data.

        One wonders what forensics types, particularly government forensics types, have to say about this.

        And I mean "have to say about this"

      • by Greger47 (516305) on Wednesday February 02, 2005 @03:27PM (#11553648)
        This is the thing that I don't get. The supposedly secure boot process seems to be broken from start to finish.
        The "trusted" boot functions provide the ability to store in Platform Configuration Registers (PCR), hashes of configuration information throughout the boot sequence. Once booted, data (such as symmetric keys for encrypted files) can be "sealed" under a PCR. The sealed data can only be unsealed if the PCR has the same value as at the time of sealing. Thus, if an attempt is made to boot an alternative system, or a virus has backdoored the operating system, the PCR value will not match, and the unseal will fail, thus protecting the data.
        The whitepaper also mentions that in IBM's implementation the chip is connected to the SMBus.

        This means that the entire security of the boot process hangs on whatever data the CPU feels like sending to the chip for hashing. I could just as well make a patch for GRUB that sends the "secure" version of GRUB down the SMBus and actually executes whatever nastiness I have in store.

        In the case of DRM this lets me run whatever OS I want. The only thing I have to do is feed a copy of whatever OS Hollywood trusts to the chip and, voila, the chip will say I'm legit and Hollywood will give me access to their movies to pirate at my leisure. :)

        As I see it, the only way to get this to work for real is if Intel steps up and builds TCPA support into the CPU itself such that the PCR register is continuously updated as each instruction is executed. And all existing external chips have to be blacklisted, of course.

        Or does the TCPA system have some other trick up its sleeve that makes this work even though it's implemented externally to the CPU?

        /greger

        • by Reziac (43301) *
          Good points. Sort of like running VMWare in reverse, eh?

          And it does make one wonder if a VM that's wise to the TCPA chip might be a solution to the "handcuffed" machine that Alsee (http://slashdot.org/~Alsee [slashdot.org]) often predicts as the end result of TCPA. If the CPU gets involved, perhaps the "freed" OS could run on a second non-TC CPU on an add-on card, sort of like the old way to run Windows on a Mac?

          Just throwing out ideas, some of them possibly cracked. Feel free to add glue as needed. :)
        • by SiliconEntity (448450) * on Wednesday February 02, 2005 @04:55PM (#11554723)
          This means that the entire security of the boot process hangs on whatever data the CPU feels like sending to the chip for hashing. I could as well make a patch for GRUB that sends the "secure" version of GRUB down the SMbus and actually executes whatever nastiness I have in store.

          That's a clever idea, but it doesn't work. The secret is that the trusted boot process uses a concept of "trust extension". We start off with the BIOS. That takes a hash of itself and sends it to the TPM. Then the BIOS will load and run the boot loader. But - and here is the key - before running GRUB, the BIOS takes a hash of GRUB and sends it to the TPM. Then it runs GRUB.

          The next step is that GRUB - or at least the TPM-enabled version [prosec.rub.de] - performs a similar process for the OS kernel. It first takes a hash of the kernel and sends it to the TPM; then it runs the kernel. And the kernel can repeat the process with the various startup scripts and other programs that it loads, a la tcgLinux [ibm.com] or the Enforcer [sourceforge.net].

          The key point is that before each component is loaded, it is "measured" (i.e. its hash is reported to the TPM). So you can create a bogus GRUB or a bad kernel, but this fact will show up in the TPM's configuration registers because your bad component got its hash reported before it ran.

          The one exception is the BIOS, but TPM systems are supposed to have restricted BIOS flash capabilities so you can't re-flash the part of the BIOS which does the initial hash of itself. This is part of what they call the Core Root of Trust for Measurement (CRTM) and it is supposed to be inviolable.
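
For readers who want to see the shape of this, the extend-then-run chain can be modeled in a few lines of Python. This is a toy sketch of the hashing policy only, not real TPM 1.2 code, and the component names are invented:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(component)).
    A PCR can only be extended, never set directly, so the final
    value commits to every component measured along the way."""
    measurement = hashlib.sha1(component).digest()
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20  # PCRs start zeroed at power-on

# Each stage measures the next component *before* running it.
for component in (b"BIOS image", b"GRUB image", b"kernel image"):
    pcr = extend(pcr, component)

# A patched GRUB changes the chain, no matter what it reports later.
evil = b"\x00" * 20
for component in (b"BIOS image", b"patched GRUB", b"kernel image"):
    evil = extend(evil, component)

print(pcr != evil)  # True: the tampering is visible in the PCR
```

The parent's point falls out of the model: you cannot un-extend a PCR, so a bad component's hash is already locked into the register by the time that component gets to run.
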
        • by Alsee (515537)
          if Intel steps up and builds TCPA support into the CPU itself

          From the Inquirer: [theinquirer.net]
          Improved architecture for Prescott [CPU] includes better pre-fetcher branch prediction, advanced power management, improvements to hyperthreading technology, the PNI above, La Grande support, better imul latency and additional WC buffers. La Grande is the security feature Intel told us about at the last IDF, and includes protection in the CPU, at the platform level, and with software.

          And this story: [my-esm.com]
          Addressing growing se
    • by yason (249474) on Wednesday February 02, 2005 @03:21PM (#11553553) Journal
      It really makes me happy to see that Linux distributers are finally seeing the light and providing the community with things we need in an Operating System. Hopefully this will lead to other advances in the wonderful world of DRM.

      It has been my understanding that trusted computing does not automatically equal DRM. Trusted computing is initially neutral technology: the barriers are built up only after the chip gets to choose a side. You can let Microsoft turn your PC into a DRM environment using TCPA's technology, but that's the Microsoftish / {MP,RI,??}AA'ish approach. You can also use TCPA to turn your Linux box into a hardware-reinforced installation of your choice. If TCPA were widespread, you could for example control how the bastard big co. digitally uses, views and copies personal information when you buy something on their website.

      • You can also use TCPA to turn your Linux box into a hardware-reinforced installation of your choice.

        If you have the technical brainpower to use TCPA + Linux to build yourself a secure hardware platform, you could also more easily build an equally secure all-software Linux platform.

      The only advantage TCPA gets from using hardware is that it's a big barrier to entry for reverse engineers with physical access to the machine: they can't just load it up into an emulator/debugger, they also have to dissect the
  • by PornMaster (749461) on Wednesday February 02, 2005 @01:42PM (#11552174) Homepage
    From a programmer's perspective, the IBM version of the TPM (or TCPA chip) looks like Figure 1. Garrick, please crop the caption out of the figure itself.

    Garrick? Garrick? McFly? McFlyyyyyyyyyy?
  • by CineK (55517) on Wednesday February 02, 2005 @01:44PM (#11552203) Journal
    I mean - there are a lot of hardware security modules that can be used for building trusted systems right now.
    Isn't the only purpose of pushing things like TCPA locking the platform down?
    • by danheskett (178529) <{danheskett} {at} {gmail.com}> on Wednesday February 02, 2005 @02:10PM (#11552512)
      A locked down platform is very useful for some things.

      One thing TCPA provides that many alternatives do not is a system of sealed storage. In this scheme, an application run under the TCPA feature set can access storage that is guaranteed by hardware to be accessible only by that one application, and no others. This storage is protected by hardware encryption, and cannot be accessed directly, even by the OS. If the application itself or any component is tampered with, the sealed storage is inaccessible, since the Nexus, or hardware security manager, recognizes the binary itself as the key to the sealed storage. If that binary is modified, it can no longer access the sealed storage.

      Sealed storage like this is useful in a lot of ways. Combined with strongly encrypted Internet communications, a highly secure messaging system could be devised where the encryption was physically end-to-end. Since TCPA provides encryption from the keyboard, to the memory, to the Nexus, to the CPU and every point in between, the plain text is only exposed when it is physically being typed - it never exists in unencrypted digital form.
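
A rough sketch of the seal/unseal policy being described, in Python. This is purely illustrative: a real TPM derives the wrapping key on-chip from hardware secrets rather than from the PCR value alone, and XOR here just stands in for real encryption:

```python
import hashlib

def _pad(pcr: bytes, n: int) -> bytes:
    # Toy key stream derived from the platform configuration.
    return hashlib.sha256(b"seal-key" + pcr).digest()[:n]

def seal(secret: bytes, pcr: bytes):
    """Bind a secret to the current platform configuration (PCR)."""
    blob = bytes(a ^ b for a, b in zip(secret, _pad(pcr, len(secret))))
    return blob, hashlib.sha256(pcr).hexdigest()  # blob + config fingerprint

def unseal(blob: bytes, pcr: bytes, fingerprint: str):
    """Release the secret only if the current PCR matches sealing time."""
    if hashlib.sha256(pcr).hexdigest() != fingerprint:
        return None  # different OS / tampered binary: stay sealed
    return bytes(a ^ b for a, b in zip(blob, _pad(pcr, len(blob))))

trusted = hashlib.sha1(b"untampered application stack").digest()
blob, fp = seal(b"message key", trusted)
print(unseal(blob, trusted, fp))                                   # b'message key'
print(unseal(blob, hashlib.sha1(b"modified stack").digest(), fp))  # None
```

The gating logic, not the cipher, is the point: any change to the measured configuration produces a different PCR, and the unseal simply refuses.
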
      • And how exactly is this useful to the user? Why would I want to run an application that has its own private storage which can't be accessed by other applications or the OS? The only argument I can see here is that you don't have to worry about some spyware sending your personal data to someone. But this only makes sense if you're running an insecure OS known for frequent vulnerabilities to spyware and viruses. For the rest of us, this is a non-issue. Why do we need special hardware to make up for the s
        • The only argument I can see here is that you don't have to worry about some spyware sending your personal data to someone. But this only makes sense if you're running an insecure OS known for frequent vulnerabilities to spyware and viruses.

          Oh, sure. Linux is perfectly secure, right? Keep on dreaming, it must be nice.

          Actually, Linux security bugs [slashdot.org] are found all the time. Dan Bernstein had a bunch of undergraduates look and found 44 Linux security bugs [slashdot.org] in just a few weeks. The only reason Linux users are sa
          • by Minna Kirai (624281) on Wednesday February 02, 2005 @04:43PM (#11554610)
            Oh, sure. Linux is perfectly secure, right? Keep on dreaming, it must be nice.

            Oh, sure. TCPA can protect against OS bugs? Keep on dreaming, must be nice.

            TCPA means that signed software can run with full permission. It only stops intentional exploits (programs specifically designed to infringe copyright), not accidental ones (buffer overflows or cross-site scripting).

            To block such things, there are many well-known techniques that can be applied - privilege separation, data-tainting, external-error trapping, etc. But all of those can be implemented in software alone, without help from TCPA or any other hardware. Conversely, TCPA without those significant software changes gives zero benefit.

            The only people TCPA might protect is those who put themselves at risk by running slapdash amateur software like Linux and OpenBSD, instead of staying with known quality brands like Microsoft, where security is job N!

            PS. Incidentally, the flaws in your argument are directly analogous to those in George W. Bush's social security plan. In both cases, to prevent a vague danger, he suggests doing two different activities, when really only one of them goes toward solving the difficulty at all - the other just serves his ideological agenda (and is more elaborate and expensive, to boot).
        • by Anonym0us Cow Herd (231084) on Wednesday February 02, 2005 @05:09PM (#11554871)
          And how exactly is this useful to the user? Why would I want to run an application that has its own private storage which can't be accessed by other applications or the OS?

          I might want only a limited set of applications accessing a certain storage area.
          • P2P application
          • XMMS
          Then in a different secured storage area, I only want a limited set of applications accessing....
          • Usenet downloader
          • Pr0n Viewer
          Since I can trust the software within each group, I know that no evil RIAA people will be accessing my sacred secured storage. (Of course, torture may be allowed in the US -- after all -- think of all the poor record executives.)

          Imagine a trusted P2P application that will only interconnect with the same trusted application. The trust works both ways. Just like the RIAA thinks they can "trust" their software running on my computer not to be of my own creation, or a tampered version of their software, I can "trust" that MY software running on the RIAA's computer is similarly my original code, not tampered with or substituted.
          • I might want only a limited set of applications accessing a certain storage area.

            You can accomplish all those things in a 100% software implementation of privilege separation. No special TCPA hardware is needed.

            However, if you did have the special hardware, you would still need modified TCPA-aware applications and OS to make it work.

            So let's consider the two paths towards reaching your goal:
            A) A modified OS that restricts which of your applications are allowed to access which parts of your file system
          • I can "trust" that MY software running on the RIAA's computer is similarly my original code

            No you can't. The RIAA has the money and contracts to give orders to the people holding the keys with which the software was signed. You don't have that level of influence yourself.
    • No, of course we, the users, don't need DRM. Its entire purpose is to take control of a computer away from the user and into the hands of some other entity, who can allow or disallow any given function remotely. In such a system, DRM's role is to ensure that the user cannot regain control.

      This, of course, is enormously useful for entertainment and software industries. Especially the latter can force any license terms it pleases after DRM becomes required by law (it will - the two industries combined hav

    • by bechthros (714240) on Wednesday February 02, 2005 @03:17PM (#11553487) Homepage Journal
      Seems to me it's a lose-lose situation. On the one hand, until it's hacked, you have users not being able to have their machine do what they want it to do. That's obviously bad thing number one.

      But number two comes a couple years down the road from widespread adoption, when some critical flaw in TCPA is found by hackers, TCPA is hacked, and innocent businesses that have come to depend on it for security are disrupted and exploited. And then we're looking around all doe-eyed, like, "but they said it was unbreakable security, they said it was trusted computing!" TCPA is just another level of command hierarchy, and subject to hack.

      "Trusted computing" has got to be one of the most insidious marketing doublespeaks I've ever heard in my life. All "Trusted Computing" consists of is computers who don't trust me.
  • by Anonymous Coward
    The only benefits I can see are increased security for encrypted communication or hard drive encryption. I am really trying to think hard of any other beneficial applications but can't come up with anything.
    • by vadim_t (324782) on Wednesday February 02, 2005 @02:08PM (#11552496) Homepage
      Well, it could be useful for a seriously locked down server.

      Imagine that you're an admin at some big company, with a hundred Linux boxes. You have this stuff on every one of those boxes, and a computer for administration somewhere safe. When you install software you first check it, then sign it, then push updates to your servers.

      If somebody gets in, they'll find things quite difficult. Anything unsigned simply won't run at all. Rootkit modules, exploits, etc., will simply not be able to run. This would take out quite a big part of the exploits an attacker could use. Remote ones would hopefully be avoided by NX.

      This wouldn't protect against things like races, but it certainly could help quite a lot.

      The situation above is something I wouldn't have any problems with. If an admin wants to have an uber-locked-down system where anything not signed by his key (a key present only on a computer with no network connection, in a secure room with an armored door) doesn't run at all, then sure, why not. I'm fairly sure this can mostly be accomplished without hardware support at all, though.

      Now, it's when software publishers want to make it impossible for me to control my computer that I have problems with it. But if the user has full control of it, I think it could come in quite handy in some cases.
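
The locked-down-server scenario boils down to "verify a signature before exec". A minimal illustration in Python, where an HMAC stands in for the asymmetric signatures a real deployment would use (verified in the kernel, not in userspace), and all names are invented:

```python
import hashlib
import hmac

# In the scenario above, this key would live only on the offline admin box.
SIGNING_KEY = b"offline-admin-box-key"

def sign(binary: bytes) -> str:
    """Done on the (air-gapped) administration machine."""
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).hexdigest()

def may_execute(binary: bytes, signature: str) -> bool:
    """Done on each server before anything is allowed to run."""
    good = hmac.new(SIGNING_KEY, binary, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, signature)

approved = b"\x7fELF...sshd-binary"
sig = sign(approved)                    # admin checks, then signs
print(may_execute(approved, sig))       # True: signed binary runs
print(may_execute(b"rootkit.ko", sig))  # False: unsigned code is refused
```

With a shared key like this the servers could forge signatures themselves, which is exactly why a real system would use public-key verification; the TPM's contribution is keeping the verification path itself out of the attacker's reach.
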

      • It cannot be accomplished without hardware support.

        If it's implemented in software, then somebody will just hack that software.

        The idea is that every bit executed by the CPU must be signed by a third party, enforced by hardware, with NO WAY TO BE MODIFIED.
        • Why?

          I don't think there's any fundamental problem that makes it impossible to make a VM or emulator that's impossible to break through. If the attacker entered by ssh guessing a password or whatever, and assuming the kernel doesn't have any defects that make bypassing the protection possible, it should work just as well.

          The scenario I presented in the grandparent should work just fine with software, IMHO. Now, the chip is certainly useful if you want to take away control from the owner of the machine.
          • From the way that I understand it, a VM would not help. The chip would have its own crypto key embedded in it (or maybe more than one).

            Let me put it this way... All good encryption algorithms are open -- you can examine the spec and the implementation as much as you want. But you still need the key in order to get the scrambled data out.

            This means that you can examine this chip as much as you want. As long as it does not give up its keys, it should still be secure. Then your computer can be trusted t
      • by Qzukk (229616) on Wednesday February 02, 2005 @02:30PM (#11552820) Journal
        When you install software you first check it, then sign it, then push updates to your servers.

        In the end, it depends on who gets to sign the software, and how this software is distributed once signed. In our corner of the court, we have the admin signing software for 100 boxes (does he have to sign each separately? Can you sign software for every box out there at once? If it's not a specific-to-that-machine signature, how do you keep the attacker from signing software too?) for the purpose of protecting the servers from software you don't want to run.

        In the other corner of the court, it appears that we have big business interests who want to have all software signed, who would charge hundreds to sign software for other authors (VeriSign et al. will certainly be in the business). The MPAA and RIAA will want to make sure signed software obeys their rules (and will probably charge for this too), all to make sure your computers are protected from software they don't want you to run.

        Things like this IBM article help make the first scenario a reality, and I'm grateful for it. Now, who wants to be the first to be sued by Microsoft for some TCPA submarine patent that nobody knows about?
    • Encrypted communication and hard drive encryption are already possible. Including "trusted computing" in Linux allows Linux users to run closed-source binaries (either applications or libraries) that interact with encrypted files obtained from third parties, where such binaries use this "trusted" nonsense to restrict the use of these files in some way. This allows Linux users to trade their freedom in return for continued access to digital media without having to stop using Linux at the same time. While we
    • The only benefits I can see is increased security for encrypted communication or hard drive encryption. I am really trying to think hard of any other beneficial applications but can't come up with anything

      Games.

      There are many things, for instance, in MMORPGs and FPSs that have to be done now on the server that could be done better (in terms of performance and in terms of providing a better game experience) on the client, but can't be done there because it would allow cheating.

  • by DigiShaman (671371) on Wednesday February 02, 2005 @01:46PM (#11552223) Homepage
    to hang myself.

    Instruction: How to restrict your Linux box from yourself.
  • by Xpilot (117961) on Wednesday February 02, 2005 @01:46PM (#11552229) Homepage
    Linus himself said DRM is ok [theregister.co.uk], as long as it's used in the interests of the user. This is a good thing, think about it: EvilCorp(tm) wants to use DRM to cripple computers, but the PR guy will say "it's for the user". Of course their intent is nothing of the sort, but the Linux folks are the only ones who will actually implement something that *is* in the interest of the user. Then EvilCorp won't be able to lobby to make Linux illegal, since Linux also uses DRM which does what EvilCorp claims it's doing "for the users". Well, hopefully.

    • You touched on something there that I want to bring out further.

      Linux can show what user-centric trusted computing can/should do. Microsoft et al. will be showing what Big Business trusted computing wants/can do.

      Eventually there will be those that will ask why it has to work against them so much when running Billy Bob's OS, and then they'll realize that their PC is not their PC, but the industry's PC.
    • Then EvilCorp won't be able to lobby to make Linux illegal

      Sillier things have happened.
    • Linus himself said DRM is ok, as long as it's used in the interests of the user.

      Linus is not a lawyer. More importantly, he's not even a free software or open source evangelist. Unlike RMS or ESR, he doesn't even hang out with lawyers or devote serious thought to legal matters.

      Since DRM is a combined legal-technical area, it falls outside Linus's expertise, and his opinion carries little weight. (From a practical standpoint, TCPA is incompatible with the Linux philosophy of open-source modifications)
      • Linus is not a lawyer. ... Unlike RMS or ESR, he doesn't even hang out with lawyers or devote serious thought to legal matters.

        I knew there was a reason why I liked the guy!

      • not entirely so (Score:4, Insightful)

        by hany (3601) on Wednesday February 02, 2005 @02:45PM (#11553045) Homepage

        From a practical standpoint, TCPA is incompatible with the Linux philosophy of open-source modifications

        IMO this is not exactly correct - is it against the Linux philosophy of open-source modifications to secure my Linux box so that nobody except me can make modifications to it?

        TCPA used in such a way (i.e. in the interest of the user, not the supplier, not the government, ...) is quite in line with the Linux philosophy of "you're in control" :) .

        But, as with all weapons, it has two edges. So, beware! :)

      • Being even less well qualified than Linus Torvalds to contribute to this discussion, I nevertheless venture to suggest that the "Linux philosophy" entails, almost by definition, the endorsement of a particular configuration of source files by Torvalds and others.

        As such, open source is not at odds with the ostensible goals of TCPA, any more than it is with his ownership of the Linux trademark.
  • Better yet, lead 'em. It would be ridiculously funny if Trusted $FREENIX were released before Trusted Windows or Trusted MacOS.
  • Lacking One Thing (Score:5, Interesting)

    by SpottedKuh (855161) on Wednesday February 02, 2005 @01:51PM (#11552286)
    Though the specifications detailed in the article are definitely a Good Thing, they lack (at least as far as I could tell) any way of preventing unauthorized physical access to the chip.

    Physical access to machines is always a big issue in security, and one that is often overlooked. And while it's probably not a big deal for your home machine, consider large companies whose machines could conceivably be targeted for a physical attack to recover the keys directly from the TPM (Trusted Platform Module).

    Stajano's "Ubiquitous Computing" book has excellent coverage of the rationale, issues, and complexity of attempting to prevent physical access to chips and devices which store sensitive information. It's an easy read, and well worth it: http://www-lce.eng.cam.ac.uk/~fms27/secubicomp/index.html [cam.ac.uk]
    • The TPM is supposedly very tamper-resistant, not just a piece of solid-state memory holding the keys. This should make physical attacks very expensive and labor-intensive.
    • Though the specifications detailed in the article are definately a Good Thing, they lack (at least as far as I could tell) any way of preventing unauthorized physical access to the chip.

      Slashdot ran a story a few weeks ago about a new set of chips with built-in TPM [slashdot.org] features. These chips have the Trusted Computing capabilities built into the CPU. It will make it much more difficult to attack them physically since the TPM is not a separate module but is integrated into the whole system. Probably this is how
    • Physically securing very valuable things is easy.

      Put the computer in a secure vault, and put guys with guns at the door and perimeter.
  • Hardware Security (Score:3, Interesting)

    by quadra23 (786171) on Wednesday February 02, 2005 @01:52PM (#11552296) Journal
    This is indeed good news! Security that is solely based on software is far easier to compromise than hardware-based security (provided that the hardware can't be tampered with by malicious software). Far better to have the security co-ordinated between both. I'd be interested to see how widely accepted this open specification will be.
  • by GbrDead (702506) on Wednesday February 02, 2005 @01:54PM (#11552322) Homepage
    Treacherous Gentoo?
  • by Hobbex (41473) on Wednesday February 02, 2005 @01:55PM (#11552335)
    It has been said a million times, yet apparently it bears repeating. The "security" aspects of TCPA are redundant, unnecessary, and at best useful, but could be made a lot better if the chip were designed for security rather than DRM. The whole system really exists only for one purpose: as a trojan horse to implement something called "remote attestation" in PCs.

    What is remote attestation? Basically, it means that the TCPA chip, which you cannot control, can read what operating system you have loaded, and send a response proving to others on the Internet that you are running a certain operating system. The purpose of this, of course, is so that the operating system can be verified not to have its DRM functions cracked, so that the RIAA and MPAA can send you data and make sure that they get to decide what you do with it.
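
The attestation flow being described can be caricatured like this (toy Python; a real TPM signs the quote with an RSA attestation identity key whose private half never leaves the chip, and the verifier checks a certificate chain - the shared-key HMAC here just keeps the sketch short):

```python
import hashlib
import hmac

# Burned in at manufacture; the owner of the machine cannot read it.
CHIP_SECRET = b"per-chip attestation secret"

def tpm_quote(pcr: bytes, nonce: bytes) -> str:
    """The chip signs (PCR, challenge nonce): proof of what software booted."""
    return hmac.new(CHIP_SECRET, pcr + nonce, hashlib.sha256).hexdigest()

# The content provider only serves clients whose quote matches the one
# OS configuration it has blessed:
blessed_pcr = hashlib.sha1(b"unmodified DRM-enforcing OS").digest()
nonce = b"challenge-42"  # fresh per request, to prevent replay
expected = tpm_quote(blessed_pcr, nonce)

my_pcr = hashlib.sha1(b"OS I patched to keep control").digest()
print(tpm_quote(my_pcr, nonce) == expected)  # False: no movies for you
```

Because the signing key is in silicon and the PCR records what actually booted, the quote cannot be faked from software - which is exactly the property the parent is objecting to.
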

    The people pushing TCPA will claim that it is not for DRM, but that is a smokescreen and only a smokescreen. While TCPA does not do DRM itself, it is the enabling component that is needed so that software can implement DRM without being circumventable.

    What does this mean for a "trusted Linux"? It means that while it is completely possible to have a Linux system working with TCPA, once you change anything in the system, the TCPA chip will notice you are running a modified system, and no longer let you at your data. So while the software may nominally remain under the GPL, it will be the death of the free software model, because users who wish to tinker with their systems will be locked off the Internet (Cisco is already talking about systems to have ISPs demand remote attestation when TCPA is in place). TCPA and Linux can be combined in theory, but only in theory - in reality they cannot ever coexist.

    Those who do not believe me (or those who are inclined to believe the MS shills who will respond saying that I am wrong) should read the EFF's analysis of TCPA [eff.org], where they give a simple way that the chip could be changed to allow all uses except remote attestation intended to force people to use certain operating systems and enforce DRM over the user. It has been completely ignored by the manufacturers of TCPA.
    • I can already think of a workaround.

      Use an x86 emulator and two copies of Linux, one that uses TCPA and one that doesn't. Run the x86 emulator on the unrestricted Linux copy, and use it to run the TCPA copy under emulation. The x86 emulator would just have some security 'flaws' when it comes to storing keys or it might do stuff like forgetting to apply the encryption. It would still report as a valid DRM chip, and would be able to provide keys and authentication on demand.
      • I can already think of a workaround. Use an x86 emulator and two copies of Linux...

        Emulation won't work to defeat DRM. The TPM generates an on-chip secret key at manufacture time which never leaves the chip. The manufacturer issues a certificate on the corresponding public key which certifies that this is a legit TPM key. The TPM then is able to issue attestations about the software configuration based on what is called a "secure boot" sequence, in which the TPM creates a fingerprint of the software confi
    • ....

      The purpose of this, of course, is so that the operating system can be verified not to have it's DRM functions cracked, so that the RIAA and MPAA can send you data and make sure that they get to decide what you do with it. ....

      (Cisco is already talking about systems to have ISPs demand remote attestation when TCPA is in place). TCPA and Linux can be combined in theory, but only in theory - in reality they cannot ever coexist.


      Let's be clear about what it is that you are saying here, even Linux OS's th
    • Those who do not believe me (or those who are inclined to believe the MS shills who will respond saying that I am wrong), should read EFF's analysis of TCPA where they give a simple way that the chip could be changed to allow all uses except remote attestation intended to force people to use certain operating systems and enforce DRM over the user.

      And see this rebuttal to the EFF report [invisiblog.com].

      Further see this blog entry by the same author on good uses of Trusted Computing [invisiblog.com] all of which rely on the supposedly evil
      • The truth is that TC along with Remote Attestation is a new feature set for your computer which allows new ways for people to cooperate online. Some people oppose this because they don't believe that others should be allowed to cooperate in ways they don't approve of. They don't want you to be able to credibly commit to obeying certain rules in processing data. But they have no right to interfere in your private decision making processes.

        No... that's not it. I don't "oppose people having choice" or some cra

    • You should read the TCPA FAQ [againsttcpa.com] if you have not already. It explains why this is a bad thing.
  • by Minna Kirai (624281) on Wednesday February 02, 2005 @02:00PM (#11552393)
    It's very simple:
    1. Linux is distributed under the GPL (and other licenses).
    2. To comply with the GPL, end-users must be able to acquire the source code (which means everything they need to reproduce the binary executable, with or without modifications).
    3. If you don't comply with the GPL, you are committing copyright infringement, a federal offense.

    But from the other direction:
    4. Trusted computing means that all binaries are signed with a secret key.
    5. The Trusted CPU will not execute binaries that weren't signed with that key.
    6. In this way, it is impossible for end-users to create modified binaries to add/remove features from the software.

    The GPL is too much in conflict with Trusted Computing to ever allow them to work correctly together. To obey the GPL, end-users must have access to everything needed to rebuild working binaries- which includes the secret key. But for Trusted Computing to work, it must be impossible for end-users to get the key- otherwise there's no point.

    So, Linux or Trusted Computing. Choose one, because you can't have both.
    • Easy: have Linux tell the chip what key to use (user-set), and then when you compile something, you sign it with *your* key and it runs. Then if Joe Blow hacks in and compiles a rootkit, it won't run, because it's not signed by you. Why should MS be the only one who can sign things?
    • Trusted software means the binary has been signed and the running binary has not been tampered with.

      This does not mean you cannot have the source or you cannot modify it. Only, the modified version will not be trusted. That is what trusted code is all about.

      So there is no conflict between GPL and trusted code. There is a conflict between modified code and trust, but that is the purpose of the entire concept.

      Of course you can generate your own secret key, publish the public key, sign your binaries, and
      • Then, anyone who trusts you, can trust your binaries just as they trust binaries signed by Microsoft.

        The "anyone" who matters in this case is hardware vendors like Intel, AMD, Apple, NVidia, and Hauppage. Obviously, Fortune 500 corps aren't going to trust a hobbyist like me.

        You paint a false picture by implying that it's end-users who need to trust me... that I can put my modified software up on the web, convince them to trust me (by exchanging money or contracts or whatever means), and then they will
    • What if Gentoo distributes the key as part of their distro?
    • Frankly, if the GPL doesn't allow me to sign a binary, then the GPL is broken. However, I rather doubt that's the case -- much open-source software is already signed by the creator / distributor, so you know that the binary you got was actually made by him. Because hardware DRM relies on exactly the same idea -- guaranteeing that you got your binaries from a particular source -- from a practical standpoint, there's no difference between the two scenarios.
      • Frankly, if the GPL doesn't allow me to sign a binary, then the GPL is broken

        Conversely, if the GPL allows me to sign binaries on a system where unsigned binaries don't run, then the GPL is broken, because it's got a loophole allowing GPL'd works to be effectively seized by corporate programmers with no compensation.

        much open-source software is already signed by the creator / distributor, so you know that the binary you got was actually made by him.

        Sure, you can sign binaries. But if you give that
        • Sure, you can sign binaries. But if you give that binary to someone, and he later demands the source code from you, you'd better include the private key along with it.

          No. My private key is not required to create a trusted executable. Presumably the owner of the machine, if he chooses to compile the program from source, will be able to sign the resulting binary with his own private key. After all, surely the owner of the machine will be able to trust the output of his own (trusted) compiler. If he choose

          • Presumably the owner of the machine, if he chooses to compile the program from source, will be able to sign the resulting binary with his own private key.

            Completely wrong. The owners of machines don't get the keys needed to sign things for their own hardware. Only the builders of the hardware have those keys, and they are contractually obligated by agreements to the MPAA and RIAA not to divulge those keys to anyone (except employees in the course of their work).

            If the owners of the hardware were going
            • by finkployd (12902) on Wednesday February 02, 2005 @05:19PM (#11554946) Homepage
              Completely wrong. The owners of machines don't get the keys needed to sign things for their own hardware. Only the builders of the hardware have those keys, and they are contractually obligated by agreements to the MPAA and RIAA not to divulge those keys to anyone (except employees in the course of their work).

              Wow, you just don't have a single clue about any of this do you? You can pop whatever keys you want into the hardware. If you want to create a system where only binaries signed by you can run, go for it. If you only want to run binaries signed by debian, redhat, or joe blow down the street, you can do that too. You can also turn off this checking and allow anything to run.

              The scary part of this is the remote attestation piece. THIS is what the riaa and mpaa want. It basically allows streaming media servers and media files to only be opened by programs signed (and verified by the hardware) by those they trust, like microsoft. A scary vision of this is that windows file sharing could disallow samba clients from connecting even if they opened up the protocol, because samba was not signed by Microsoft.

              If the owners of the hardware were going to be the ones having the keys needed to run on that hardware, then I wouldn't have any problem with it.

              You are not going to get Microsoft's signing key and be able to sign your binaries as them, but you will certainly get their public key to verify their binaries and put that in your hardware. You can also generate your own key to sign with and put that public key in the hardware too.

              Do you honestly believe that anyone, anywhere would ever go for a system where all software running on Windows has to be signed by microsoft? They couldn't even do that with signing device drivers and such (although they tried, all it does is warn you). You think microsoft is going to stop selling visual studio and all their programming tools because nobody but them can create and sign binaries? Now take this a step further: do you think overseas PC makers are going to sell PCs that can only run windows? Even US companies would never do that.

              Good God man, actually take some time and learn about this stuff before you spout uninformed drivel everywhere. There are some real legit complaints about TCPA, but you seem to not understand the most basic aspects of it.

              Finkployd
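              The owner-enrolled key model described in the comment above can be sketched in Python. This is only a toy: real trusted computing uses asymmetric signatures checked against public keys the owner installs in hardware, whereas here HMAC stands in for signing, and the signer names ("debian", "me") are made up for illustration:

```python
import hashlib
import hmac
import secrets

class Signer:
    """Stand-in for a code-signing identity. A real system would use
    asymmetric signatures (e.g. RSA) verified against a public key
    enrolled in hardware; HMAC is a symmetric stand-in."""
    def __init__(self, name: str):
        self.name = name
        self._key = secrets.token_bytes(32)  # private signing key

    def sign(self, binary: bytes) -> bytes:
        return hmac.new(self._key, binary, hashlib.sha256).digest()

    def verify(self, binary: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(sig, self.sign(binary))

# The machine owner enrolls whichever signers they choose to trust
debian = Signer("debian")
me = Signer("me")
trusted = [debian, me]

def may_run(binary: bytes, sig: bytes) -> bool:
    # Run only binaries carrying a valid signature from an enrolled signer
    return any(s.verify(binary, sig) for s in trusted)

kernel = b"\x7fELF..."  # hypothetical binary contents
```

The point the sketch illustrates is the parent's: trust flows from whatever keys the owner enrolls, so a self-signed binary runs just as well as a vendor-signed one.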

    • From your post, I believe you don't understand what trusted computing is, or what the TCG specifications imply. Trusted Computing is based in the assumption that there is a Core Root of Trust. This CRT is trusted, and should be verifiable (not the current state, but maybe in the future we will have an open source BIOS). This CRT will measure the next entity (bootloader, whatever) and will hash the result into a repository (the Trusted Platform Module). Then the bootloader will do the same with the OS, and so
      • don't see any trouble with this and linux.

        The trouble is not between TCG and Linux, but TCG and GPL (and Linux is one of many GPL programs).

        Later, a program wil want to attest the software you are running,

        The whole idea of that "attesting to the software you are running" is restricting who can modify the software. In particular, TCG wants to forbid amateurs and especially end-users from modifying the software. That is completely against the open-development spirit of Linux and all GPL projects.
    • The Trusted CPU will not execute binaries that weren't signed with that key.

      wrong. They'll still execute, they just won't be trusted.

      Trust itself is a feature at the OS level! Do you think the BIOS knows whether some data you read off the disk is an application? Does the CPU know the difference between the current application and the one that ran just before it, merely because of a context switch (which happens all the time just during timesharing between all the different applications you already have running)?
      • They'll still execute, they just won't be trusted.

        Fine. They'll "execute", but since they're not trusted, not all of the features will work. Meaning that not all of the binary is "executing"... so I guess I was right after all.

        but I can pretty much assure you that Linux wouldn't bother with this and if it did that someone would fork it

        How would they fork it? You can't fork if it won't run, and it won't run if you don't have the key. The only way to fork would be if you could use the GPL to coerce
    • by SiliconEntity (448450) * on Wednesday February 02, 2005 @03:07PM (#11553325)
      4. Trusted computing means that all binaries are signed with a secret key.
      5. The Trusted CPU will not execute binaries that weren't signed with that key.
      6. In this way, it is impossible for end-users to create modified binaries to add/remove features from the software.


      This is total garbage. Where did you get this nonsense?

      TC does not require binaries to be signed with a key. TC will not refuse to execute unsigned binaries. And end users can do whatever they want.

      Now for the facts, in case you're interested. TC implements a secure boot. This allows the TPM chip to store a hash or fingerprint of your software configuration: the BIOS, the boot loader, the OS, and if desired, the applications that are running.

      The TPM and OS can basically do two things with this information. They can implement "sealed storage" which means that a program can lock its encrypted data to the current software configuration. This means that if you boot a different OS, or if the program gets modified (either of which might happen due to virus infection), the fingerprint changes and the data will no longer be available. Likewise if another program tries to access the first program's sealed data, it won't be able to get access to it.

      The second feature of the TPM is "remote attestation". This allows a program to request the TPM to issue a cryptographically signed statement about what the current software fingerprint is that is running. This signed attestation cannot be forged because the TPM generated an on-chip key at manufacture time, and the manufacturer issued a certificate on that key which the chip can use to prove to anyone that it is a legitimate TPM.

      Remote attestation allows network applications to determine what software configuration the peers are running, and, if they choose, to disallow participation by software which is not running a specified set of configurations. This is the closest you will come to the idea that users can't change their own software. If they want to run a program which relies on this feature, and that program doesn't accommodate the changes the user wants to make, they would be shut out. But in practice, open source programs will probably be flexible in this regard as they will want to have as many people as possible participate. There are a number of technical measures which can be adopted to allow for considerable user flexibility.

      But certainly none of this would violate the GPL or any other legal prohibitions. Everything is entirely voluntary, and Trusted Computing does not prevent you from doing what you want with your computer. You don't even have to turn it on if you don't want to!
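      The "sealed storage" behavior described above can be modeled in a few lines of Python. This is a toy sketch, not the real TPM_Seal command: the XOR "cipher", the HMAC key derivation, and the srk variable are stand-ins for operations a real TPM performs on-chip with proper encryption:

```python
import hashlib
import hmac
import os

def _wrap_key(srk: bytes, pcr: bytes) -> bytes:
    # Derive a wrapping key that depends on both the TPM's storage key
    # and the current PCR value; a real TPM does this internally.
    return hmac.new(srk, pcr, hashlib.sha256).digest()

def seal(secret: bytes, pcr: bytes, srk: bytes) -> bytes:
    key = _wrap_key(srk, pcr)
    return bytes(a ^ b for a, b in zip(secret, key))  # toy cipher only

def unseal(blob: bytes, pcr: bytes, srk: bytes) -> bytes:
    # Yields the original secret only if the PCR matches seal time
    key = _wrap_key(srk, pcr)
    return bytes(a ^ b for a, b in zip(blob, key))

srk = os.urandom(32)  # stands in for the on-chip storage root key
good_pcr = hashlib.sha1(b"trusted config").digest()
bad_pcr = hashlib.sha1(b"modified config").digest()

blob = seal(b"disk-enc-key", good_pcr, srk)
```

Booting a modified OS changes the PCR, so the derived wrapping key changes and the unseal produces garbage rather than the protected secret; that is the whole mechanism.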
    • As usual with slashdot, you hold strong opinions regarding tcpa with absolutely no idea what it is.

      Tcpa lets you tell your machine to only run binaries signed by Microsoft. You can also tell it to only run binaries signed by IBM. Or you can tell it to only run binaries signed by debian. Or yourself. Or any combination. You tell it what you want it to do in this regard.

      The only valid argument against it is the remote attestation issue, which (using digital signatures) can attest the identity of a client ove
  • If, for example, this provided a way to make sure that a computer on the internet is really who it claims to be, that would be good.

    But trusted windows, at least, is going to be about remote deletion/disabling of data.
  • Software DRM (Score:3, Interesting)

    by Yartrebo (690383) on Wednesday February 02, 2005 @02:03PM (#11552439)
    Since the source is available for Linux, what would stop someone from sandboxing 'trusted' software by having the OS validate code before it's executed (slow, though a bit faster than emulation and without all the bugs), and then implementing the DRM hardware (or BIOS) instructions in software in a way that stores the keys (or the plaintext information, if that is not doable) and allows any software to access the info?

    The software DRM implementation would be 100% transparent to the application and no one would be the wiser.

    It should also be workable with an x86 emulator running a closed source 'trusted' application along with its closed source OS, with the emulator doing the DRM instructions a little differently than normal.
    • Another fun thing would be to pipe all the TCPA hardware requests over the network to some central machine. Tying software to a machine this way has never been more futile :)
    • Re:Software DRM (Score:4, Informative)

      by SiliconEntity (448450) * on Wednesday February 02, 2005 @02:48PM (#11553076)
      Since the source is available for Linux, what would stop someone from sandboxing 'trusted' software by having the OS validate code before it's executed (slow, though a bit faster than emulation and without all the bugs), and then implementing the DRM hardware (or BIOS) instructions in software in a way that stores the keys (or the plaintext information, if that is not doable) and allows any software to access the info?

      This is one of the most commonly asked questions about the TPM. The answer is that the TPM implements what is called a "secure boot" sequence.

      The first thing that happens in a TPM enabled computer is that the BIOS, on startup time, sends a hash of itself to the TPM. Then, when the BIOS goes to load the OS, it sends a hash of the boot loader (grub, in the case of Linux) to the TPM. The boot loader will be modified (see the Trusted Grub [prosec.rub.de] project) to take a hash of the OS kernel and send that to the TPM. And the OS itself will be modified (a la tcgLinux [ibm.com] to take a hash of the various OS components, startup scripts, and programs as the computer boots.

      The net result is that the TPM has a record of what OS was booted and what the software configuration is that is running. This allows it to distinguish between a "real" boot and an emulated one, because in the latter case it sees a hash of the emulator being loaded.

      Software which runs in un-emulated mode and uses the TPM features can distinguish that case from when it is running emulated. If it locked some data using the TPM in the first mode, it won't be accessible in the second mode.

      Once remote attestation is possible, networked applications will be able to report their software configuration to each other. This will be unforgeable because the TPM will sign an attestation of the software configuration, and the TPM itself will have a certificate from the manufacturer attesting that it is a legit TPM. Your emulator will not have a certified TPM key (those stay on the chip) and so it won't be able to come up with a credible forged attestation. Programs running on emulators won't be able to take part in network security applications that use these features.
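      The measurement chain described above boils down to one rule, which can be sketched in Python (SHA-1 and 20-byte PCRs follow the TPM 1.x design; the boot-stage names here are hypothetical):

```python
import hashlib

def sha1(data: bytes) -> bytes:
    """SHA-1 digest, the hash used by TPM 1.x PCRs."""
    return hashlib.sha1(data).digest()

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM "extend" operation: new PCR = SHA-1(old PCR || SHA-1(component)).
    # A PCR can never be set directly, only extended, so the final value
    # depends on every measurement made and on the order they were made in.
    return sha1(pcr + sha1(component))

# Measure a hypothetical native boot chain in order
pcr = bytes(20)  # PCRs reset to all zeros at power-on
for stage in (b"BIOS", b"boot loader", b"kernel"):
    pcr = extend(pcr, stage)

# If an emulator is loaded anywhere in the chain, its hash enters the
# PCR and the final fingerprint no longer matches the native boot.
pcr_emu = bytes(20)
for stage in (b"BIOS", b"boot loader", b"emulator", b"kernel"):
    pcr_emu = extend(pcr_emu, stage)
```

This is why the emulation workarounds proposed upthread change the fingerprint: the emulator itself gets measured before it can lie about what it runs.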
      • Your emulator will not have a certified TPM key (those stay on the chip)

        That is why CSS is still not broken for DVD, heh?

        Seriously, pretty much any hardware can be emulated in software, albeit maybe at a slower speed. And when people want it badly enough, they will find a way to extract valid keys, whether by reading them out of the ICs themselves or simply from leaked TCPA documents.
  • TCPA - TCG (Score:4, Informative)

    by SiliconEntity (448450) * on Wednesday February 02, 2005 @02:06PM (#11552469)
    It hasn't been called the Trusted Computing Platform Alliance, TCPA, for a couple of years now. It's now the Trusted Computing Group, TCG. Same technology, just a new name.
  • TPM emulator (Score:2, Informative)

    by nomellames (818773)
    If you want to test the IBM API, but you don't have a Trusted Platform Module, you can try using the kernel module emulator at http://tpm-emulator.berlios.de/index.html [berlios.de]
  • If Gentoo wants to add a TCPA compatibility module, have fun. But absolutely do NOT call it "Trusted Gentoo" when its actual meaning is "Gentoo that doesn't trust YOU".

    Gentoo's public communications guy needs to read some George Lakoff [gracecathedral.org]. It's a wonderful life, folks. Every time you use their words, a devil gets his pitchfork.
  • by SiliconEntity (448450) * on Wednesday February 02, 2005 @02:20PM (#11552649)
    Trusted Computing Group (TCG) technology makes sense in the context of Linux. Microsoft refuses to implement it. They had their own conception, which was Palladium, then NGSCB, then was dropped. So if TCG is going to go forward at all, it has to be with Linux.

    It's kind of ironic, because Ross Anderson's lying Anti-TCPA [cam.ac.uk] FAQ tries to claim that TC exists to kill Linux. And yet it is turning out that Linux is the salvation of Trusted Computing.

    There are a number of research projects in TC on Linux, including TPM Device Driver [ibm.com], Trusted GRUB and Secure GUI [prosec.rub.de], tcgLinux [ibm.com], TCPA Open Source Platforms [crazylinux.net], Enforcer [sourceforge.net], and more. All Linux based.

    Don't believe the FUD about TC. When implemented in Linux using Open Source software, TC gives you new options for securing and expanding the capabilities of your computer.
    • by praedor (218403) on Wednesday February 02, 2005 @03:18PM (#11553508) Homepage

      Hmmm. And yet I don't seem to need any form of TCPA/TCG or DRM. In all the years I've run linux full-time, I have never ever had naughty code or naughty hackers get in. I can't say that about any of the windoze users I know. Beyond that, I certainly don't need any system that can be used as a DRM system.


      Nope. Uh-uh. Not on my box. I'll copy my files and CDs as I feel the need and will not have anyone but me control when and how I go on to use such copies. This all looks like what it is, an attempt by corporations to gain control of the most important and useful aspects of your PERSONAL and private property computer. Screw TCPA/TCG (and DRM). Paint it all up with lipstick and rouge all you want but in the end it is about restricting what people are allowed to do with their own computers. Any benefits that come to the individual computer owner are accidental and peripheral to the actual designed and intended purpose.

  • by Anonymous Coward
    In Soviet Union, your GPL'd software doesn't trust YOU!

    Hmmm. This puts the whole concept of so-called "Trusted Computing" into a realistic, and sad, perspective.
  • Security vs. those who wish to bypass the security, for any reason.
    It's an eternal "arms race".
    True, the TCG chip will raise the bar for running "unauthorized" software, but it might also bring its own downfall. Imagine the chip is implemented and works well for a year or two - i.e. the only ways to defeat the chip are very inefficient (and probably require doing stuff with the hardware that only a few of us geeks would have the skills and guts to do)
    And then, suddenly, somebody has a great idea how to decie
  • by latroM (652152) on Wednesday February 02, 2005 @03:47PM (#11553910) Homepage Journal
    RMS has written a nice article about it: see http://www.gnu.org/philosophy/can-you-trust.html [gnu.org]
  • NO SIGNED CODE (Score:3, Interesting)

    by SiliconEntity (448450) * on Wednesday February 02, 2005 @04:47PM (#11554651)
    I want to try to correct one of the most common and universal misconceptions about Trusted Computing: that it will only allow signed code to run. This is causing enormous confusion here, with people arguing about how that works with the GPL, who would get to sign the code, would users get to sign their own code, etc., etc.

    The truth is that the TCG spec says nothing about signed code. There are no limitations in TCG that keep you from running unsigned code. There is no distinction between "secure" and "insecure" code. You can run anything you like. Signing is a complete red herring in this discussion.

    I am not trying to gloss over problems or paint a false picture. The truth is that TCG does have features whose effects are somewhat like what people are worried about with signed code. The result is that TCG could be helpful for DRM, and it might make it impossible to download music from an online store without running a special application, for example. But this would not be because "you can only run signed code". Rather, it is the server that decides whether it wants to talk to you, not your computer deciding what you can and cannot run.

    What's the difference? Well, if your main concern is being able to run hacked clients that will allow you to violate your user agreements, then there is no difference. You would be right to oppose Trusted Computing. It will make it harder to lie and pretend to honor an agreement, then break your word and go back on your promise.

    But if your main concern is about the GPL and what software you run, there is a big difference. There are no limits on the software you can run. You can hack your Linux kernel to do whatever you want. You can disable "secure" features in the software you run. These privileges don't go away when there is a TPM chip. That should put to rest the concerns about the GPL and hopefully end the discussion about who signs what code.

    If you're wondering how these two points of view can be compatible, you need to learn more about the TCG spec and the TPM chip. In a nutshell, the TPM chip, with the cooperation of the BIOS and OS software, takes a hash or fingerprint of the software configuration as the computer boots. It can then report this fingerprint to remote servers, if client software requests it. These reports are signed with an on-chip TPM key that never leaves the chip; and this chip has a certificate from the computer manufacturer, so no emulator can fake these reports (called remote attestations).

    That's how it works. It's a lot more complicated than refusing to run unsigned code. What it comes down to is that software can report its configuration in a believable and, yes, trustable way. That's the real reason this is called Trusted Computing, not the lie made up by Ross Anderson. It's Trusted because you can Trust the reports from a remote system about what software it is running, and therefore what it will do.
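    A rough Python model of the quote-and-verify exchange described above. HMAC stands in for the TPM's asymmetric attestation key, so server_verify is a simplification of real certificate-chain checking, and the PCR contents are invented for illustration:

```python
import hashlib
import hmac
import secrets

TPM_KEY = secrets.token_bytes(32)  # stands in for the certified on-chip key

def tpm_quote(pcr: bytes, nonce: bytes) -> bytes:
    # The TPM signs (PCR || nonce); the server-supplied nonce prevents
    # replaying a quote recorded from an earlier, honest boot.
    return hmac.new(TPM_KEY, pcr + nonce, hashlib.sha256).digest()

def server_verify(expected_pcr: bytes, nonce: bytes, sig: bytes) -> bool:
    # A real verifier checks an asymmetric signature against the
    # manufacturer-certified public key; this symmetric check is a model.
    return hmac.compare_digest(sig, tpm_quote(expected_pcr, nonce))

nonce = secrets.token_bytes(20)
good_pcr = hashlib.sha1(b"approved software stack").digest()
sig = tpm_quote(good_pcr, nonce)
```

Note where the power sits in this exchange: the client runs whatever it likes, but the server decides which fingerprints it will talk to, which is exactly the distinction drawn above.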
