The Future of Trusted Linux Computing
ttttt writes "MadPenguin.org tackles the idea of Trusted Computing in its latest column. According to author Matt Hartley, the idea of TC is quite reasonable; offering a locked-down environment offers several advantages to system administrators with possibly troublesome users. 'With the absence of proprietary code in the mix users will find themselves more inclined to trust their own administrators to make the best choices ... And so long as any controlled environment is left with checks and balances [like] the option for withdrawal should a school or business wish to opt out, then more power to those who want a closed off TC in an open source world.'" LWN.net has an older but slightly more balanced look at the TC approach.
If the owner controls all the keys, it's fine (Score:5, Informative)
Trusted Computing is only bad if the owner of the hardware does not have control over the software on the machine, the hardware keys etc.
Re: (Score:2)
I would say that the owner should be allowed to do anything he likes provided that he cannot fake the keychain.
Example: in a pre-baked trusted environment, when accessing resource A I sign with a chain which shows that the access is done by me, through software X on kernel Y and hardware Z.
I should not be allowed to fake kernel Y, but there should be nothing to prevent me from installing an alternative signed kernel Y1. Similarly, I should be able to run Y on Z1 or X1 on Y as long as the chain is correctly r
The user should be able to swap kernels... (Score:2)
Re:Excuse me but how do I get it signed? (Score:4, Interesting)
Excuse me, but how exactly do I get the Linux kernel I compiled myself signed?
Self-sign it. It is not the fact that it is signed, it is who signed it that matters. From there on, an access request goes down the chain with everyone signing it. The access control for A may like your self-signed kernel. Similarly, it may not, and it will invalidate everything down from it as untrusted. That is the choice of A's "owner".
And if you are talking about DRM for media, forget it, it is not here to stay.
You have mistaken me for someone who gives a fuck about signed MP3s. Now, a document sitting on a corporate CMS, encrypted individually on every release and with an associated cert chain for each revision, is something I do care about. A lot. A lost laptop in this case no longer means stolen data. The entire problem of document access control also more or less goes away. Same for revision and change control. While it is a hassle, it solves quite a few real world problems.
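The per-revision cert chain idea above can be sketched as a tamper-evident revision log. This is a toy model, not the poster's actual CMS: an HMAC stands in for the per-revision certificate signatures, and the signing key (which a real deployment would keep inside an HSM) is an invented constant.

```python
import hashlib
import hmac

# Invented stand-in for an HSM-held signing key.
SIGNING_KEY = b"corporate-hsm-key"

def add_revision(chain, content: bytes) -> None:
    """Append a revision record whose MAC covers the content hash and the previous record."""
    prev = chain[-1]["mac"] if chain else b"\x00" * 32
    content_hash = hashlib.sha256(content).digest()
    mac = hmac.new(SIGNING_KEY, prev + content_hash, hashlib.sha256).digest()
    chain.append({"content_hash": content_hash, "prev": prev, "mac": mac})

def verify_chain(chain) -> bool:
    """Walk the chain; any altered revision or broken link fails verification."""
    prev = b"\x00" * 32
    for rec in chain:
        expected = hmac.new(SIGNING_KEY, prev + rec["content_hash"], hashlib.sha256).digest()
        if rec["prev"] != prev or not hmac.compare_digest(expected, rec["mac"]):
            return False
        prev = rec["mac"]
    return True

chain = []
add_revision(chain, b"draft 1")
add_revision(chain, b"draft 2")
assert verify_chain(chain)
# Tampering with any revision breaks the chain.
chain[0]["content_hash"] = hashlib.sha256(b"tampered").digest()
assert not verify_chain(chain)
```

Because every record binds to the one before it, silently rewriting an old revision invalidates everything after it, which is the revision-control property the comment is after.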
Re: (Score:2)
However, the underlying purpose of Palladium, which was misleadingly renamed to "Trusted Computing", is DRM. It's designed very specifically to prevent access to data files without signed software, even signed software authenticated against the local hardware. This is aimed squarely at controlling video, audio, and registered software access suc
Re: (Score:2)
Re: (Score:2)
It's not always bad even then. It depends who the owner of the machine is. If the owner is someone who is easy to socially engineer (90% of users, I'm sure -- Come look at the dancing bears!), then a behemoth corporation is in effect the system administrator for all those people, and locking down machines by allowing only signed applications can make sense. Most people
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Informative)
The only problem is that the whole point of Trusted Computing is to keep the keys used to attest to the state of the PCR completely unavailable to the user. Read the spec: https://www.trustedcomputinggroup.org/specs/TPM/ [trustedcom...ggroup.org]
Re: (Score:2)
There is a lot in TC that is good. You could potentially eliminate viruses, for example, with a hardware-backed chain of trust. The issue is that the chain of trust should lead back to the computer owner - not the computer manufacturer.
Re: (Score:2)
It may be useful against tools that corrupt virus checkers, but viruses and vulnerabilities come out so fast in basic software and protocols that this is only of limited usefulness. And that "chain of trust", as Palladium is designed, leads right back to Microsoft, who can be expected to have already handed over keys to the NSA or other federal authoritie
Re: (Score:2)
There is nothing wrong with hardware assisted security if the owner controls all the keys
There is nothing wrong with hardware assisted security if the owner is allowed to know all his keys.
Knowing his keys then provides full control of them.
The specification explicitly forbids the owner to know his own ma
Re: (Score:2)
The current "trusted" computing solutions would restrict the administrator too, because the system trusts some key-issuing authority instead of its legitimate owner.
This isn't correct.
The only use of the third party-issued certificate is for remote attestation, where the computer proves that it has a trusted computing module. You can use that capability to build highly-secure remote control, but that's entirely a function of what application software you layer on top, it's not inherent in TC.
Re: (Score:2)
I'd advocate that anybody who buys a computer should be given:
1. A dump of all the keys embedded in their system.
2. A dump of any private keys associated with any public keys embedded in their system.
What they do with them is up to the owner. If they want to download additional keys (such as root CA certificates) they should be we
Re: (Score:2)
Remote attestation should be able to be defeated by any system owner.
It can be. Simply don't install application software that sends attestation messages. If what you think you want is to be able to falsify an attestation, well, that would make the whole thing completely useless. If it were possible to falsify an attestation message, then sysadmins wouldn't be able to remotely verify the software their systems are running, which is the goal of the TC committee members like IBM and Intel (who don't care about DRM but do care about security).
I'd advocate that anybody who buys a computer should be given:
1. A dump of all the keys embedded in their system.
2. A dump of any private keys associated with any public keys embedded in their system.
That would make the TPM almo
Re: (Score:2)
Note that I said any SYSTEM OWNER - not anybody who happens to log into the computer.
Sysadmins can rest assured their systems are running just fine. They'd be the only ones able to falsify attestation messages on their own PCs.
It's much better to b
Re: (Score:2)
Sysadmins can rest assured their systems are running just fine. They'd be the only ones able to falsify attestation messages on their own PCs.
Assuming no one has taken control of their machine -- which is the whole point of attestation; having a way to verify that no one has tampered with your box, and to be able to do so remotely.
I'm not convinced of this. Key escrow and management is generally considered an important part of any encryption solution.
This does not contradict what I said. No competent key management or key escrow system exposes keys in normal operation. They're always protected by another HSM's private key, which never exists outside the device in any form. Only when it is genuinely necessary to recover a key will some escrow systems decrypt and reveal it.
Re: (Score:2)
How is a PAPER copy of the machine's private key in the box the machine came in going to help a hacker who has taken control of the machine defeat attestation? The sysadmin would lock it up in a safe - without the piece of paper the mechanism could not be defeated. The goal is to allow a computer owner to defeat the m
Re: (Score:2)
How is a PAPER copy of the machine's private key in the box the machine came in going to help a hacker who has taken control of the machine defeat attestation?
How did the key get on the paper? How did the paper get to you? How do you know that it wasn't compromised long before the machine was delivered?
The goal is to allow a computer owner to defeat the mechanism if they feel the need, while still allowing the full utility of attestation TO THE MACHINE OWNER.
If you think that's possible, you're insufficiently paranoid.
In any case, there's no need to provide keys paper to eliminate any concern that others could use the key. As I've already pointed out, you can simply instruct your TPM to generate a new attestation key. The TPM will only give you the public key, but that's fine because you don't need the privat
Re: (Score:2)
Ie, the ability for your stock brokerage to turn away customers using anything other than a few flavors of windows with their preferred web browser. For remote attestation to be of any use companies are forced to pick particular software vendors that they want to prefer. Within an organization th
Re: (Score:2)
Re: (Score:2)
Secondly, regardless of whether or not microsoft would do such a thing, even if
Re: (Score:2)
Sure they can. Just ask for attestation that the Super-DRM-windows OS is running. Then connect to the port used by Super-DRM-windows and ask for an attestation that IE owns the outbound connection that is hitting your webserver. Super-DRM-windows won't let any other softwar
Re: (Score:2)
Nobody wants to stop technology. We just want to make sure it is properly used. The simplest solution is to require that every computer include a printed copy of any and all keys embedded in it, and ideally any private keys associated with
Re: (Score:2)
This one can't happen. No one is policing *your* computer, the worst that could theoretically happen is someone *else* will deny service to you if you're not running the right software. If you have the word file, no one can stop you from reading it.
Re: (Score:2)
Have you read about Palladium? Software will be able to tell the OS to keep a file in "protected storage" where other programs can't read it. Sure, you have the file, but it is encrypted. Most of the OS partition will also be encrypted. The decryption key wil
Huh? (Score:2, Insightful)
Proof of this statement?
Trusted Computing is by definition closed. (Score:5, Insightful)
Trusted Computing requires trusting the CPU manufacturer in the first place. And in this world, where the telcos disclosed our conversations to the government and we only found out several years later, can we really trust that the government hasn't pressured the CPU makers to add a backdoor here and there?
Trusted Computing is practically closed, and incompatible with the spirit of Open Source/Free Software. Ergo, Trusted Computing cannot be trusted. Sorry.
Re: (Score:2)
Re: (Score:2)
In principle, there is nothing wrong with TC, so long as the owner of the PC has the private keys. But this scenario is
Re: (Score:2)
Trusted Computing requires trusting the CPU manufacturer in the first place.
Actually, TC has almost nothing to do with the CPU. The TC Trusted Platform Module (TPM) is a separate device that is just another peripheral. Most implementations sit on the LPC bus on the motherboard.
Trusted Computing is practically closed, and incompatible with the spirit of Open Source/Free Software. Ergo, Trusted Computing cannot be trusted. Sorry.
Not true. TC is an open specification, and can be used to implement all sorts of different security policies. The TPM is just a peripheral that provides three services:
Re: (Score:2)
A backdoor like that would almost certainly be discovered. And suppose we can't simply choose a competitor's chip because they're all in on one big conspiracy:
Making CPUs isn't that hard. It's making them the fastest and the cheapest that's hard. There are open source processor designs available, like the LEON core [wikipedia.org]. There are lots of producers of FPGAs [wikipedia.org]
please try to hold back the propaganda (Score:3, Insightful)
Sorry, but I think that's putting your words into everyone else's mouths. Or fingertips, or whatever. The vast majority not only don't have this opinion about open vs proprietary code affecting how much they trust the choices their admins make, they also wouldn't have a freakin' clue as to what you're going on about in that sentence. The vast majority don't know what open-source is, how it differs from proprietary source, they don't know any reason why they'd care either way, and they'd probably give you a pretty funny look for attributing this philosophy to them.
I like Linux and open-source, and have an appreciation for it. But I don't trust my admin at work more when he talks about Linux than when he's talking about Solaris. It's his job to make the best choices of any and all products available, and I trust him to choose whichever is most appropriate for our company, even if he feels that happens to be a proprietary product. It's not my place to impose on him to only ever choose open-source, and there are cases in our work where open-source offerings are less ideal.
Deception (Score:4, Insightful)
In short, this article aims to lure the unwary into gullible acceptance of TC with an illusion built from deceitfully presented and impractical applications (no one except the mega-corps will ever get access to the main TPM keys).
definition of trust - which do you prefer? (Score:2)
the first definition basically boils down to "we don't trust users" - and is the version of trusted computing that you're describing.
the second definition basically says "we want users to be able to trust their computers and be able to do what they want without worrying".
it should be fairly obvious which definition that a linux-based, free-software-backed distribution will go for, especially with the backing and qu
Re: (Score:2)
Except of course that it is a red herring. The first definition is the only one on which all of the proposed mass-produced commercial TPM designs are based. Most of these designs also include select, castrated elements of the second definition, as a bone to thr
Re: (Score:2)
Also I forgot to add:
You gotta be kidding, right?
Police are the last people on earth who want users of personal computers to be able to detect intrusions. They are, especially the newfangled "Father ... err ... Homeland Security" types, one of the groups most eager to use trojans, keyloggers and the like in their pursuit of "criminals".
Re: (Score:2)
Re: (Score:2)
I am not saying that they will succeed. I am telling you what the plan, and the purpose (as de
Re: (Score:2)
Sure, it will happen but it will (unless the TPM makers are total dolts) involve electron microscopes or some other wacky hardware which very, very few people have. We are talking about a hardware hack with a high level of difficulty, which could crimp our style for some while a
How is this any different... (Score:2)
Two step ISP's (Score:2)
Re: (Score:2)
Not so useful, exploitable, and bad people like it (Score:3, Interesting)
If you have applications that you need to secure, in order to prevent, for example, misuse of tax filings or medical records, you can do it using Web applications, or other thin client technologies combined with physical security of client computers. There is nothing that can guarantee stopping someone copying data manually from a screen display and smuggling it out of an office, so there are practical limits to securing data beyond which additional technology is pointless.
There are some theoretical cases where trusted computing could benefit individuals. But, in practice, it's all about someone else trusting your hardware to rat you out. Most of the money flowing in to trusted computing comes from those kinds of uses. "Trusted computing" has rightly earned distrust.
Why Overlook The Cool Features (Score:2)
It would also be useful for implementing true ecash schemes and for enabling true P2P-based virtual worlds/games with safeguards against cheating.
In short, the technology offers a lot more promise than mere security.
Re: (Score:3, Interesting)
Open vs Closed Trusted Computing (Score:5, Interesting)
Perhaps to try to remedy the confusion, we can distinguish between TC as proposed by the Trusted Computing Group [trustedcom...ggroup.org] and other forms of TC. The TCG is an industry consortium with Microsoft, Intel, HP etc., dating back several years, originally called TCPA. Their proposal has always been controversial but IMO misunderstood.
TCG's flavor of TC is fundamentally open. I would call it Open Trusted Computing, OTC. It does not lock down your computer or try to prevent anything from running. It most emphatically does NOT "only run signed code" despite what has been falsely claimed for years. What it does do is allow the computer to provide trustworthy, reliable reports about the software that is running. These reports (called "attestations") might indicate a hash of the software, or perhaps a key that signed the software, or perhaps other properties or characteristics of the software, such as that it is sandboxed. All these details are left up to the OS, and that part of the technology is still in development.
Open Trusted Computing runs any software you like, but gives the software the ability to make these attestations that are cryptographically signed by a hardware-protected key and which cannot be forged. Bogus software can't masquerade as something other than it is. Virus-infected software can't claim to be clean. Hacked software can't claim to be the original. You have trustworthy identification of software and/or its properties. This allows you to do many things that readers might consider either good or bad. You could vote online and the vote server could make sure your voting client wasn't infected. You can play online games and make sure the peers are not running cheat programs. And yes, the iTunes Music Store could make sure it was only downloading to a legitimate iTunes client that would follow the DRM rules. It's good and bad, but the point is that it is open and you can still use your computer for whatever you want.
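A minimal sketch of the attestation idea described above: the "quote" binds a measurement of the software to a verifier-chosen nonce under a key the hardware never releases. This is an illustration, not the TCG protocol; an HMAC stands in for the TPM's asymmetric signature, and all names and values are invented.

```python
import hashlib
import hmac

# Invented stand-in for the TPM's attestation key; in real hardware the
# private half never leaves the chip.
TPM_KEY = b"hardware-protected-attestation-key"

def attest(software_image: bytes, nonce: bytes) -> bytes:
    """Produce a 'quote': a MAC over (software measurement, verifier nonce)."""
    measurement = hashlib.sha256(software_image).digest()
    return hmac.new(TPM_KEY, measurement + nonce, hashlib.sha256).digest()

def verify_quote(expected_hash: bytes, quote: bytes, nonce: bytes) -> bool:
    """Server side: check the quote matches the software it expects."""
    expected = hmac.new(TPM_KEY, expected_hash + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote)

client = b"voting-client-1.0"
nonce = b"fresh-server-nonce"
quote = attest(client, nonce)
assert verify_quote(hashlib.sha256(client).digest(), quote, nonce)
# A hacked client produces a different measurement and fails verification.
assert not verify_quote(hashlib.sha256(client).digest(),
                        attest(b"hacked-client", nonce), nonce)
```

The nonce is what makes a quote fresh: a recorded quote from a clean machine can't be replayed later, because the verifier picks a new nonce each time.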
This is in contrast to some other projects which may or may not call themselves TC but which are focused on locking down the computer and limiting what you can run. The most familiar example is cell phones. They're actually computers but you generally can't run whatever you want. The iPhone is the most recent controversial example. Now they are going to relax the rules but apparently it will still only run signed software. This new "Trusted Computing Project" is the same idea, it will limit what software can run. Rumors claim that the next version of Apple's OS X will also have some features along these lines, that code which is not signed may have to run in sandboxes and have restrictions.
This general approach I would call Closed Trusted Computing, CTC. It has many problematic aspects, most generally that the manufacturer and not the user decides which software to trust. Your system comes with a list of built-in keys that limit what software can be installed and run with full privileges. At best you can install more software but it is not a first-class citizen of your computer and runs with limitations. Closed Trusted Computing takes decisions out of your hands.
But Open Trusted Computing as defined by the TCG is different. It lets you run any software you want and makes all of its functionality equally available to anyone. P2P software, open-source software, anything can take full advantage of its functionality. You could even have a fully open-source DRM implementation that used OTC technology: DRM code that you could even compile and build yourself and use to download high-value content. You would not be able to steal content downloaded by software you had built yourself. And you could be sure there were no back doors,
Open Trusted Computing = Treasonable Computing (Score:2)
This piece of propaganda that you are spouting is indeed 'Interesting' and 'Insightful' in how clever it is.
You are right that TC only provides a signature which cannot be forged. But if you, the user, cannot forge the signature over what your computer runs, then anyone can write software that does X and Y and Z only, ONLY, when you provide signed data to them, and won't work if you don't ...
And thats the point! That is exactly what everyone will immediately do - the banks, t
Re:Open vs Closed it's all KOOLAID (Score:2)
Or a third party has the keys, in which case I am no longer in control of my machine. It is not "My machine" anymore. I can no longer compile and run my own software. I can only run what my drm masters deem "trustworthy".
Re: (Score:2)
You could even have a fully open-source DRM implementation that used OTC technology: DRM code that you could even compile and build yourself and use to download high-value content. You would not be able to steal content downloaded by software you had built yourself.
This makes no sense to me. Am I missing something? What prevents me from editing the source code and piping the output to my favorite encoder? I can't imagine any technique that is not trivial to hack.
Yes, you're missing the ability that a TPM has to bind a secret (like a DRM decryption key) to a specific boot state. The key can be protected so that it is only available when a particular set of software is running. If you modify the code, then you change the hash of the software, and it can't get the key (actually -- it can ask, and it'll get a response, but what it gets will be garbage).
Actually doing this on a regular OS is really complicated, to the point of being practically impossible. Howeve
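The sealing behaviour described above can be simulated in a few lines. This is a toy model of TPM 1.2-style PCR extend and unseal, with invented component images; a real TPM performs the comparison and key release inside the chip, not in software.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style PCR extend: new PCR = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

def measured_boot(components) -> bytes:
    """Hash each boot component into the PCR in order; PCRs start zeroed at reset."""
    pcr = b"\x00" * 20
    for image in components:
        pcr = extend(pcr, hashlib.sha1(image).digest())
    return pcr

def unseal(sealed_key: bytes, current_pcr: bytes, sealed_to_pcr: bytes) -> bytes:
    # The real key is released only when the platform state matches; otherwise
    # the caller gets useless bytes ("it can ask, and it'll get a response,
    # but what it gets will be garbage").
    if current_pcr == sealed_to_pcr:
        return sealed_key
    return hashlib.sha1(b"garbage" + current_pcr).digest()

KEY = b"content-decryption-k"
good_pcr = measured_boot([b"bios", b"bootloader", b"kernel"])
bad_pcr = measured_boot([b"bios", b"bootloader", b"patched-kernel"])
assert unseal(KEY, good_pcr, good_pcr) == KEY
assert unseal(KEY, bad_pcr, good_pcr) != KEY
```

Because each extend folds the previous PCR value into the next, changing any component anywhere in the chain changes the final PCR, and the sealed key becomes unreachable.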
Re: (Score:2)
Either the DRM will be trivial to defeat, or some critical parts of the hardware/software are not actually open. (And I don't mean just the key in the TPM chip.)
Mmm, no. All of the very strongest security is designed and implemented in completely open fashion. You can have complete design specifications of your TPM, but, if it's really implemented properly, that doesn't mean you can break it.[*] It's somewhat similar to the reason that cryptographers and security experts (like me, BTW -- that's my day job) never trust secret ciphers. When I design a system, my goal is to build it so well that even someone with perfect knowledge of the system can't break it.
Re: (Score:2)
Mmm, no.
The correct answer to that post was "Yes, right, critical parts of the hardware/software are not actually open.".
I am not disputing the technical accuracy of the rest of your explanation. The only conflict here is that you ignored that he spent a chunk of his post explaining what he meant by "open". In the context of his post, as he used a
Re: (Score:2)
The only conflict here is that you ignored that he spent a chunk of his post explaining what he meant by "open".
I disagree. All of the code in question can be open, in the sense that you can look at it, modify it, redistribute it and use it without limitation. All of the hardware in question can be open in the sense that you can see exactly how it works (up to and including detailed schematics), have full details of the interfaces it provides and be able to use it in any way you like -- basically, everything except modify it, which is normal since you rarely (never?) expect to be able to modify your hardware.
An
Re: (Score:2)
Correct. If you alter any of the software in the chain - from BIOS on through to the application - if you change anything then the Trusted Computing system denies access to your files... unless those modifications have been explicitly reviewed and approved and applied through a pre-approved update mechanism.
For example iTunes DRM sof
Re: (Score:2)
What is new in Open Trusted Computing is that remote servers can verify what the hash is of the software that is running, and it can't be spoofed. That means for example you could have a voting client that connects to the voting server, and the server can make sure the clie
Suppose I own a company... (Score:2)
Re: (Score:2)
I'm a programmer and I have read the 300+ page technical specs from cover to cover.
"Virus-infected software can't claim to be clean. Hacked software can't claim to be the original", how exactly is this accomplished?
I will leave out some of the multi-layered technical details, but if anything the explanation below is likely much more detailed than you wanted, and it is adequate to demonstrate the general principles that make it possi
There is no good trusted computing (Score:2)
I want to run OPEN SOURCE software because I can re-compile it, because I can change it, because I can fix that bug that no one else will fix, since I am one of three people in the whole world who ever sees it. When I re-compile my kernel to fix that bug, because I am sick to death that my laptop crashes every time I visit my bank's site, I re-compile it with the fix (or with any other change I like). Trusted computing
Re: (Score:2)
You were saying that if I got the Master Key I could override any part, but if I do override it then once again my machine is not trusted and I become a second-class citizen on the Internet
If you have your master key they can't tell whether you override.
If you open your computer and physically extract your key, then they will think you have a Trusted Computer. If you use it to override, they won't know. You'd be able to get to your bank's website.
Of course this would mean that ea
A good read... (Score:2)
The main problem I have with TC is the fact that it removes control over the hardware from the user and gives it to a 3rd party entity.
When I purchase hardware, I expect to have full control over its capabilities. If the hardware is capable of doing something, I should be able to do it. There's something a bit eerie about giving your computer a command/instruction and having it come back and tell you it could do it, but that it wo
Just in case: Trusted Computing film (Score:2)
Trusted Computing versus DRM: Notary in a box (Score:2)
Did you know that the TCG/TCPA specifications create a technical definition of the "owner" of a device? It could be the manufacturer, the reseller, a sysadmin, a user, or someone the user loans the machine to. It all depends on who "takes ownership" (also technically defined in the spec) first. The "owner", in this sense, is the one who gets to specify which signing keys are needed to sign code that the owner wants to allow to run. This can include vendor keys, and even a user's own signing key.
Whether TC is
Re:But Linux is already trusted. (Score:5, Insightful)
Did you even read the summary? Or were you just going for first post?
This is about locking down the workstation so that users can't monkey around. I do not care how well the code is written, a malicious user can create a security issue if he/she has the ability to do so.
Re: (Score:2)
Any argument for the implementation of so-called 'trusted computing' is either inherently evil or incredibly stupid.
This is an incredibly naive and uninformed view on trusted computing. I was hesitant about trusted computing until I learned more about it. I have a TPM in my computer and it has a lot of good uses. Storing encryption keys in a tamper-proof chip is an excellent security enhancement. Software storage of keys is much more likely to be cracked. Also I can encrypt my entire drive or indivi
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
And (Score:2)
How is this redundant? (Score:2)
Re: (Score:2)
Sorry, but you can try to recognize patterns in anything, including the patterns found in compilers. It is not always easy, but hard isn't the same as impossible. As soon as you can recognize those patterns, you can write the trojan described.
Re: (Score:2)
Good luck with that.
bypassing Thompson's trojan is simple (Score:2)
In Thompson's case, he had it scan the source for recognizable text.
Defeat the "am I compiling a compiler?" test of the compiler binary and you are done.
All you need is a source code obfuscator. Randomize variable/function/file names, and insert red-herring calling sequences and recompile the source to the compiler to obtain a non-bugged compiler binary.
Writing a source code obfuscator (capable of defeating
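As a rough illustration of the renaming step, here is a toy identifier randomizer. It only rewrites a caller-supplied list of names using word-boundary regexes; a real obfuscator capable of defeating a compiler-recognition heuristic would have to parse the language properly and scramble structure (calling sequences, ordering), not just names.

```python
import random
import re
import string

def randomize_identifiers(source: str, names) -> str:
    """Rewrite each identifier in `names` to a fresh random name (toy sketch)."""
    mapping = {
        n: "v_" + "".join(random.choices(string.ascii_lowercase, k=8))
        for n in names
    }
    # Word-boundary match so renaming 'count' does not clobber 'counter'.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, names)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(0)], source)

src = "int count = 0; count = count + step;"
out = randomize_identifiers(src, ["count", "step"])
# The original identifiers no longer appear as whole tokens.
assert re.search(r"\b(count|step)\b", out) is None
```

Even this crude pass defeats Thompson's original trick of scanning the source for recognizable text, which is why the trojan-detection side of the arms race is generally considered harder.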
Re: (Score:2)
I'm not saying it's easy, I'm just saying it's possible.
Re: (Score:2)
Which activity, though, is easier to do? I don't know how to prove it, but I think obfuscation is far easier than detection.
As the Anonymous Coward replying to me pointed out, writing a program that can always detect when another program is a compiler is as hard as detecting wh
Re: (Score:2)
100% success may be virtually impossible, but 90% is probably significantly easier, and nearly as dangerous.
Re: (Score:2)
In an open source world, the defenders already have the upper hand against the attackers, because compilers like gcc are being modified so much that whatever static structure the trojan is keying off of can only last so long before it is re-written, defeating it.
My suggestion is for an additional measure that would give the defenders an even bigger advantage.
Re: (Score:2)
(I don't know if the GP meant his or her post to be a direct attack on the frequent comment that "well, you have the source and can inspect it, after all", but if he or she did, congrats.)
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
a sufficiently motivated [slur redacted] could painstakingly review the machine code
Was that really necessary ?
Re:O RLY? (Score:4, Insightful)
Re: (Score:2)
Re: (Score:3, Informative)
You do not understand trusted computing. It is not about locking down your system.
It is a common fallacy that the primary goal of trusted computing is to enable DRM so the movie studios/RIAA controls your computer. This is simply not true. Trusted computing provides methods by which you, the owner and administrator of your computer, can KNOW, by having a chain of trust that is anchored by keys securely stored on a TPM chip soldered to the motherboard, that the software and hardware in your system has
Re: (Score:2)
I'm thinking about it, and I don't like it. I can do all my ecommerce today with a free and open system. If my bank demanded I had my OS/browser signed by some certificate authority I couldn't do that. I can't think of any use of this technology that doesn't hurt the software hobbyist.
Re: (Score:2)
The fundamental argument is not whether good or bad policies are possible, but about freedom and whether you have control over your own computer. If doing e-commerce, can I program my computer
Re: (Score:2)
If doing e-commerce, can I program my computer to lie and send back a response saying it is not tampered with even when I have changed the software? If I cannot do this, then I no longer have control over the computer and it is no longer my computer.
If you *CAN* do what you describe, then your system cannot and should not be trusted in a trusted computing transaction. Providing a provable, secure chain of trust is the fundamental reason for having a TC base. If you can arbitrarily corrupt this chain by "programming your computer to lie", then all bets are off and the trust model is irrevocably broken.
Perhaps the e-commerce use case is not the best example. Perhaps TC will never be acceptable on personal computers for general purpose uses. How
Re: (Score:2)
Actually that is precisely the functional design target of Trusted Computing, as the following will demonstrate.
Trusted computing provides methods by which you, the owner and administrator of your computer, can KNOW, by having a chain of trust that is anchored by keys securely stored on a TPM chip soldered to the motherboard, that the software and hardware in your system has no
MOD PARENT UP (Score:2)
Re: (Score:3, Informative)
Trusted computing is only a problem when YOU are not the owner of the machine and don't have the full control over the TPM module on a new computer (of course, once TPM is set up - it shouldn't be possible to change it without owner's keys).
Re: (Score:2)
i.e. when you're using services over a network. What happens when microsoft pushes their TPM out and people get used to serving pages only to trusted peers? You thought "this site only works in IE" was bad? Try "this site is cryptographically impossible to read without a full trusted IE/windows system" And it's done all in the name of security.
Re: (Score:2)
But personally, I'd like to have the same capability to be sure my system is not tampered with by the NSA when they examine my laptop at an airport.
I'm completely new to this TCM thing... (Score:2)
How does it work? How will it affect my machine if enabled (i.e. will I notice?)? Could an OEM (I hear Microsoft is distributing PCs nowadays) theoretically set up the TPM to lock down a system pre-purchase? What happens when the TPM blocks something/notices a different checksum?
Re:I'm completely new to this TCM thing... (Score:4, Insightful)
The kernel is signed and the hardware bootloader checks that the signature is valid (using the TPM). So we can at least guarantee that the system is in a consistent state during kernel loading. Later we can use numerous methods to control kernel integrity (SELinux, AppArmor, etc.).
Theoretically, Microsoft can make you use the TPM to validate their kernel during booting (because a tainted kernel can be used to circumvent DRM).
So we just need to be able to turn off the TPM chip if it's not required.
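The bootloader check described in this comment amounts to comparing the kernel's measurement against values the owner has enrolled. A minimal sketch, with invented images and a plain hash allow-list standing in for real signature verification:

```python
import hashlib

# Hypothetical allow-list the owner provisions into the boot chain.
# Note the owner can enrol their own self-built kernel alongside the vendor's.
TRUSTED_KERNEL_HASHES = {
    hashlib.sha256(b"vendor-kernel-5.10").hexdigest(),
    hashlib.sha256(b"self-built-kernel-5.10-patched").hexdigest(),
}

def bootloader_check(kernel_image: bytes) -> bool:
    """Refuse to hand off control unless the kernel's hash is enrolled."""
    return hashlib.sha256(kernel_image).hexdigest() in TRUSTED_KERNEL_HASHES

assert bootloader_check(b"vendor-kernel-5.10")
assert bootloader_check(b"self-built-kernel-5.10-patched")
assert not bootloader_check(b"rootkit-kernel")
```

The policy question in this whole thread is simply who gets to write entries into that allow-list: the machine's owner, or only the manufacturer.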
Re: (Score:2)
Trusted computing is only a problem when YOU are not the owner of the machine and don't have the full control over the TPM module
You mean like, all the time? Because you'll never know the TPM root key, so if there's any TPM'd operating system/application/content you'd like to use, there's no off switch. For building a secure network you just need things signed with your private key telling your master computer, which trusts your key. There's absolutely no need to build any PKI. Instead we got a global "trusted" root that makes sure the software can trust the host, not that the host can trust the software. It's the ultimate in usage
Re: (Score:2)
The goal of TPM is to build a secure HOST. I.e. the one which I can trust to be secure during all stages (for example, TPM can guarantee that a malicious hacker has not installed a backdoor into my kernel).
No, you don't (Score:2)
Re: (Score:2)
Of course, phone manufacturers might also use TPM for Tivoisation. But it's far easier just to use a simple signed first stage bootloader for the same effect.
Re: (Score:2)
And yeah, Trusted Computing is about not trusting the user. Don't you think these companies are going to get together and say 'We know what is best for you' at some later date, once we're all stuck in the Trusted Computing format, and lock us all down? Kiss Open Source goodbye, because someone will make the argument that Linux can't be trusted because it's Open Source, and a PHB at one of these hardware companies will (stupidly) agree.
Re: (Score:2)
I'm having trouble understanding what you mean by "software freedom". Computers are provided by employers to manage tasks and handle data related to your function within the organization. Where exactly does your freedom come into play there? And what does free software do there that "Windoze" doesn't?