Friday, December 16, 2005

Trusted Platform Module and Loss of Anonymity on the Internet

Saw this originally at SayUncle, though the topic has been around for a while. I first heard of it at Schneier back in August.

The thing I don't like about the article is that it makes the TPM sound like a foolproof security device. As soon as someone makes that claim, you can be certain that someone else will hack the technology and abuse it in some way.
Already over 20 million PCs worldwide are equipped with a tiny security chip called the Trusted Platform Module, although it is as yet rarely activated. But once merchants and other online services begin to use it, the TPM will do something never before seen on the Internet: provide virtually fool-proof verification that you are who you say you are.
Then there is this:
With a TPM onboard, each time your computer starts, you prove your identity to the machine using something as simple as a PIN number or, preferably, a more secure system such as a fingerprint reader. Then if your bank has TPM software, when you log into their Web site, the bank's site also "reads" the TPM chip in your computer to determine that it's really you. Thus, even if someone steals your username and password, they won't be able to get into your account unless they also use your computer and log in with your fingerprint. (In fact, with TPM, your bank wouldn't even need to ask for your username and password -- it would know you simply by the identification on your machine.)
Yeah, it will prove that anyone with access to the machine and the PIN will have access to the identification. How many people have multiple users on a system? Will the identifier change with the user? What happens if you sell your system?
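
To make that flow concrete, here is a rough sketch of the kind of challenge-response the article is describing. This is my own illustration in Python with a plain software key, not the actual TPM or bank API; on real hardware the private key would be generated inside the TPM, never leave the chip, and only be used after the local PIN or fingerprint check.

    # Conceptual sketch only: a software key stands in for the key the TPM would hold.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # "Machine identity": a key pair bound to the computer, not to any one user.
    machine_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    machine_pub = machine_key.public_key()  # the bank would store this at enrollment

    # Bank side: send a fresh random challenge, so a stolen username/password
    # by itself is useless.
    challenge = os.urandom(32)

    # Client side: sign the challenge -- on real hardware the TPM would only do
    # this after the local PIN or fingerprint check, which is exactly why anyone
    # with the machine and the PIN inherits the "identity".
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = machine_key.sign(challenge, pss, hashes.SHA256())

    # Bank side: verify against the enrolled public key; raises if it doesn't match.
    machine_pub.verify(signature, challenge, pss, hashes.SHA256())
    print("Challenge verified: enrolled machine, not necessarily the enrolled user.")

Note what the check actually proves: that the enrolled machine (and whoever can unlock it) produced the signature. That's why the multiple-user and resale questions matter -- the identity travels with the box.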

The concern about anonymity is partly justified, though.
Ultimately the TPM itself isn't inherently evil or good. It will depend entirely on how it's used, and in that sphere, market and political forces will be more important than technology. Users will still control how much of their identity they wish to reveal -- in fact, for complex technical reasons, the TPM will actually also make truly anonymous connections possible, if that's what both ends of the conversation agree on. And should a media or software company come up with overly Draconian restrictions on how its movies or music or programs can be used, consumers will go elsewhere. (Or worse: Sony overstepped with the DRM on its music CDs recently and is now the target of a dozen or so lawsuits, including ones filed by California and New York.)
Anonymity will come down to both sides agreeing to allow the users to remain anonymous. This controllability is mentioned in a quote in the Schneier blog entry.
Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner's policy.
I suppose the assumption will be that the TCG control is on by default. I'd also make the immediate assumption that Micro$oft will not allow you to use certain software if you don't have it on. That is a quandary for the user. I'd also expect that Micro$oft will make it extremely painful to disable.
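
Whether that opt-in stays meaningful depends on whether you can actually see and turn the thing off. Enabling or disabling the chip is normally done in the machine's BIOS/firmware setup, but as a rough illustration, here is a minimal check of whether the operating system even sees a TPM -- this assumes a Linux kernel that exposes the device under /sys/class/tpm, and the exact layout varies.

    # Minimal presence check; only shows whether the OS sees a TPM device at all.
    from pathlib import Path

    tpm_dir = Path("/sys/class/tpm")
    tpm_devices = list(tpm_dir.glob("tpm*")) if tpm_dir.exists() else []
    if tpm_devices:
        print("TPM device(s) visible to the OS:", ", ".join(d.name for d in tpm_devices))
    else:
        print("No TPM visible -- absent, disabled in firmware, or just not exposed here.")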

Schneier also points out that the standards document clearly takes a stand against "coercive use" of the technology, which is pretty much what I expect from implementations by companies like Micro$oft.
I like that the document clearly states that coercive use of the technology -- forcing people to use digital rights management systems, for example -- is inappropriate:
The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.
And to my point about Micro$oft:
But there's something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn't apply to Vista (formerly known as Longhorn), Microsoft's next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

So, the new Micro$oft O/S should be viewed very skeptically by anyone concerned about privacy.

Interesting technology, but I'd remain concerned about abuse of such systems.
