A couple of days ago I came across The Digital Imprimatur, an article from 2003 warning about the dangers of restoring user identity on the internet. Not realizing it was nearly ten years old, I grew seriously concerned about the possibility of requiring every user to be authenticated. But then I sat down and thought about the technology of it, as well as the economics.
FUD had clouded my thinking. Here is what the author of AutoCAD, who wrote that document in 2003, was missing: both users and networks have a choice. Once again, the solution is decentralization. And that is largely what happened — in nature, as in human affairs, centralization is very hard to maintain.
Here are my positions:
- For one thing, I like that there is a dichotomy of users and servers. For many things, this is important. But I would rather say that there are users, and there are networks.
- I don’t like anonymity for everything, because it has serious drawbacks (spam, unlimited account creation, illegal trafficking, etc.)
- But at the same time I don’t like the possibilities that arise from everyone being forced to use some officially issued certificate.
And here are the conclusions I arrived at:
1. Eliminating Spam: Any network that is concerned about user-account spam simply needs to tie accounts to something expensive (e.g. a cell phone line that can receive SMS). But it doesn’t have to be traceable — for example, it can be tied to bitcoins or some other currency based on solving difficult mathematical problems with a finite solution space. The anonymity of the account’s owner can still be preserved while eliminating spam.
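One way to make an account "expensive" without making it traceable is a hashcash-style stamp, minted by burning CPU time. The sketch below uses only Python’s standard library; the function names, account id, and difficulty are my own hypothetical choices, but the principle (finding a partial hash preimage that is costly to produce and cheap to verify) is the same one Bitcoin uses at far greater difficulty.

```python
import hashlib
from itertools import count

def mint_stamp(account_id: str, difficulty_bits: int = 16) -> int:
    """Search for a nonce whose hash with the account id has the required
    number of leading zero bits: expensive to mint, cheap to verify."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{account_id}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_stamp(account_id: str, nonce: int, difficulty_bits: int = 16) -> bool:
    """One hash suffices to check a stamp that took ~2**difficulty_bits hashes to mint."""
    digest = hashlib.sha256(f"{account_id}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Raising `difficulty_bits` makes bulk account creation proportionally more expensive while leaving a single legitimate signup cheap, and nothing in the stamp identifies the owner.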
2. Reputations: A user can still create fake accounts (e.g. for the purposes of anonymity), but each account will have a reputation and be traceable throughout the network where the account exists. So the cost to this user of ruining their reputation (by trolling, or being dishonest, or a myriad of other drawbacks of untraceability) would rise the more the user invested into their account.
** As an aside: here we should have law enforcement for demonstrable breaches of privacy and security policies. Notice that privacy and security are closely tied to identity. For example, Apple and Amazon recently had major security problems stemming from their policies about identity … I say we need law enforcement, rather than merely some anarchist notion of reputations, because small, fly-by-night companies may not care about their reputation and may violate their privacy policies more frequently than large corporations like Apple. **
3. Certificates: It is the networks that should have certificates, so the users know whom they are connecting to.
Any network could obtain its certificate from an agency that the USERS TRUST. This is already happening with e-commerce. It doesn’t have to be a government, necessarily. At the end of the day, though, the more people trust the agency that issues the certificate, the more people will trust the certificate. Networks such as Google that become well-known enough can issue their own identity certificates, acting as their own certificate authority.
Networks would use their certificates to sign information they believe to be true at the time of signature, so that anyone can verify this information without having to query the network, even years later.
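To make "sign once, verify later without contacting the network" concrete, here is a toy Lamport one-time signature built from nothing but a hash function in the standard library. This is a sketch, not a real deployment (each key pair may sign only one message, and an actual network would use X.509 certificates with RSA or ECDSA keys), but it demonstrates the key property: anyone holding the public key can check a signed statement years later, offline.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Secret key: 256 pairs of random values, one pair per message-hash bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hashes of those values; this is what the network publishes.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    digest = int.from_bytes(H(message), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret from each pair, selected by the message-hash bits.
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    # Hash each revealed secret and compare against the published public key.
    return all(H(s) == pk[i][bit]
               for (i, bit), s in zip(enumerate(_bits(message)), signature))
```

A network would sign statements such as "this account was verified on 2012-08-20"; any party with the public key can confirm the statement was not altered, without ever querying the network.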
4. User certificates: All the verification described in the previous section can be exported by the network to others using certificates. The user can download a certificate showing that they are indeed “Bill Gates according to Google’s verification” or that their medical history is indeed “verified by hospital X at some point in time Y.”
In fact, these signatures can verify entire histories from different users on different networks — with entries such as “doctor X saw medical history at point Y and made diagnosis Z.” At point Y, the doctor trusted your medical history from other networks and institutions they respected. They signed not only their diagnosis but also the fact that they are doctor X and saw your medical history at point Y.
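Such chained attestations could be structured as records that each commit to the hash of the record they build on. The sketch below uses HMAC as a symmetric stand-in for a real public-key signature, so it is illustrative only; the field names and keys are hypothetical.

```python
import hashlib
import hmac
import json

def _payload(body: dict) -> bytes:
    # Canonical serialization so signer and verifier hash identical bytes.
    return json.dumps(body, sort_keys=True).encode()

def attest(key: bytes, author: str, statement: str, prev: str = "") -> dict:
    """Sign a statement that also commits to the previous record's hash."""
    body = {"author": author, "statement": statement, "prev": prev}
    sig = hmac.new(key, _payload(body), hashlib.sha256).hexdigest()
    return {**body, "sig": sig}

def record_hash(record: dict) -> str:
    return hashlib.sha256(_payload(record)).hexdigest()

def verify_record(key: bytes, record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "sig"}
    expected = hmac.new(key, _payload(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

# Hospital X signs the history; doctor X's diagnosis commits to that record.
history = attest(b"hospital-key", "hospital X", "medical history as of point Y")
diagnosis = attest(b"doctor-key", "doctor X",
                   "saw medical history at point Y; made diagnosis Z",
                   prev=record_hash(history))
```

Because each record embeds the hash of its predecessor, tampering with the hospital’s record invalidates the doctor’s attestation that built on it.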
5. Signed software: Certificate holders would be able to sign the software that they release. Operating systems and browsers would be able to revoke trust in the software if it is found to be malicious or to contain serious security bugs. There would be accountability for software authors who write viruses, ship irresponsible security, etc., proportional to the cost of obtaining another identity in a trusted network.
In the app stores (pioneered by Apple, and now cropping up everywhere), software is signed before being “put on the shelf”. This is just the beginning; in the future, there could be many competing app stores and networks certifying software for every platform. Antivirus companies would have a valuable role in testing software for security flaws and malicious behavior, and in recommending the revocation of certificates attesting that a given piece of software is safe.
Revoking the certificate of certain software does not mean that the users have to lose all confidence in the vendor. In fact, the app store or security company or white hat hacker can contact the vendor with the vulnerability, and allow them to quietly fix it if they believe the vendor to have made a good-faith mistake and did not intend to write a virus / spyware. A responsible time frame for an update can be set before the security flaw is publicized. If the vendor releases the update in time, then all users will see is that version X has a security flaw (and threat level), but there is already a newer version submitted by the vendor. Thus, the vendor’s reputation may actually increase because of their responsiveness, and software will not need to be “pulled off the shelf”.
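The lifecycle above can be sketched as a lookup against a store’s signed release manifest plus its current revocation list. This is a minimal sketch under assumptions of my own: the manifest format, the status strings, and the `superseded_by` field are hypothetical, and the signature check on the manifest itself is elided.

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    return hashlib.sha256(binary).hexdigest()

def check_software(binary: bytes, manifest: dict, revoked: set) -> str:
    """Return a trust status for a downloaded binary.

    `manifest` maps fingerprints to metadata the store signed at release
    time; `revoked` is the store's current revocation list."""
    fp = fingerprint(binary)
    if fp not in manifest:
        return "unsigned"
    if fp in revoked:
        # A responsive vendor has already shipped a fix: warn, don't pull.
        if manifest[fp].get("superseded_by"):
            return "revoked: update available"
        return "revoked"
    return "trusted"
```

Note the middle case: a revoked version whose vendor already submitted a successor is flagged rather than pulled off the shelf, which is exactly the reputation-preserving outcome described above.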
6. Software on the web: Currently, the way web browsers work, we have to trust whatever is delivered to our web browser by the server. Browsers should start being able to verify the signatures of the web resources they download. If the server claims that a given resource has been verified by some network, the browser should be able to verify it with that network’s certificate.
In addition, users can be tricked into providing their credentials (such as passwords) to any malicious web site that simply emulates an interface from their trusted site (such as a Facebook login). Right now, this is solved with popups, but a much more elegant solution would be to allow some iframe to have the highest z-order (i.e. “be on top of everything”) so nothing can hijack the user’s input into it.
(I have written up both of these proposals in more detail elsewhere.)
In fact, right now entire operating systems like the MacOS have the same problem. Any application can spoof the system’s administrator credentials dialog and capture the user’s root password, using it to take over the system. This can be easily fixed by having the system ask you to enter some favorite phrase of yours when you first install it, and then showing it back to you in the credentials dialog. All Apple would have to do is make sure the dialog is on top of everything, and apps can’t capture a screenshot of what’s inside — just like they do for DRM movies.
An aside: I once emailed Steve Jobs about this, but didn’t hear back… If there were a security company for operating systems, I would have reported it there, and Apple would have had a time frame in which to fix this exploit before it was publicized.
7. Patents and Governments: Well, since things are decentralized, and patents and copyright rely on centralized systems (governments) and agreements between them (treaties, etc.), the situation is a toss-up. I would say that, in general, since in any given system trust is ultimately concentrated in at most a few popular entities that have the resources to actually verify software (e.g. all the competing app stores for the Mac), it won’t be tough for a government to intimidate these entities into revoking a piece of software’s certificate.
Unless, of course, we combine untraceable accounts with reputations (part 2) with software signing, and get “shadow organizations with a reputation for verifying software for security holes” … which might be useful for verifying things like whether freenet or perfectdark is still secure. Then governments wouldn’t be able to stop the distribution of the software, nor force these untraceable organizations to revoke the certificate — fooling the users — and yet the software could still be audited in a meaningful way by the community.
In any case, all these things are side effects of centralizing trust in people and companies with good reputations — whether they are traceable or not. In the future, we may figure out better ways to distribute trust across the entire network. Bitcoin is an early step in that direction, I think.
When I first read The Digital Imprimatur, I thought it was a recent article. It certainly could seem that way, given the concerns we have today, almost ten years later. With today’s discussions about governments spying on their citizens with drones and other tools, the right of the people to peaceably assemble must be protected, and indeed some non-democratic governments were overthrown as people used the internet to organize. In repressive regimes, darknets can be used by people to communicate freely, but the same tools are used for notorious purposes such as trafficking drugs. Suppose human trafficking took place and we couldn’t find out who was doing it. How much anonymity should a system allow? These are difficult questions.
When copyright gets involved, the USA and other parties to the Berne Convention sometimes propose (and pass) draconian regulations, or simply take down websites, or even entire businesses, irresponsibly before a trial has taken place. Technology such as DRM certainly has some legislative muscle behind it.
But as long as there are alternatives available to people, as long as there are decentralized choices, we should be fine.
I hope that some of the suggestions in this article are ultimately implemented, because I think good things await us if we move in those directions.
- Gregory Magarshak