When designing a secure service that stores user data, you might be tempted to say “let’s make sure the data is encrypted.” That statement implies that you’re proposing adding goodness, without taking anything away. Something like “I’d like some of that delicious gravy on my roast turkey, please.” Clearly, turkey with gravy is strictly better than dry turkey. Who can possibly say no to gravy?
Unfortunately, encryption is not gravy. There are deep consequences to the product you’re building once you choose to encrypt data, and the consequences differ greatly depending on the key management mechanism you choose. I wrote about this in part in my previous crypto-realism post, encryption is not magic:
For the most part, encryption isn’t magic. Encryption lets you manage secrets more securely, but if users are involved in the key management, that almost certainly comes at the expense of usability and features.
That last point bears repeating: if you design a system with encryption where users manage keys, you’re going to lose features. You want gravy on that turkey? Sorry, no stuffing for you. “What?” you say. But I want my stuffing and my gravy! I want to believe I can have it all!
I’ve been there, and I’ve made that mistake. A few times. Every fan of cryptography, everyone who’s ever glimpsed that awesomeness and beauty, wants to believe that they can have all their features and all their crypto. They want to believe so badly that, when the glaring missing features come to light, denial is often the only escape. I’ve done this in my work on voting and health-data systems. End-user crypto was appropriate, I thought, users would get it, I thought. It’s their vote, their health data! How much more important does it get? If users don’t care about crypto and accept some inconvenience in those cases, when will they ever care?
Exactly. Users won’t accept inconvenience. Because the inconveniences aren’t small. If users are left to manage crypto keys, that means their data disappears if they lose the key. This is a mathematical certainty that makes absolutely no sense to non-cryptographers. Even safe deposit boxes can be forced open if you lose your key. The most expensive cars have unlocking fallbacks. There is nothing in the intuitive physical world that maps to the helplessness of losing your crypto key.
Now don’t get me wrong. You can still use crypto. The voting system I’ve proposed still does, and I’m not arguing it shouldn’t. I’m only saying that, whatever crypto you use, involving the user as an agent in the cryptographic protocol is bound to result in significant usability limitations, often deal-breakers.
so what are my options
Broadly speaking, you have three:
- full-strength, randomly generated, user-managed key. This is the most secure setting. Access to the full server data gives the attacker no useful information. Unfortunately, it is also the most difficult to use. Enabling a new device requires coordination with an existing device. If users lose all of their devices, e.g. if they only have one device and it breaks, there’s no way to recover.
- password-derived key. The data is encrypted with a key derived from the user’s password. This is not as secure as the previous setting, since most user passwords are not nearly as strong as full-strength crypto keys. However, as my colleague Brian Warner is exploring, it may be possible to still make it quite expensive to break into a single user’s dataset, and prohibitively expensive to go fishing for data across many user accounts. Usability is significantly increased: a user can set up a new device simply by typing in their password. However, the crypto conundrum remains: lose your password, lose your data.
- server-side security. Users don’t manage keys, and servers technically have access to the user data. A number of techniques can be used to meaningfully reduce the chance of a leak (e.g. disk encryption, or another scheme where the server holds the key somewhere). Security against insider attackers is not nearly as high as with the two previous solutions. This is, of course, how almost every service on the Internet works today. It is the only model that maps to user intuition, where a user can forget their password, lose their devices, and still recover. Because that’s how the physical world works.
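To make the first two options concrete, here’s a minimal sketch using Python’s standard library. The function name, salt size, and iteration count are my own illustrative choices, not what any particular product ships:

```python
import hashlib
import secrets

# Option 1: a full-strength, randomly generated key.
# 32 bytes (256 bits) of OS-provided randomness: infeasible to guess,
# but if the user loses it, the data is gone for good.
random_key = secrets.token_bytes(32)

# Option 2: a key derived from the user's password.
# PBKDF2 with a per-user salt and a high iteration count makes each
# password guess expensive, and the salt keeps an attacker from
# amortizing one fishing expedition across many accounts. The key is
# still only as strong as the password behind it.
def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = secrets.token_bytes(16)  # stored server-side, next to the ciphertext
key = derive_key("correct horse battery staple", salt)

# Same password + same salt => same key, which is exactly why a new
# device can be set up with nothing but the password.
assert key == derive_key("correct horse battery staple", salt)
```

Note the usability trade in code form: option 1 has no `derive_key` step to rerun on a new device, so recovery requires copying `random_key` itself; option 2 needs only the password, but loses everything if the password is forgotten.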
where Mozilla fits in
We’re having important discussions about these issues right now. I gave a talk about them a few weeks ago:
Firefox Sync fits the first, most secure model, but suffers because it is not the product that users think it is. In particular, most users of Sync use it as backup. In practice, it’s not: lose your device, lose your data. It’s also impossible to set up a new device without coordinating with an existing device. We are looking at providing new services based on password-derived keys, so that users can set up a new device with just their password and, if they remember their password, recover from complete device failure.
There remains a problem if users lose their device and forget their password. How often might that happen? I fear too often to make this solution complete for all data types. We will probably have to solve this by providing different, configurable levels of security, with corresponding feature compromises.
The discussion is ongoing. We have to keep in mind that the dilemma is very real. End-to-End Encryption is not gravy. We cannot have it for free. The choice of encryption architecture is as much a product decision as it is a technical one.