there are 3 kinds of crypto

When we use terminology that is too broad, too coarse-grained, we make discussion more difficult. That sounds obvious, but it’s easy to miss in practice. We’ve made this mistake in spades with crypto. Discussing the field as one broad topic is counter-productive and leads to needless bickering.

I see 3 major kinds of crypto: b2c crypto, b2b crypto, and p2p crypto. I suggest that we use this terminology consistently to help guide the discussion. We’ll spend less time talking about differences in our assumptions, and more time building better solutions.

b2c crypto

Business-to-Customer Crypto (b2c) is used to secure the relationship between an organization and a typical user. The user roughly trusts the organization, and the goal of b2c crypto is to enable that trust by keeping attackers out of that relationship. Both the organization and the user want to know that they’re talking to each other and not to an impostor. The organization is usually acting like an honest-but-curious party: they’ll mostly do what they promise. The b2c-crypto relationship is common between Internet service providers (in the broad sense, including Google, Amazon, etc.) and typical Internet users, as well as between employees and their employer’s IT department.

Web-browser SSL is a great example of b2c crypto. Users start with a computer that has at least one web browser with a set of root certs. Users can continue using that browser or download, over SSL secured by those initial root certs, another browser they trust more. Users then trust their preferred browser’s security indicators when they shop on Amazon or read their Gmail.
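This trust bootstrap isn't unique to browsers. A minimal Python sketch (standard library only) shows the same b2c anchor at work: the client leans on a pre-installed root store and refuses to talk to servers that don't chain up to it, exactly as a browser does with its bundled root certs.

```python
import ssl

# A default client context behaves like a browser: it loads the
# platform's bundled root certificates and refuses to proceed unless
# the server presents a certificate chaining up to one of them.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # server cert must verify
print(context.check_hostname)                    # and match the hostname
```

The user never touches a key: the roots ship with the platform, and the verification happens silently on every connection.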

A critical feature of b2c crypto is that users don’t ever manage crypto keys. At best they manage a password, and even then they’re generally able to reset it. Users make trust decisions based on brands and hopefully clear security indicators: I want a Mac, I want to use Firefox, and I want to shop on Amazon but only when I see the green lock icon.

b2b crypto

Business-to-Business (b2b) crypto is used to secure the relationship between organizations, two or more at a time. There are two defining characteristics of b2b crypto:

  • all participants are expected to manage crypto keys
  • end-users are generally not involved or burdened

DKIM is a good example of b2b crypto. Organizations sign their outgoing emails and verify signatures on incoming emails. Spam and phishing are reduced, and end-users see only the positive result without being involved in the process. Organizations must maintain secret cryptographic keys for signing those emails and know how to publish their public keys (usually in DNS) to inform other organizations.
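For concreteness, publishing a DKIM public key means serving a DNS TXT record under a selector. This is a hypothetical record — the selector name, domain, and the truncated `p=` value are placeholders, not real data:

```
mail2023._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
```

Receiving organizations look up this record to verify signatures on incoming mail; end-users never see any of it.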

OAuth qualifies as b2b crypto. Consumers and Producers of Web APIs establish shared secret credentials and use them to secure API calls between organizations.
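The shared-secret pattern can be sketched with nothing but the standard library. Everything below is illustrative — the secret, URL, and payload are hypothetical, and real OAuth adds nonces, timestamps, and a precisely specified signature base string — but the core idea is just a MAC over the request that both organizations can compute:

```python
import hashlib
import hmac

# Shared secret established out-of-band between the two organizations
# (hypothetical value for illustration).
SHARED_SECRET = b"s3cret-established-out-of-band"

def sign_request(method: str, url: str, body: bytes) -> str:
    """Both sides compute the same MAC over the request; a mismatch
    means the call was forged or tampered with in transit."""
    message = method.encode() + b"\n" + url.encode() + b"\n" + body
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

# The API consumer signs the call...
sig = sign_request("POST", "https://api.example.com/v1/orders", b'{"qty": 3}')

# ...and the producer recomputes and compares in constant time.
expected = sign_request("POST", "https://api.example.com/v1/orders", b'{"qty": 3}')
print(hmac.compare_digest(sig, expected))  # True
```

Note that both parties must store and protect the secret — key management by organizations, invisible to end-users, which is exactly the b2b shape.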

Another good example is SSL certificate issuance. Both the web site seeking a certificate and the Certificate Authority have to generate and secure secret keys. The complexity of the certification process is mostly hidden from end-users.

p2p crypto

Peer-to-Peer (p2p) crypto is used to secure communication between two or more crypto-savvy individuals. The defining characteristic of p2p crypto is that the crypto-savvy individuals trust no one by default. They tend to run code locally, manage crypto keys, and assume all intermediaries are active attackers.

PGP is a great example of p2p crypto. Everyone generates a keypair, and by default no one trusts anyone else. Emails are encrypted and signed, and if you lose your secret key, you’re out of luck.

so how does this help?

This naming scheme provides a clear shorthand for delineating crypto solutions. Is your wonderful crypto solution targeted at the general public? Then it’s probably a combination of b2c crypto for users and b2b crypto for organizations that support them. Are you building a specialized communications platform for journalists in war zones? Then it’s probably p2p crypto.

The implementation techniques we use for various kinds of crypto differ. So when some folks write that Javascript Crypto is considered harmful, I can easily respond “yes, dynamically-loaded Javascript is a poor approach for p2p crypto, but it’s great for b2c crypto.” In fact, when you look closely at a similar criticism of Javascript crypto from Tony Arcieri, you see this same differentiation, only with much more verbiage because we don’t have clear terminology:

Before I keep talking about where in-browser cryptography is inappropriate, let me talk about where I think it might work: I think it has great potential uses for encrypting messages sent between a user and the web site they are accessing. For example, my former employer LivingSocial used in-browser crypto to encrypt credit card numbers in-browser with their payment processor’s public key before sending them over the wire (via an HTTPS connection which effectively double-encrypted them). This provided end-to-end encryption between a user’s browser and the LivingSocial’s upstream payment gateway, even after HTTPS has been terminated by LivingSocial (i.e. all cardholder data seen by LivingSocial was encrypted).

This is a very good thing. It’s the kind of defense that can prevent the likes of the attack against Target’s 40M customers last month. And that’s exactly the point of b2c crypto.

most users can’t manage crypto keys

I use the term p2p crypto because I like to think of it as “Pro-to-Pro.” Expecting typical Internet users to engage in p2p crypto is, in my opinion, a pipe dream: typical users can’t manage secret crypto keys, so typical users must rely on organizations to do that for them. That’s why successful general-public crypto is a combination of b2c crypto between individuals and the organizations they choose to trust, and b2b crypto across organizations. More expertise and care are expected of the organizations, little is expected of individual users, and some trust is assumed between a user and the organizations they choose.

You don’t have to agree with me on this point to agree with the nomenclature. If you’re interested in protocols where individuals manage their own secret keys and don’t trust intermediaries, you’re interested in p2p crypto. I happen to think that p2p crypto is applicable only to some users and some specific situations.

on cooking turkey and solving problems

On Thursday, my wife and I hosted our 10th Thanksgiving. We both enjoy cooking and baking, though we remain clearly amateurs and tend to make it up as we go along. There was that one time we realized, the night before Thanksgiving, that a frozen 15-pound turkey requires 3 days to defrost in the fridge. I stayed up most of the night, soaking the bird in the bathtub.

We’ve gotten better over time: she focuses on stuffing and cranberry sauce, me on turkey and dessert, and we collaborate on some kind of sweet potato dish. The stress almost always centers on how long to roast the turkey and whether it’s fully cooked. For the last 4 years, we’ve had the added (but wonderful) complexity of little kids eager to eat. We had great luck once with a high-heat plus start-breast-side-down combination, but we were never able to recreate that success.

This year, I reached out on Twitter:

I received this recommendation from a former student and fellow web hacker:

“What in the world is spatchcocking,” I thought. I was ready to try it after this video:

(The New York Times also has a nice take on the technique.)

Roughly, spatchcocking involves cutting out the turkey’s backbone, then breaking open the bird and laying it out flat. One layer of meat, all on the same plane, with the dark meat slightly protecting the white meat, which is what you want since white meat cooks faster. The technique promises shorter cook times, more even cooking, and juicier meat.

Turns out, it’s all absolutely true. Preparation was easy and eminently repeatable, with little risk of screwing things up. The bird cooked in about 2 hours, where typically it would have required 4. The whole turkey cooked at the same speed. The result: amazing fully cooked dark meat, juicy white meat, perfectly crispy skin, and plenty of oven time left for an apple pie and stuffing. Everyone at the table agreed: this was the best turkey I’ve ever cooked by far. Even the little kids ate triple portions.

So what’s the downside? Well, people claim there are two: (a) you can’t stuff the turkey and (b) you can’t present a typical, whole roasted turkey. Instead you’ve got a weird flat thing that indicates you got really angry in the kitchen. Neither of these matters to me, and I’ll go out on a limb and say they should matter little to most people: stuffing a raw turkey significantly increases the risk of food poisoning, and, as it turns out, being forced to carve the turkey before presenting it made serving the meal much easier.

So, first lesson: I will only cook spatchcock turkeys from now on.

And second lesson: even after 10 years of doing something, it’s still possible to find a solution that is faster, simpler, and better, with no real downsides. What’s crazy is that the solution is already out there, used by some, just not widely. Crazier still is that many people know about it, they just refuse to try it because there are “downsides” or the solution is unusual.

But what if the downsides are rhetorical at best? What if it’s really all upside?

I can’t help but link this to software engineering and problem-solving more broadly. There are so many technical solutions we simply accept as necessary and necessarily hard. We fail to search for simpler solutions, even when they already exist. Or if we know about them, we choose to ignore them because they seem too simple, too good to be true. We make up excuses, we make up theoretical downsides.

Why not stick to simple? There’s not necessarily a real tradeoff. Sometimes, even often, you can do faster, simpler, and better. I’m going to work to keep that in mind in everything I do. Kindergarten selection for my kids. Financial planning. And especially software.

Before going complicated, have you tried spatchcocking? The result might just be delicious.

Letter to President Obama on Surveillance and Freedom

Dear President Obama,

My name is Ben Adida. I am 36, married, two kids, working in Silicon Valley as a software engineer with a strong background in security. I’ve worked on the security of voting systems and health systems, on web browsers and payment systems. I enthusiastically voted for you three times: in the 2008 primary and in both presidential elections. When I wrote about my support for your campaign five years ago, I said:

In his campaign, Obama has proposed opening up to the public all bill debates and negotiations with lobbyists, via TV and the Internet. Why? Because he trusts that Americans, when given the tools to see and understand what their legislators are doing, will apply pressure to keep their government honest.

I gushed about how you supported transparency as broadly as possible, to enable better decision making, to empower individuals, and to build a better nation.

Now, I’m no stubborn idealist. I know that change is hard and slow. I know you cannot steer a ship as big as the United States as quickly as some would like. I know tough compromises are the inevitable path to progress.

I also imagine that, once you’re President, the enormity of the threat from those who would attack Americans must be overwhelming. The responsibility you feel, the level of detail you understand, must make prior principles sometimes feel quaint. I cannot imagine what it’s like to be in your shoes.

I also remember that you called on us, your supporters, to stay active, to call you and Congress to task. I want to believe that you asked for this because you knew that your perspective as Commander in Chief would inevitably become skewed. So this is what I’m doing here: I’m calling you to task.

You are failing hard on transparency and oversight when it comes to NSA surveillance. This failure is not the pragmatic compromise of Obamacare, which I strongly support. It is not the sheer difficulty of closing Guantanamo, which I understand. This failure is deep. If you fail to fix it, you will be the President principally responsible for the effective death of the Fourth Amendment and worse.

mass surveillance

The specific topic of concern, to be clear, is mass surveillance. I am not concerned with targeted data requests, based on probable cause and reviewed individually by publicly accountable judges. I can even live with secret data requests, provided they’re very limited, finely targeted, and protect the free-speech rights of service providers like Google and Facebook to release appropriately sanitized data about these requests as often as they’d like.

What I’m concerned about is the broad, dragnet NSA signals intelligence recently revealed by Edward Snowden. This kind of surveillance is a different beast, comparable to routine frisking of every individual simply for walking down the street. It is repulsive to me. It should be repulsive to you, too.

wrong in practice

If you’re a hypochondriac, you might be tempted to ask your doctor for a full body MRI or CT scan to catch health issues before symptoms are detectable. Unfortunately, because of two simple probabilistic principles, you’re much worse off if you get the test.

First, it is relatively unlikely that a random person with no symptoms has a serious medical problem, i.e., the prior probability is low. Second, it is quite possible — not likely, but possible — that a completely benign thing appears potentially dangerous on imaging, i.e., there is a noticeable chance of a false positive. Put those two things together, and you get this mind-bending outcome: even if the full-body MRI says you have something to worry about, the odds are you still have nothing to worry about. But try convincing yourself of that if you get a scary MRI result.

Mass surveillance to seek out terrorism is basically the same thing: very low prior probability that any given person is a terrorist, quite possible that normal behavior appears suspicious. Mass surveillance means wasting tremendous resources on dead ends. And because we’re human and we make mistakes when given bad data, mass surveillance sometimes means badly hurting innocent people, like Jean-Charles de Menezes.
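The base-rate arithmetic is worth making concrete. Here is a quick sketch — every number in it is an illustrative assumption, not a real statistic:

```python
# Illustrative assumptions, not real figures:
prior = 1e-6        # chance a random person is a terrorist
sensitivity = 0.99  # chance a real terrorist looks suspicious
false_pos = 0.01    # chance an innocent person looks suspicious

# Bayes' rule: P(terrorist | flagged as suspicious)
flagged = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / flagged

print(f"{posterior:.6f}")  # roughly 0.0001
```

Under these assumptions, about 9,999 of every 10,000 people flagged are innocent — which is the dead-ends problem in one number.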

So what happens when a massively funded effort has frustratingly poor outcomes? You get scope creep: the surveillance apparatus gets redirected to other purposes. The TSA starts overseeing sporting events. The DEA and IRS dip into the NSA dataset. Anti-terrorism laws with far-reaching powers are used to intimidate journalists and their loved ones.

Where does it stop? If we forgo due process for a certain category of investigation which, by design, will see its scope broaden to just about any type of investigation, is there any due process left?

wrong on principle

I can imagine some people, maybe some of your trusted advisors, will say that what I’ve just described is simply a “poor implementation” of surveillance, that the NSA does a much better job. So it’s worth asking: assuming we can perfect a surveillance system with zero false positives, is it then okay to live in a society that implements such surveillance and detects any illegal act?

This has always felt wrong to me, but I couldn’t express a simple, principled, ethical reason for this feeling, until I spoke with a colleague recently who said it better than I ever could:

For society to progress, individuals must be able to experiment very close to the limit of the law and sometimes cross into illegality. A society which perfectly enforces its laws is one that cannot make progress.

What would have become of the civil rights movement if all of its initial transgressions had been perfectly detected and punished? What about gay rights? Women’s rights? Is there even room for civil disobedience?

Though we want our laws to reflect morality, they are, at best, a very rough and sometimes completely broken approximation of morality. Our ability as citizens to occasionally transgress the law is the force that brings our society’s laws closer to our moral ideals. We should reject mass surveillance, even the theoretically perfect kind, with all the strength and fury of a people striving to form a more perfect union.


Mr. President, you have said that you do not consider Edward Snowden a patriot, and you have not commented on whether he is a whistleblower. I ask you to consider this: if you were an ordinary citizen, living your life as a Law Professor at the University of Chicago, and you found out, through Edward Snowden’s revelations, the scope of the NSA mass surveillance program and the misuse of the accumulated data by the DEA and the IRS, what would you think? Wouldn’t you, like many of us, be thankful that Mr. Snowden risked his life to give we the people this information, so that we may judge for ourselves whether this is the society we want?

And if there is even a possibility that you would feel this way, given that many thousands do, if government insiders believe Snowden to be a traitor while outsiders believe him to be a whistleblower, is that not all the information you need to realize the critical positive role he has played, and the need for the government to change?

the time to do something is now

I still believe that you are, at your core, a unique President who values a government by and for the people. As a continuing supporter of your Presidency, I implore you to look deeply at this issue, to bring in outside experts who are not involved in national security. This issue is critical to our future as a free nation.

Please do what is right so that your daughters and my sons can grow up with the privacy and dignity they deserve, free from surveillance, its inevitable abuses, and its paralyzing force. Our kids, too, will have civil rights battles to fight. They, too, will need the ability to challenge unjust laws. They, too, will need the space to make our country better still.

Please do not rob them of that opportunity.


Ben Adida

security is hard, let’s improve the conversation

A few days ago, a number of folks were up in arms over the fact that you can see saved passwords in your Google Chrome settings. Separately, a few folks got really upset about how Firefox no longer provides a user interface for disabling JavaScript. These flare-ups make me sad, because the conversations are often deeply disrespectful, with a tone implying that there was obvious negligence or stupidity involved. There’s too little subtlety in the discussion, not enough respectful exchange.

Security is hard. I don’t mean that you have to work really hard to do the right thing, I mean that “the right thing” is far from obvious. What are you defending against? Does your solution provide increased security in a real-world setting, not just in theory? Have you factored in usability? Is it security theater? And is security theater necessarily a bad thing?

These are subtle discussions. Let’s discuss openly and respectfully. Let’s ask questions, understand threat model differences, and contribute to improving security for real. In particular, let’s take into account typical user behavior, which can easily negate the very best security in favor of convenience.

Let’s talk examples.

writing your passwords down

Recently, I had to create a brand new complicated password. I pulled out a sheet of paper, thought of a password, wrote it down, and put the piece of paper in my wallet. Someone said to me “did you just write that password down?” I said yes. The snarky response came back: “you should never write passwords down.” Maybe you’ve said this yourself, to a relative, friend, or co-worker?

Except it’s not that simple. Bruce Schneier recommends writing down your passwords so you’re not tempted to use one that’s too simple in order to remember it. Oftentimes, you should be more worried about the remote network attacker than people who have physical access to your machine.

But don’t feel bad about it. You’re not stupid for telling your poor aging parents to pick long impossible-to-remember passwords and then never write them down. That’s what many experts said for years. This stuff is hard. It’s worth discussing, exploring, and finding the appropriate balance of security and convenience for the application at hand. The answer won’t be the same for everyone and everything.

Google Chrome passwords

Yes, it’s true, you can, in a few seconds, view in cleartext all the passwords saved within a Google Chrome browser. But did you know you can do it in Firefox and Safari, too? With just about the same number of clicks? Are you having second thoughts about your immediate gut reaction of pure disgust at Chrome’s apparent sloppiness?

There are good reasons why you might legitimately want to read your passwords out of your browser. Most of the time, if you give your computer to someone you don’t trust, you’re kind of screwed anyways. But it’s subtle. It’s not quite the same thing to have access to your computer for a few minutes and to actually have your password. In the first case, someone can mess with your Facebook profile for a few seconds. In the second, they can get your password and log in as you on a different machine, wreaking havoc on your life for an extended period of time. So maybe it’s worth a discussion, maybe you can’t play security reductionism. Maybe the UI to view your passwords shouldn’t exist.

Would that then be security theater, since, as Adrienne Felt points out, you can install an extension that opens up a bunch of tabs and lets the password manager auto-fill them all, then steals the actual passwords? Maybe. It’s worth a discussion. In fact I like the discussion Adrienne, Joe, and I are having: it’s respectful and balanced, though limited by Twitter.

Is this fixed by Firefox’s Master Password? Sort of, if you believe that addressing the problem for a tiny percentage of the population is a “solution,” and if you assume those users will know to quit their browser every time they leave their computer unattended. Still, it’s worth pointing out the Master Password solution and evaluating its real-world efficacy.

Disabling Javascript in Firefox

As of version 23, Firefox has removed the user interface that lets a user turn off Javascript, and some folks call that lame. Why is Firefox removing user choice?

OK, so let’s consider the average Web user. Do they know what “disabling Javascript” does? If they do, is it much harder for them to use an add-on like NoScript? If they don’t, what is the benefit of offering that option, knowing that too many options is always a bad thing? Some people believe Javascript is so integral to the modern Web that disabling it is as sensible as disabling images, iframes, or the audio tag. Others believe the Web should always gracefully degrade and be fully functional without Javascript.

This is a very reasonable discussion to have. The answer isn’t obvious. My opinion is that Javascript is part of the modern Web, giving users a blunt “disable Javascript” button is practically useless, and add-ons are a fine path if you want to surf the Web with one hand tied behind your back. I have no beef with anyone who disagrees with me. I do have a beef with people who call this decision obviously stupid and see only downsides.

The Web is not that simple. Security is not that simple. And people, most importantly, are not that simple.

Let’s build a better way to discuss security. Never disrespectful, always curious. That’s how we improve security for everyone.


In about a month, I’ll be starting at Square as a Tech Lead on a new project. I’m incredibly excited for a few key reasons:

  1. team: oodles of amazingly sharp people. The interview process was simply amazing, both in how much it forced me to demonstrate as an engineer and in how much I learned about the existing team. I know I’m going to learn a ton. It’s also really nice to see Square’s engineering team contributing significant open-source code.
  2. product: it’s hard to think of a more product-focused company. The Square products (Register, Wallet, Cash, Market) are amazing. The focus on user experience is central to every conversation, and it shows.
  3. mission: Square wants to make commerce easy for businesses of all sizes. This translates in particular into major opportunities for small businesses. And this, in my mind, is what technology is for: to empower the little guys.

For the first time in a long time, my job will require a bit of secrecy. That will be an interesting adjustment for me. On this blog, I’ll continue to write what I think — not what my employer thinks — about technology, policy, etc.

For now, back to vacation. Square team: see you mid August!


Today is my last day at Mozilla. It’s been an amazing ride, and I’m incredibly proud of the Identity Team and of the work we produced together, notably Persona. The team and project are now in the incredibly capable hands of my friend Lloyd Hilaiel. I expect to see continued fantastic work from this team, and I’ll miss everyone dearly. Mozilla is a special place, and I’m grateful I had the chance to experience it firsthand.

I’ll be taking a break for a few weeks. You might see me on this blog and on Twitter from time to time, and I might even tend to Helios Voting a little bit, which has gotten far too little love from me lately. But mostly, I’ll be reading, relaxing, spending time with family. I’m excited about what comes next, and I’ll talk about that more in a few days.

no user is an island

US government agencies appear to be engaged in large-scale Internet surveillance, using secret court orders to force major Internet companies to provide assistance. The extent of this assistance is a topic of debate. What’s clear, though, is that the process itself is opaque: it’s impossible to know how broad or inappropriate the surveillance may be.

OK, so what do we do about it?

told you so, never shoulda trusted the Cloud

Some folks see this as vindication: we never should have trusted the Cloud. Only trust yourself, generate your own keypairs, encrypt all traffic, host your own email, etc. Servers are evil and should be considered leaky stupid passthroughs for fully encrypted data.

First, this is naive. If government agencies believe they have the authority to monitor all Internet traffic, would they hesitate to create viruses that infect and monitor endpoints? Would they hesitate to force software and hardware vendors to build secret backdoors into their products? It is the engineer’s mistake to believe that Law Enforcement will stop cleanly at technical abstraction layers. If the goal is total surveillance, the financial means immense, the arm-twisting strength unlimited, the oversight nonexistent… what would you do in their position?

Second, if, like me, you agree that technology experts have a duty to build solutions that matter to laypeople, it’s also irresponsible. None of these paranoid solutions are accessible to laypeople. Can you imagine Grandpa with his fingerprint-activated USB-key holding his RSA-2048-bit secret key and surfing the Web via Tor proclaiming “not me, I will fight the man!” Yeah. (And if you’re thinking “no Grandpa, not RSA! Elliptic curves!” well, thank you for making my point for me.)

So enough with this la-la land of users as fortified islands communicating via torpedo-proof-ciphertext-carrying submarines. People engage with others by way of intermediaries they trust, for that is the basis of all human interaction and commerce since the dawn of time. Let us build systems, both technical and legal, that start there.

protect user data wherever it lives

We can build systems that start with respect for the user and her data, wherever it lives. On Facebook servers, on Google servers, on self-hosted servers, on private computers. Encrypted or not encrypted. We can and should use cryptography to secure channels from those who would disrespect user data, reduce data collection to that which is useful, and generally build defense in depth against bad actors. We should stop wasting time on systems that impose the resulting complexity on users. Government access to user data should follow a clear, transparent process that is consistent wherever the data happens to be stored, however it happens to be encrypted.

Let’s build that system together. Not by barricading ourselves on our lonely islands of encryption and onion-routing. But by building the legal and technical framework we need to respect users and their data. Mozilla and Google have started. I’m hopeful many more will join.

a hopeful note about PRISM

You know what? I’m feeling optimistic suddenly. Mere hours ago, all of us tech/policy geeks lost our marbles over PRISM. And in the last hour, we’ve got two of the most strongly worded surveillance rebuttals I’ve ever seen from major Internet Companies.

Here’s Google’s CEO Larry Page:

we provide user data to governments only in accordance with the law. Our legal team reviews each and every request, and frequently pushes back when requests are overly broad or don’t follow the correct process. Press reports that suggest that Google is providing open-ended access to our users’ data are false, period. Until this week’s reports, we had never heard of the broad type of order that Verizon received—an order that appears to have required them to hand over millions of users’ call records. We were very surprised to learn that such broad orders exist. Any suggestion that Google is disclosing information about our users’ Internet activity on such a scale is completely false.

And here’s Mark Zuckerberg of Facebook:

Facebook is not and has never been part of any program to give the US or any other government direct access to our servers. We have never received a blanket request or court order from any government agency asking for information or metadata in bulk, like the one Verizon reportedly received. And if we did, we would fight it aggressively. We hadn’t even heard of PRISM before yesterday.

Both companies emphasize government data requests transparency as a critical component of moving forward. I couldn’t agree more. We need to know about every legal process in place that gives government access to private user data.


Could PRISM mark a tech world epiphany that users care about privacy? I hope so. It certainly seems that major PR departments think so. 24-hour unequivocally worded responses from major Internet CEOs means they care. This is a good thing.

retreat is the wrong reaction

I’ve heard folks argue that PRISM means we need to bet it all on end-to-end encryption. I think that’s wrong, because that doesn’t fulfill users’ needs. But even putting that aside: if you believe the government is willing to penetrate professionally managed corporate servers without company permission or legal clarity, do you sincerely believe the government wouldn’t also penetrate your personal computer and steal the data before you encrypt it?

Services and data aggregation play a critical role in providing users the features they need to share, discover, and grow. They’re not going away. Don’t expect PRISM to herald the era of end-to-end encryption and dumb servers. Those will continue to play only a limited role for very specific use cases.

What we need is (1) companies that deeply respect users, and (2) legal processes that protect user data wherever it lives. I think we’re seeing the beginning of (1). Now, Obama, over to you for (2).

what happens when we forget who should own the data: PRISM

Heard about PRISM? Supposedly, the NSA has direct access to servers at major Internet companies. This has happened before, e.g. when Sprint provided law enforcement a simple data portal they could use at any time. They used it 8 million times in a year. That said, the scale of this new claim is a bit staggering. If the NSA has access to these 9 companies’ data, it has access to every American Citizen’s complete life.

what’s really happening?

I don’t think we know yet what’s happening.

I’m dubious that NSA has direct access to servers at Google, Facebook, Apple, etc. Those companies have strongly denied the claim, and I have trouble believing this happened on a large scale for years without someone at those companies leaking the information.

Might NSA be tapping all network traffic? Yeah, that’s probable. Might NSA have the facility to decrypt the encrypted traffic? For targeted searches, yeah, I believe that. For broad-scale searching across all traffic? I’m not so sure. It could be happening, but that would be tremendous, hard-to-fathom news.

I could be wrong here. Companies might be cooperating and lying about it. NSA might be eons ahead of what we expect in terms of computing capability and cryptographic breakthroughs. This is just my gut instinct.

is this okay?

So, let’s assume it is happening. Is it okay? Hell no it isn’t. There is no doubt in my mind that user data, whether stored in a lockbox in my home or on a server in Oregon, should first and foremost belong to me, and be covered by the same Constitutional protections as my home and private belongings. It is high time for the law to catch up, for a digital due process. Blanket surveillance and warrantless capture or seizure of private data are unacceptable, and should be revolting to anyone who cares about freedom and democracy.

lessons for technologists

I deeply believe that one should first look at one’s own actions before blaming others. And I think we, technologists, have some blame to shoulder.

We’ve let our guard down when it comes to user data ownership. We’ve made it increasingly acceptable to collect user data and make decisions about how best to use it without involving the user much. We’ve often allowed the definition of “using data for the user’s benefit” to loosen.

In other words, where user data ownership in the cloud was murky to begin with, we’ve made it murkier.

Unlike some of my colleagues, I don’t believe we can simply forgo the Cloud or use end-to-end encryption. Encryption cannot be layered on without consequences. You cannot provide the value that users want without some centralization of data and services.

But we can take a stronger stance against companies that abuse users’ trust and treat the data as their own rather than the user’s. We can set an example. We can state clearly that when we collect data, we do it with care, we do it for a clear purpose, and we allow the user to leave as easily as possible, removing traces of their data as best we can.

We can set the example that the user’s data, whatever server it’s on, belongs, by principle, to the user. And then we can and should ask our government to live up to the same standard.

getting web sites to adopt a new identity system

My team at Mozilla works on Persona, an easy and secure web login solution. Persona delivers to web sites and apps just the right information for a meaningful login: an email address of the user’s choice. Persona is one of Mozilla’s first forays “up the stack” into web services.
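Concretely, a site’s side of that exchange is small: the page loads Persona’s shim (which supplies `navigator.id`) and registers two callbacks with `navigator.id.watch()`. Here’s a hedged sketch; the function and session names are illustrative, not part of the API, and a real deployment would verify the assertion on its backend:

```javascript
// Hypothetical glue code for a site adopting Persona. In a real page,
// https://login.persona.org/include.js provides navigator.id.

// Build the configuration object that navigator.id.watch() expects.
function personaConfig(currentUserEmail, session) {
  return {
    // The email your session currently believes is logged in, or null.
    loggedInUser: currentUserEmail,

    // Called with a signed assertion when the user logs in. The site
    // POSTs it to its backend, which verifies it (e.g. against
    // https://verifier.login.persona.org/verify) before creating a session.
    onlogin: function (assertion) {
      session.pendingAssertion = assertion;
    },

    // Called when the user logs out in any tab: tear down the session.
    onlogout: function () {
      session.pendingAssertion = null;
      session.user = null;
    }
  };
}

// In the browser, you would wire it up roughly like this:
//   navigator.id.watch(personaConfig(null, session));
//   loginButton.onclick = function () { navigator.id.request(); };
//   logoutButton.onclick = function () { navigator.id.logout(); };
```

The point of the sketch: the site never sees a password, only a verifiable assertion of an email address, which is exactly the “just the right information for a meaningful login” mentioned above.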

Typically, at Mozilla, we improve the Web by way of Firefox, our major lever with hundreds of millions of users. Take asm.js, Firefox’s awesome new JavaScript optimization technology which lets you run 60-frames-per-second games in your web browser. It’s such a great thing that Chrome is fast-following. Of course, Chrome also innovates by deploying features first, and Firefox often fast-follows. Standardization ensues. The Web wins.

With Identity, we’ve taken a different approach: out of the gate, Persona works on all modern browsers, desktop and mobile, and some not-so-modern browsers like IE8 and Android 2.2 stock. We’re not simply building open specs for others to build against: we are putting in the time and effort to make Persona work everywhere. We even have iOS and Android native SDKs in the works.

Why would we do such a thing? Aren’t we helping to improve our competitors’ platforms instead of improving our own? That reasoning, though tempting, is misguided. Here’s why.

working on all modern platforms is table-stakes

We talk about Persona to Web developers all the time. We almost always get the following two questions:

  1. does this work in other browsers?
  2. does this work on mobile?

These questions are actually all-or-nothing: either Persona works on other browsers and on mobile, or, developers tell us, they won’t adopt it. To date, we have not found a single web site that would deploy a Firefox-only authentication system. Some web sites have adopted Persona, only to back out once they built an iOS app and couldn’t use Persona effectively (we’re actively fixing that). So, grand theories aside, we’re targeting all platforms because web sites simply won’t adopt Persona otherwise. After all, Facebook Connect works everywhere.

When you think about it, is that actually so different from the asm.js strategy? asm.js is much faster on Firefox, but it works on Chrome and any other JavaScript engine, too. Heck, even Google’s DART, a brand new language they want to see browsers adopt, comes with a DART-to-JavaScript-compiler so it works on all other browsers out of the gate. These are not after-thoughts. These are not small investments. asm.js was designed as a proper subset of JavaScript. The DART-to-JS compiler is a freaking compiler, built just so non-Chrome browsers can run DART.

When appealing to web developers to make a significant investment (rewriting code, building against a new authentication system, and so on), cross-browser and cross-device functionality from day 1 is table-stakes. The alternative is not reduced adoption, it’s zero adoption.

priming users is the winning hand

The similarities between Identity and purely functional improvements like asm.js stop when it comes to users. Web sites choose Facebook Connect not just because it works, but because 1 billion users are primed with accounts and ready to log in. Same goes for Google+ and Twitter logins.

Persona doesn’t have a gigantic userbase to start from. That sucks. The good news is that, unlike other identity systems, we don’t want to create a huge siloed userbase. What we want is a protocol and a user-experience that make Web logins better. We want users to choose their identity. We’re happy to bridge to existing userbases to help them do just that!

So, bridging is what we’re doing. You’ve seen it already with Yahoo Identity Bridging in Persona Beta 2. More is coming. With each bridge, hundreds of millions of additional users are primed to log in with Persona. That’s powerful. And it’s a major reason why sites are adopting Persona.

Working everywhere is table-stakes. Priming users so they’re ready to log in with just a couple of clicks, that’s the winning hand.

beautiful native user-agent implementations sweeten the pot

Meanwhile, the Persona protocol is specifically tailored to be mediated by the user’s browser. Long-term, we think this will be a fantastic asset for the Persona login experience. Beautiful, device-specific UIs. Universal logout buttons. Innovation in trusted UIs. And lots of other tricks we haven’t even thought of yet. We’re doing just that kind of innovation on Firefox OS with a built-in trusted UI for Persona.

But let’s be clear: that’s not an adoption strategy. An optimized Firefox UI for Persona will not affect web-site adoption because it does nothing to reduce login friction. In a while, once Persona is widespread with hundreds of thousands of web sites supporting it, and users are actively logging in with Persona on many devices and browsers, Firefox’s optimized Persona UI will be a competitive advantage that other browsers will feel pressure to match. Until then, web site adoption is the only thing that matters.

now you know our priorities

Wherever it makes sense, we’re implementing Firefox-specific Persona UIs. However, when it comes to an adoption strategy, we know from our customers that this won’t help. What will help is:

  1. Persona working everywhere
  2. As many users as possible primed to log in

Those are our priorities.

We know this is different for Mozilla. But it’s quite common for folks implementing Services. What you’re seeing here is Mozilla adapting as it applies its strongly held principle of user sovereignty up the stack and into the cloud.