when selfish acts become altruistic

My first open-source contribution was in 1998, when a ragtag bunch of web hackers and I published the first version of one of the first web application toolkits. In 2000, after I’d left the original project, a few other hackers and I “forked” that codebase to make it work on an open-source database, meaning we took the code, copied it to a different repository, and took it down a different path than that envisioned by its maintainers.

That has always been the beauty of open-source: if you don’t like the direction of the software, you can always fork it. The dirty little secret of open-source at the time was that this was much more an abstract threat than a common occurrence. Forking was almost always a bad word, a huge undertaking, and it happened very rarely. One central reason for this was that forking was almost always a one-way street: once you forked, it became very difficult to share improvements between forks.

So, in theory, thanks to open-source licenses, forking was always possible. In practice, it was a huge undertaking, one with deep consequences, and usually a sign of something gone awry.

Then in 2008 came Github. Github made hosting source code super easy. Github made forking other people’s code super easy, too. And thanks to the underlying Git system, Github made merging just as easy as forking. Within a few weeks, more than 6200 projects on Github had been forked at least once.

Forking became the optimistic way to contribute to a project: fork it, modify it for your own use, and suggest the change back to the original author. If the original author accepts your change, great, you can forget about your fork and go back to using the original code. If the author rejects your change, you can keep using your forked version, occasionally merging in upstream changes from the original author.

So forking became a good thing, a sign of interest in your project. People wore “Fork Me” t-shirts. And it was all done for years with little attention paid to the specifics of the licenses underlying it all. It was just assumed that, if you made a Github project public, you allowed open-source style forking.

In many ways, Github made real what open-source licenses mostly theorized. Standing on the shoulders of giants, contributing just a little tweak very easily, taking a different direction when you need to, etc. All the beauty of a vast open repository of code you can pick from and contribute to exists in Github.

And somehow, this amazing sharing ecosystem is based on purely selfish incentives. I need to host my code. I don’t like paying for things unless I need to, so I’ll make it public, because Github makes me pay for private repositories. I sometimes need to change other people’s code, and there’s a button for that. If someone changes my code I’d like to benefit from it, and there’s a button for that, too.

Like the Back to the Future DeLorean that runs on garbage, Github produces sharing behavior by consuming selfish acts.

I’d like to see many other Githubs. And I know startups are pitching Github-like projects to VCs daily. But it’s not just about a place to host and remix stuff. The magic of Github is that it generated a sharing ecosystem out of selfish incentives. Not sharing and selfishness side by side. Not questionable sharing of private content for the sake of virality. Sharing as a true side-effect of selfish behavior.

That’s not easy. And if it can be done in fields other than source code… I really like what that could do to benefit human knowledge and progress.

the French like their strikes like Americans like their guns

This week, French taxis went on strike because the government passed a law that made Uber and other modern chauffeur equivalents artificially less competitive… but apparently not sufficiently less competitive, and that was a tragedy that only a massive strike could rectify. Then when people jumped into Uber cars because, hey, there were no cabs, those cars were attacked, leaving some passengers bleeding and stranded on the side of the road.

If you go read the French press, these assaults on completely innocent people are footnotes. “Incidents.” “Scuffles.” It’s enough to make your blood boil, really, that no one other than Uber executives seems to be particularly offended.

And this is typical, really. Strikes in France are often launched over ludicrous demands, and they’re incredibly disruptive if not downright dangerous. Many people in France will tell you how much they hate the incredibly powerful unions and the strikes they engender. But that’s just how it is. Because strikes are, to many, the essence of French rights, the core of what made French society, at least in the past, an exemplar of workers’ rights against the oppressive corporations.

Meanwhile, in the same week, a man got shot in a Florida movie theater, apparently because he was texting and that got someone really annoyed. The press wrote “man killed over texting in a movie theater,” and the discussion was often about how annoying texting can be. Because guns don’t kill people. Texting in a movie theater… now that kills!

Never mind that in the year since the Sandy Hook school shooting, where more than a dozen 6-year-olds were shot (6-year-olds! come on!), we’ve done exactly nothing as a country to contain gun violence. Stupid fights escalate into shootings. Because Second Amendment! I’m sure that’s what the Founders had in mind when they wanted a “well-regulated militia”: people in movie theaters with guns to settle fights.

Guns are such a deep part of America’s identity that their inherent goodness cannot be challenged. Even if many Americans wish they could change things, it doesn’t happen. It’s too ingrained in American culture.

Yes, yes, I know, these two things are not quite the same.

But in a really critical way, they are. We humans make stupid decisions, and I mean really stupid, because some things feel, on principle, like deep parts of our identity. Because at one point in the past, in theory, that thing was really, really important. It’s the insane thing you hold on to because, if you give it up, it feels like you’re giving up a piece of yourself, like you’re renouncing who you really are.

So. What’s your stupid cause you feel you must stick to lest you betray yourself? How is it stopping you from seeing the obvious mistake you’re making? Can you let go of it and accept that yes, you are still the same person? I ask myself that, every now and then.

Because we primates sure are irrational.

there are 3 kinds of crypto

When we use terminology that is too broad, too coarse-grained, we make discussion more difficult. That sounds obvious, but it’s easy to miss in practice. We’ve made this mistake in spades with crypto. Discussing the field as one broad topic is counter-productive and leads to needless bickering.

I see 3 major kinds of crypto: b2c crypto, b2b crypto, and p2p crypto. I suggest that we use this terminology consistently to help guide the discussion. We’ll spend less time talking about differences in our assumptions, and more time building better solutions.

b2c crypto

Business-to-Customer (b2c) crypto is used to secure the relationship between an organization and a typical user. The user roughly trusts the organization, and the goal of b2c crypto is to enable that trust by keeping attackers out of that relationship. Both the organization and the user want to know that they’re talking to each other and not to an impostor. The organization usually acts like an honest-but-curious party: they’ll mostly do what they promise. The b2c-crypto relationship is common between Internet service providers (in the broad sense, including Google, Amazon, etc.) and typical Internet users, as well as between employees and their employer’s IT department.

Web-browser SSL is a great example of b2c crypto. Users start with a computer that has at least one web browser with a set of root certs. Users can continue using that browser or download, over SSL secured by those initial root certs, another browser they trust more. Users then trust their preferred browser’s security indicators when they shop on Amazon or read their Gmail.

A critical feature of b2c crypto is that users don’t ever manage crypto keys. At best they manage a password, and even then they’re generally able to reset it. Users make trust decisions based on brands and hopefully clear security indicators: I want a Mac, I want to use Firefox, and I want to shop on Amazon but only when I see the green lock icon.
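
To make this concrete, here’s a minimal sketch of the user’s side of b2c crypto in Python. Nothing here is specific to any real service; the point is that every trust decision reduces to “use the root certs my platform shipped with”:

    import socket
    import ssl

    # The b2c pattern: the platform ships a set of trusted root certs,
    # and the user never generates or manages a key of their own.
    context = ssl.create_default_context()  # loads the system's root certs

    with socket.create_connection(("www.amazon.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.amazon.com") as tls:
            # The hostname and certificate chain were verified against those
            # roots; the user's only job was to trust the browser/OS vendor.
            print(tls.version(), tls.getpeercert()["subject"])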

b2b crypto

Business-to-Business (b2b) crypto is used to secure the relationship between organizations, two or more at a time. There are two defining characteristics of b2b crypto:

  • all participants are expected to manage crypto keys
  • end-users are generally not involved or burdened

DKIM is a good example of b2b crypto. Organizations sign their outgoing emails and verify signatures on incoming emails. Spam and phishing are reduced, and end-users see only the positive result without being involved in the process. Organizations must maintain secret cryptographic keys for signing those emails and know how to publish their public keys (usually in DNS) to inform other organizations.
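
For a sense of the signing side, here’s a rough sketch using the third-party dkimpy package. The selector, domain, key file, and addresses are all placeholders; a real deployment would publish the matching public key at selector1._domainkey.example.com in DNS:

    import dkim  # third-party "dkimpy" package

    # The organization's long-lived signing key (placeholder file name).
    with open("dkim_private.pem", "rb") as f:
        private_key = f.read()

    message = (
        b"From: alerts@example.com\r\n"
        b"To: user@example.org\r\n"
        b"Subject: hello\r\n"
        b"\r\n"
        b"body text\r\n"
    )

    # Returns a DKIM-Signature header to prepend to the outgoing message;
    # receiving organizations verify it against the key published in DNS.
    signature_header = dkim.sign(message, b"selector1", b"example.com", private_key)
    signed_message = signature_header + message

End-users never see any of this; they just see less spam.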

OAuth qualifies as b2b crypto. Consumers and Producers of Web APIs establish shared secret credentials and use them to secure API calls between organizations.
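
The shape of that, roughly (a generic HMAC-signed request rather than the full OAuth 1.0 signature base string; the secret and URL are made up):

    import hashlib
    import hmac

    # Shared secret established between the two organizations out of band.
    API_SECRET = b"s3cret-shared-between-orgs"

    def sign_request(method: str, url: str, body: bytes) -> str:
        """Sign an API call so the receiving organization can verify its origin."""
        payload = method.encode() + b"\n" + url.encode() + b"\n" + body
        return hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()

    signature = sign_request("POST", "https://api.example.com/v1/orders", b'{"id": 42}')
    # The caller sends the signature in a header; the receiver recomputes it
    # and compares with hmac.compare_digest() before trusting the call.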

Another good example is SSL certificate issuance. Both the web site seeking a certificate and the Certificate Authority have to generate and secure secret keys. The complexity of the certification process is mostly hidden from end-users.

p2p crypto

Peer-to-Peer (p2p) crypto is used to secure communication between two or more crypto-savvy individuals. The defining characteristic of p2p crypto is that the crypto-savvy individuals trust no one by default. They tend to run code locally, manage crypto keys, and assume all intermediaries are active attackers.

PGP is a great example of p2p crypto. Everyone generates a keypair, and by default no one trusts anyone else. Emails are encrypted and signed, and if you lose your secret key, you’re out of luck.
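
The mechanics, in a minimal sketch (using PyNaCl as a stand-in for PGP itself, but the p2p shape is identical: each peer guards their own secret key, and there’s no authority to reset it):

    from nacl.public import Box, PrivateKey

    # Each peer generates and guards their own keypair.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts directly to Bob's public key; every intermediary
    # is assumed hostile and sees only ciphertext.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

    # Only Bob's secret key opens it. Lose the key, lose the messages.
    plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)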

so how does this help?

This naming scheme provides a clear shorthand for delineating crypto solutions. Is your wonderful crypto solution targeted at the general public? Then it’s probably a combination of b2c crypto for users and b2b crypto for organizations that support them. Are you building a specialized communications platform for journalists in war zones? Then it’s probably p2p crypto.

The implementation techniques we use for various kinds of crypto differ. So when some folks write that Javascript Crypto is considered harmful, I can easily respond “yes, dynamically-loaded Javascript is a poor approach for p2p crypto, but it’s great for b2c crypto.” In fact, when you look closely at a similar criticism of Javascript crypto from Tony Arcieri, you see this same differentiation, only with much more verbiage because we don’t have clear terminology:

Before I keep talking about where in-browser cryptography is inappropriate, let me talk about where I think it might work: I think it has great potential uses for encrypting messages sent between a user and the web site they are accessing. For example, my former employer LivingSocial used in-browser crypto to encrypt credit card numbers in-browser with their payment processor’s public key before sending them over the wire (via an HTTPS connection which effectively double-encrypted them). This provided end-to-end encryption between a user’s browser and the LivingSocial’s upstream payment gateway, even after HTTPS has been terminated by LivingSocial (i.e. all cardholder data seen by LivingSocial was encrypted).

This is a very good thing. It’s the kind of defense that can prevent the likes of the attack against Target’s 40M customers last month. And that’s exactly the point of b2c crypto.
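
A sketch of that pattern, with a NaCl sealed box standing in for the payment processor’s actual RSA key (all names and numbers made up):

    from nacl.public import PrivateKey, SealedBox

    # Stand-in for the processor's keypair; the browser would only ever
    # ship with the public half.
    processor_sk = PrivateKey.generate()
    processor_pk = processor_sk.public_key

    # Client side: encrypt the card number to the processor before it leaves
    # the browser; the merchant's servers relay only this ciphertext.
    card_number = b"4111 1111 1111 1111"  # standard test number
    ciphertext = SealedBox(processor_pk).encrypt(card_number)

    # Processor side: the only party able to decrypt.
    assert SealedBox(processor_sk).decrypt(ciphertext) == card_number

Even if the merchant’s HTTPS termination point is compromised, the attacker sees only ciphertext destined for the processor.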

most users can’t manage crypto keys

I use the term p2p crypto because I like to think of it as “Pro-to-Pro.” Expecting typical Internet users to engage in p2p crypto is, in my opinion, a pipe dream: typical users can’t manage secret crypto keys, so they must rely on organizations to do that for them. That’s why successful general-public crypto is a combination of b2c crypto between individuals and the organizations they choose to trust, and b2b crypto across organizations. More expertise and care are expected of the organizations, little is expected of individual users, and some trust is assumed between a user and the organizations they choose.

You don’t have to agree with me on this point to agree with the nomenclature. If you’re interested in protocols where individuals manage their own secret keys and don’t trust intermediaries, you’re interested in p2p crypto. I happen to think that p2p crypto is applicable only to some users and some specific situations.

on cooking turkey and solving problems

On Thursday, my wife and I hosted our 10th Thanksgiving. We both enjoy cooking and baking, though we remain clearly amateurs and tend to make it up as we go along. There was that one time we realized, the night before Thanksgiving, that a frozen 15-pound turkey requires 3 days to defrost in the fridge. I stayed up most of the night, soaking the bird in the bathtub.

We’ve gotten better over time: she focuses on stuffing and cranberry sauce, me on turkey and dessert, and we collaborate on some kind of sweet potato dish. The stress almost always centers on how long to roast the turkey and whether it’s fully cooked. For the last 4 years, we’ve had the added (but wonderful) complexity of little kids eager to eat. We once had great luck with high heat and starting the bird breast-side down, but we were never able to recreate that success.

This year, I reached out on Twitter, and a former student and fellow web hacker sent back a recommendation: spatchcock the turkey.

“What in the world is spatchcocking,” I thought. A how-to video was enough to convince me to try it.

(The New York Times also has a nice take on the technique.)

Roughly, spatchcocking involves cutting out the turkey’s backbone, then breaking open the bird and laying it out flat. One layer of meat, all on the same plane, with the dark meat slightly protecting the white meat, which is what you want since white meat cooks faster. The technique promises shorter cook times, more even cooking, and juicier meat.

Turns out, it’s all absolutely true. Preparation was easy and eminently repeatable, with little risk of screwing things up. The bird cooked in about 2 hours, where typically it would have required 4. The whole turkey cooked at the same speed. The result: amazing fully cooked dark meat, juicy white meat, perfectly crispy skin, and plenty of oven time left for an apple pie and stuffing. Everyone at the table agreed: this was the best turkey I’ve ever cooked by far. Even the little kids ate triple portions.

So what’s the downside? Well, people claim there are two: (a) you can’t stuff the turkey and (b) you can’t present a typical, whole roasted turkey. Instead you’ve got a weird flat thing that indicates you got really angry in the kitchen. Neither of these matters to me, and I’ll go out on a limb and say they should matter little to most people: stuffing a raw turkey significantly increases the risk of food poisoning, and, as it turns out, being forced to carve the turkey before presenting it made serving the meal much easier.

So, first lesson: I will only cook spatchcock turkeys from now on.

And second lesson: even after 10 years of doing something, it’s still possible to find a solution that is faster, simpler, and better, with no real downsides. What’s crazy is that the solution is already out there, used by some, just not widely. Crazier still is that many people know about it, they just refuse to try it because there are “downsides” or the solution is unusual.

But what if the downsides are rhetorical at best? What if it’s really all upside?

I can’t help but link this to software engineering and problem-solving more broadly. There are so many technical solutions we simply accept as necessary and necessarily hard. We fail to search for simpler solutions, even when they already exist. Or if we know about them, we choose to ignore them because they seem too simple, too good to be true. We make up excuses, we make up theoretical downsides.

Why not stick to simple? There’s not necessarily a real tradeoff. Sometimes, even often, you can do faster, simpler, and better. I’m going to work to keep that in mind in everything I do. Kindergarten selection for my kids. Financial planning. And especially software.

Before going complicated, have you tried spatchcocking? The result might just be delicious.

Letter to President Obama on Surveillance and Freedom

Dear President Obama,

My name is Ben Adida. I am 36, married, two kids, working in Silicon Valley as a software engineer with a strong background in security. I’ve worked on the security of voting systems and health systems, on web browsers and payment systems. I enthusiastically voted for you three times: in the 2008 primary and in both presidential elections. When I wrote about my support for your campaign five years ago, I said:

In his campaign, Obama has proposed opening up to the public all bill debates and negotiations with lobbyists, via TV and the Internet. Why? Because he trusts that Americans, when given the tools to see and understand what their legislators are doing, will apply pressure to keep their government honest.

I gushed about how you supported transparency as broadly as possible, to enable better decision making, to empower individuals, and to build a better nation.

Now, I’m no stubborn idealist. I know that change is hard and slow. I know you cannot steer a ship as big as the United States as quickly as some would like. I know tough compromises are the inevitable path to progress.

I also imagine that, once you’re President, the enormity of the threat from those who would attack Americans must be overwhelming. The responsibility you feel, the level of detail you understand, must make prior principles sometimes feel quaint. I cannot imagine what it’s like to be in your shoes.

I also remember that you called on us, your supporters, to stay active, to call you and Congress to task. I want to believe that you asked for this because you knew that your perspective as Commander in Chief would inevitably become skewed. So this is what I’m doing here: I’m calling you to task.

You are failing hard on transparency and oversight when it comes to NSA surveillance. This failure is not the pragmatic compromise of Obamacare, which I strongly support. It is not the sheer difficulty of closing Guantanamo, which I understand. This failure is deep. If you fail to fix it, you will be the President principally responsible for the effective death of the Fourth Amendment and worse.

mass surveillance

The specific topic of concern, to be clear, is mass surveillance. I am not concerned with targeted data requests, based on probable cause and reviewed individually by publicly accountable judges. I can even live with secret data requests, provided they’re very limited, finely targeted, and protect the free-speech rights of service providers like Google and Facebook to release appropriately sanitized data about these requests as often as they’d like.

What I’m concerned about is the broad, dragnet NSA signals intelligence recently revealed by Edward Snowden. This kind of surveillance is a different beast, comparable to routine frisking of every individual simply for walking down the street. It is repulsive to me. It should be repulsive to you, too.

wrong in practice

If you’re a hypochondriac, you might be tempted to ask your doctor for a full-body MRI or CT scan to catch health issues before any symptoms appear. Unfortunately, because of two simple probabilistic principles, you’re much worse off if you get the test.

First, it is relatively unlikely that a random person with no symptoms has a serious medical problem, i.e., the prior probability is low. Second, it is quite possible — not likely, but possible — that a completely benign thing appears potentially dangerous on imaging, i.e., there is a noticeable chance of a false positive. Put those two things together, and you get this mind-bending outcome: even if the full-body MRI says you have something to worry about, odds are you still have nothing to worry about. But try convincing yourself of that if you get a scary MRI result.
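
To make the arithmetic concrete, here’s a quick sketch; all three input numbers are assumptions for illustration only:

    # Bayes' rule applied to a hypothetical full-body scan.
    prior = 0.001          # 1 in 1,000 asymptomatic people has a serious problem
    sensitivity = 0.99     # the scan catches 99% of real problems
    false_positive = 0.05  # 5% of healthy people get a scary-looking result

    # P(problem | scary result), via Bayes' rule
    p_scary = sensitivity * prior + false_positive * (1 - prior)
    p_problem = sensitivity * prior / p_scary

    print(f"{p_problem:.1%}")  # ~1.9%: even after a scary result, the odds
                               # are overwhelmingly that you're fine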

Mass surveillance to seek out terrorism is basically the same thing: the prior probability that any given person is a terrorist is very low, and it is quite possible that normal behavior appears suspicious. Mass surveillance means wasting tremendous resources on dead ends. And because we’re human and we make mistakes when given bad data, mass surveillance sometimes means badly hurting innocent people, like Jean Charles de Menezes.

So what happens when a massively funded effort has frustratingly poor outcomes? You get scope creep: the surveillance apparatus gets redirected to other purposes. The TSA starts overseeing sporting events. The DEA and IRS dip into the NSA dataset. Anti-terrorism laws with far-reaching powers are used to intimidate journalists and their loved ones.

Where does it stop? If we forgo due process for a certain category of investigation which, by design, will see its scope broaden to just about any type of investigation, is there any due process left?

wrong on principle

I can imagine some people, maybe some of your trusted advisors, will say that what I’ve just described is simply a “poor implementation” of surveillance, that the NSA does a much better job. So it’s worth asking: assuming we can perfect a surveillance system with zero false positives, is it then okay to live in a society that implements such surveillance and detects any illegal act?

This has always felt wrong to me, but I couldn’t express a simple, principled, ethical reason for this feeling, until I spoke with a colleague recently who said it better than I ever could:

For society to progress, individuals must be able to experiment very close to the limit of the law and sometimes cross into illegality. A society which perfectly enforces its laws is one that cannot make progress.

What would have become of the civil rights movement if all of its initial transgressions had been perfectly detected and punished? What about gay rights? Women’s rights? Is there even room for civil disobedience?

Though we want our laws to reflect morality, they are, at best, a very rough and sometimes completely broken approximation of morality. Our ability as citizens to occasionally transgress the law is the force that brings our society’s laws closer to our moral ideals. We should reject mass surveillance, even the theoretically perfect kind, with all the strength and fury of a people striving to form a more perfect union.


Mr. President, you have said that you do not consider Edward Snowden a patriot, and you have not commented on whether he is a whistleblower. I ask you to consider this: if you were an ordinary citizen, living your life as a Law Professor at the University of Chicago, and you found out, through Edward Snowden’s revelations, the scope of the NSA mass surveillance program and the misuse of the accumulated data by the DEA and the IRS, what would you think? Wouldn’t you, like many of us, be thankful that Mr. Snowden risked his life to give we the people this information, so that we may judge for ourselves whether this is the society we want?

And if there is even a possibility that you would feel this way, given that many thousands do, if government insiders believe Snowden to be a traitor while outsiders believe him to be a whistleblower, is that not all the information you need to realize the critical positive role he has played, and the need for the government to change?

the time to do something is now

I still believe that you are, at your core, a unique President who values a government by and for the people. As a continuing supporter of your Presidency, I implore you to look deeply at this issue, to bring in outside experts who are not involved in national security. This issue is critical to our future as a free nation.

Please do what is right so that your daughters and my sons can grow up with the privacy and dignity they deserve, free from surveillance, its inevitable abuses, and its paralyzing force. Our kids, too, will have civil rights battles to fight. They, too, will need the ability to challenge unjust laws. They, too, will need the space to make our country better still.

Please do not rob them of that opportunity.


Ben Adida

security is hard, let’s improve the conversation

A few days ago, a number of folks were up in arms over the fact that you can see saved passwords in your Google Chrome settings. Separately, a few folks got really upset about how Firefox no longer provides a user interface for disabling JavaScript. These flare-ups make me sad, because the conversations are often deeply disrespectful, with a tone implying obvious negligence or stupidity. There’s too little subtlety in the discussion, not enough respectful exchange.

Security is hard. I don’t mean that you have to work really hard to do the right thing, I mean that “the right thing” is far from obvious. What are you defending against? Does your solution provide increased security in a real-world setting, not just in theory? Have you factored in usability? Is it security theater? And is security theater necessarily a bad thing?

These are subtle discussions. Let’s discuss openly and respectfully. Let’s ask questions, understand threat model differences, and contribute to improving security for real. In particular, let’s take into account typical user behavior, which can easily negate the very best security in favor of convenience.

Let’s talk examples.

writing your passwords down

Recently, I had to create a brand new complicated password. I pulled out a sheet of paper, thought of a password, wrote it down, and put the piece of paper in my wallet. Someone said to me “did you just write that password down?” I said yes. The snarky response came back: “you should never write passwords down.” Maybe you’ve said this yourself, to a relative, friend, or co-worker?

Except it’s not that simple. Bruce Schneier recommends writing down your passwords so you’re not tempted to use one that’s too simple in order to remember it. Oftentimes, you should be more worried about the remote network attacker than people who have physical access to your machine.

But don’t feel bad about it. You’re not stupid for telling your poor aging parents to pick long impossible-to-remember passwords and then never write them down. That’s what many experts said for years. This stuff is hard. It’s worth discussing, exploring, and finding the appropriate balance of security and convenience for the application at hand. The answer won’t be the same for everyone and everything.

Google Chrome passwords

Yes, it’s true, you can, in a few seconds, view in cleartext all the passwords saved within a Google Chrome browser. But did you know you can do it in Firefox and Safari, too? With just about the same number of clicks? Are you having second thoughts about your immediate gut reaction of pure disgust at Chrome’s apparent sloppiness?

There are good reasons why you might legitimately want to read your passwords out of your browser. Most of the time, if you give your computer to someone you don’t trust, you’re kind of screwed anyways. But it’s subtle. It’s not quite the same thing to have access to your computer for a few minutes and to actually have your password. In the first case, someone can mess with your Facebook profile for a few seconds. In the second, they can get your password and log in as you on a different machine, wreaking havoc on your life for an extended period of time. So maybe it’s worth a discussion, maybe you can’t play security reductionism. Maybe the UI to view your passwords shouldn’t exist.

Would that then be security theater, since, as Adrienne Felt points out, you can install an extension that opens up a bunch of tabs and lets the password manager auto-fill them all, then steals the actual passwords? Maybe. It’s worth a discussion. In fact I like the discussion Adrienne, Joe, and I are having: it’s respectful and balanced, though limited by Twitter.

Is this fixed by Firefox’s Master Password? Sort of, if you believe that addressing the problem for a tiny percentage of the population is a “solution,” and if you assume those users will know to quit their browser every time they leave their computer unattended. Still, it’s worth pointing out the Master Password solution and evaluating its real-world efficacy.

Disabling Javascript in Firefox

As of version 23, Firefox has removed the user interface that lets a user turn off Javascript, and some folks call that lame. Why is Firefox removing user choice?

OK, so let’s consider the average Web user. Do they know what “disabling Javascript” does? If they do, is it much harder for them to use an add-on like NoScript? If they don’t, what is the benefit of offering that option, knowing that too many options is always a bad thing? Some people believe Javascript is so integral to the modern Web that disabling it is as sensible as disabling images, iframes, or the audio tag. Others believe the Web should always gracefully degrade and be fully functional without Javascript.

This is a very reasonable discussion to have. The answer isn’t obvious. My opinion is that Javascript is part of the modern Web, giving users a blunt “disable Javascript” button is practically useless, and add-ons are a fine path if you want to surf the Web with one hand tied behind your back. I have no beef with anyone who disagrees with me. I do have a beef with people who call this decision obviously stupid and see only downsides.

The Web is not that simple. Security is not that simple. And people, most importantly, are not that simple.

Let’s build a better way to discuss security. Never disrespectful, always curious. That’s how we improve security for everyone.


In about a month, I’ll be starting at Square as a Tech Lead on a new project. I’m incredibly excited for a few key reasons:

  1. team: oodles of amazingly sharp people. The interview process was simply amazing, both in how much it forced me to demonstrate as an engineer and in how much I learned about the existing team. I know I’m going to learn a ton. It’s also really nice to see Square’s engineering team contributing significant open-source code.
  2. product: it’s hard to think of a more product-focused company. The Square products (Register, Wallet, Cash, Market) are amazing. The focus on user experience is central to every conversation, and it shows.
  3. mission: Square wants to make commerce easy for businesses of all sizes. This translates in particular into major opportunities for small businesses. And this, in my mind, is what technology is for: to empower the little guys.

For the first time in a long time, my job will require a bit of secrecy. That will be an interesting adjustment for me. On this blog, I’ll continue to write what I think — not what my employer thinks — about technology, policy, etc.

For now, back to vacation. Square team: see you mid August!


Today is my last day at Mozilla. It’s been an amazing ride, and I’m incredibly proud of the Identity Team and of the work we produced together, notably Persona. The team and project are now in the incredibly capable hands of my friend Lloyd Hilaiel. I expect to see continued fantastic work from this team, and I’ll miss everyone dearly. Mozilla is a special place, and I’m grateful I had the chance to experience it firsthand.

I’ll be taking a break for a few weeks. You might see me on this blog and on Twitter from time to time, and I might even tend to Helios Voting a little bit, which has gotten far too little love from me lately. But mostly, I’ll be reading, relaxing, spending time with family. I’m excited about what comes next, and I’ll talk about that more in a few days.

no user is an island

US government agencies appear to be engaged in large-scale Internet surveillance, using secret court orders to force major Internet companies to provide assistance. The extent of this assistance is a topic of debate. What’s clear, though, is that the process itself is opaque: it’s impossible to know how broad or inappropriate the surveillance may be.

OK, so what do we do about it?

told you so, never shoulda trusted the Cloud

Some folks see this as vindication: we never should have trusted the Cloud. Only trust yourself, generate your own keypairs, encrypt all traffic, host your own email, etc. Servers are evil and should be considered leaky stupid passthroughs for fully encrypted data.

First, this is naive. If government agencies believe they have the authority to monitor all Internet traffic, would they hesitate to create viruses that infect and monitor endpoints? Would they hesitate to force software and hardware vendors to build secret backdoors into their products? It is the engineer’s mistake to believe that Law Enforcement will stop cleanly at technical abstraction layers. If the goal is total surveillance, the financial means immense, the arm-twisting strength unlimited, the oversight nonexistent… what would you do in their position?

Second, if, like me, you agree that technology experts have a duty to build solutions that matter to laypeople, it’s also irresponsible. None of these paranoid solutions are accessible to laypeople. Can you imagine Grandpa with his fingerprint-activated USB key holding his 2048-bit RSA secret key and surfing the Web via Tor, proclaiming “not me, I will fight the man!”? Yeah. (And if you’re thinking “no Grandpa, not RSA! Elliptic curves!” well, thank you for making my point for me.)

So enough with this la-la land of users as fortified islands communicating via torpedo-proof-ciphertext-carrying submarines. People engage with others by way of intermediaries they trust, for that is the basis of all human interaction and commerce since the dawn of time. Let us build systems, both technical and legal, that start there.

protect user data wherever it lives

We can build systems that start with respect for the user and her data, wherever it lives. On Facebook servers, on Google servers, on self-hosted servers, on private computers. Encrypted or not encrypted. We can and should use cryptography to secure channels from those who would disrespect user data, reduce data collection to that which is useful, and generally build defense in depth against bad actors. We should stop wasting time on systems that impose the resulting complexity on users. Government access to user data should follow a clear, transparent process that is consistent wherever the data happens to be stored, however it happens to be encrypted.

Let’s build that system together. Not by barricading ourselves on our lonely islands of encryption and onion-routing. But by building the legal and technical framework we need to respect users and their data. Mozilla and Google have started. I’m hopeful many more will join.

a hopeful note about PRISM

You know what? I’m feeling optimistic suddenly. Mere hours ago, all of us tech/policy geeks lost our marbles over PRISM. And in the last hour, we’ve got two of the most strongly worded surveillance rebuttals I’ve ever seen from major Internet Companies.

Here’s Google’s CEO Larry Page:

we provide user data to governments only in accordance with the law. Our legal team reviews each and every request, and frequently pushes back when requests are overly broad or don’t follow the correct process. Press reports that suggest that Google is providing open-ended access to our users’ data are false, period. Until this week’s reports, we had never heard of the broad type of order that Verizon received—an order that appears to have required them to hand over millions of users’ call records. We were very surprised to learn that such broad orders exist. Any suggestion that Google is disclosing information about our users’ Internet activity on such a scale is completely false.

And here’s Mark Zuckerberg of Facebook:

Facebook is not and has never been part of any program to give the US or any other government direct access to our servers. We have never received a blanket request or court order from any government agency asking for information or metadata in bulk, like the one Verizon reportedly received. And if we did, we would fight it aggressively. We hadn’t even heard of PRISM before yesterday.

Both companies emphasize transparency around government data requests as a critical component of moving forward. I couldn’t agree more. We need to know about every legal process in place that gives government access to private user data.


Could PRISM mark a tech-world epiphany that users care about privacy? I hope so. It certainly seems that major PR departments think so. Unequivocally worded responses from major Internet CEOs within 24 hours mean they care. This is a good thing.

retreat is the wrong reaction

I’ve heard folks argue that PRISM means we need to bet it all on end-to-end encryption. I think that’s wrong, because that doesn’t fulfill users’ needs. But even putting that aside: if you believe the government is willing to penetrate professionally managed corporate servers without company permission or legal clarity, do you sincerely believe the government wouldn’t also penetrate your personal computer and steal the data before you encrypt it?

Services and data aggregation play a critical role in providing users the features they need to share, discover, and grow. They’re not going away. Don’t expect PRISM to herald the era of end-to-end encryption and dumb servers. Those will continue to play only a limited role for very specific use cases.

What we need is (1) companies that deeply respect users, and (2) legal processes that protect user data wherever it lives. I think we’re seeing the beginning of (1). Now, Obama, over to you for (2).