US government agencies appear to be engaged in large-scale Internet surveillance, using secret court orders to force major Internet companies to provide assistance. The extent of this assistance is a topic of debate. What’s clear, though, is that the process itself is opaque: it’s impossible to know how broad or inappropriate the surveillance may be.
OK, so what do we do about it?
told you so, never shoulda trusted the Cloud
Some folks see this as vindication: we never should have trusted the Cloud. Only trust yourself, generate your own keypairs, encrypt all traffic, host your own email, etc. Servers are evil and should be considered leaky stupid passthroughs for fully encrypted data.
First, this is naive. If government agencies believe they have the authority to monitor all Internet traffic, would they hesitate to create viruses that infect and monitor endpoints? Would they hesitate to force software and hardware vendors to build secret backdoors into their products? It is the engineer’s mistake to believe that Law Enforcement will stop cleanly at technical abstraction layers. If the goal is total surveillance, the financial means immense, the arm-twisting strength unlimited, and the oversight nonexistent… what would you do in their position?
Second, if, like me, you agree that technology experts have a duty to build solutions that matter to laypeople, it’s also irresponsible. None of these paranoid solutions are accessible to laypeople. Can you imagine Grandpa with his fingerprint-activated USB-key holding his RSA-2048-bit secret key and surfing the Web via Tor proclaiming “not me, I will fight the man!” Yeah. (And if you’re thinking “no Grandpa, not RSA! Elliptic curves!” well, thank you for making my point for me.)
So enough with this la-la land of users as fortified islands communicating via torpedo-proof-ciphertext-carrying submarines. People engage with others by way of intermediaries they trust, for that is the basis of all human interaction and commerce since the dawn of time. Let us build systems, both technical and legal, that start there.
protect user data wherever it lives
We can build systems that start with respect for the user and her data, wherever it lives. On Facebook servers, on Google servers, on self-hosted servers, on private computers. Encrypted or not encrypted. We can and should use cryptography to secure channels from those who would disrespect user data, reduce data collection to that which is useful, and generally build defense in depth against bad actors. We should stop wasting time on systems that impose the resulting complexity on users. Government access to user data should follow a clear, transparent process that is consistent wherever the data happens to be stored, however it happens to be encrypted.
Let’s build that system together. Not by barricading ourselves on our lonely islands of encryption and onion-routing. But by building the legal and technical framework we need to respect users and their data. Mozilla and Google have started. I’m hopeful many more will join.
17 responses to “no user is an island”
Also https://newsroom.fb.com/Fact-Check (FB is my employer)
Ben, I could not agree more. Why are these not showing up on Planet? Please try to get them on there, there’s way too much FUD coming out about Prism, we could use more public stabilizing influences.
[…] Additional Perspectives from Mozilla: Mitchell Baker’s post, “Total Surveillance” Ben Adida’s post, “No User is an Island” […]
I’m interested to know your thoughts on a possible next stage of this. Assume that either we fail to produce good law, or that there is a layer circumventing the law that we know about. That is, you have *only* a technical framework within which to solve this problem; you cannot rely on government actors to behave in any way that you want them to. We can call this “the present day”, I suppose. Do you think that there’s no pure-technical solution, or no tech+education solution? Or do you reject the premise that there’s something to ‘solve’, either due to a certain view on privacy, or a belief that all acceptable solutions have a legal component?
Oh, and I agree that most crypto fantasies fall down hard when faced with the realities of human social structures.
Ben’s post is on Planet! That’s how I found it 🙂
Richard: great question.
I do think there is something to solve: users should control their data and, short of transparent due process, that includes protecting it from the government.
I don’t think there is a purely technical solution to this problem for lay users. Expert users may be able to defend themselves if they’re particularly careful… but I don’t think that’s relevant to the conversation since I think we need to be thinking about most users.
Chess has “rules”. If your opponent plays by those rules, great! If not, then your opponent is not playing chess.

My colon surgeon did not play by the “rules”: he did not tell me, or get my written permission, before directly and intentionally incapacitating my memory for approximately half an hour. My cousin had an allergy-testing nurse not play by the “rules”: she did not read my cousin’s chart to check for alcohol allergy before wiping my cousin’s arm with an alcohol wipe!

So “laws” are not much use if the sociological equipment (people) they “run on” malfunctions by not following them.
That’s a fair point, though I’m not sure how much of a difference that part can make in this particular discussion. Do you think there’s a big win in the surveillance game here? Certainly on the transport side, I think it deflates your “naive” point a bit: “If government agencies believe they have the authority to monitor all Internet traffic, would they hesitate to create viruses that infect and monitor endpoints? Would they hesitate to force software and hardware vendors to build secret backdoors into their products?” The answer is yes, they could do those things, but those kinds of attacks are less threatening because they don’t scale so easily and are more detectable and preventable.
Reading your posts on “encryption is not magic gravy”, I sense you’re a bit burned out by the Sync key management issue (and you’re not alone!). But maybe that’s because you’ve been focused on the most challenging part of the problem: provisioning users with truly secret keys (modulo dictionary attacks against passwords), and secure authentication. Once we can rely on PICL and Persona to solve those, it seems to me it’s relatively easy to rearchitect a lot of applications to distrust servers, with no additional UX overhead.
Robert: maybe I didn’t make the point clearly enough in that blog post 🙂 It’s not that I’m burnt out, it’s that key management done by users is simply not a viable solution, in my opinion. PICL will help by giving you a password-derived key, so if you remember your password you’re fine. But if you don’t, you will lose some data. So we’re discussing which data is okay to lose if you both forget your password and lose all your devices. Some data may not be okay to lose, and in that case we would store it in a recoverable way in PICL.
In other words, if we want certain features like recoverability, we have to trust servers.
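The tradeoff Ben describes can be sketched in a few lines. This is a minimal, hypothetical illustration (function names, salt handling, and the iteration count are illustrative assumptions, not PICL’s actual design): a key derived from the user’s password protects data even from the server, but nothing the server holds can reconstruct that key if the password is forgotten.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte key from a password via PBKDF2-HMAC-SHA256.

    The iteration count here is illustrative only; a real deployment
    would tune it, and might use a stronger KDF.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# Same password + same salt => same key, so data encrypted under it
# is recoverable as long as the user remembers the password.
assert derive_key("correct horse battery staple", salt) == key

# A forgotten password cannot be recovered from anything the server
# stores (it only ever sees the salt, never the password or key), so
# data encrypted only under this key is lost with the password.
assert derive_key("incorrect guess", salt) != key
```

This is why the discussion above turns on *which* data is acceptable to lose: anything that must survive a forgotten password has to be stored in a server-recoverable way, which means trusting the server with it.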
Sure, they are more detectable. But in practice, I don’t think they are sufficiently more detectable to make a big difference. I also don’t think they are more preventable. 0days are on the market constantly.
At the core of your argument is the idea that we *can* make this better with technology, and I strongly disagree with it. Most features users rely on today require trusting servers.
I just don’t see the possibility of a government like the US government deploying 0days against millions of users. That would be an incredibly risky strategy compared to harvesting data from servers.
When you say “most features users rely on today”, what services are you thinking of?
I guess I don’t think it would be nearly that risky. Given how many aggressive viruses are out there on a regular basis, hiding among them is not that terribly hard.
As for services users rely on. Google email filtering. Google Calendar. Facebook sharing. And on and on. None of these can be done without trusting servers.
I think Google Calendar and Facebook sharing can be done. Spam filtering’s a hard one :-). I’ll give it some thought; I don’t want to waste any more of your time.
This snooping has been going on a lot longer than 9/11. The US government has long opposed the export of strong cryptography precisely because it would keep them from snooping. They placed strong cryptographic technology on the munitions export list, regulating it the same as a weapon of war.
I remember the government persecution of privacy advocate Phil Zimmermann in the early ’90s. He had the temerity to publish the cryptographic source code for PGP. Because the source was public, it would have been hard to engineer back doors into PGP the way they apparently did with Microsoft (http://www.cnn.com/TECH/computing/9909/03/windows.nsa.02/). Remember, this was all pre-9/11.