A few days ago, a security bug was discovered on Facebook, whereby users could see the chat transcripts of their friends talking to other friends. Then another security hole was discovered: a problem at Yelp revealed the email addresses of Facebook users. And today, Google realized that they accidentally collected network traffic from open wi-fi connections while gathering street-view data.
In every instance, the companies involved didn’t mean to cause these data breaches. In every instance, they would gladly pay serious cash to prevent these bugs, given the negative publicity they cause. In every instance, most security folks I know are unfazed by this news, which they find quite unsurprising. And in every instance, the companies in question, Facebook and Google, reacted admirably: rapid disclosure, rapid response.
If you’re shocked, outraged, writing lengthy TechCrunch posts about these developments, you probably don’t understand computer security very well, and you’d better sit down with a stiff drink, because these issues are the tip of the iceberg. Accidental breaches happen all the time. Writing secure software is incredibly difficult, especially when, like Google and Facebook, you’re pushing the envelope and releasing features as quickly as possible to outpace your competitors.
We do not know how to write secure software. We do not know how to ensure that the software we write follows a given set of policies. Anyone who tries to sell you a perfectly secure system is either lying to you or doesn’t know what he’s talking about.
There are measures we can take to minimize risk, but they aren’t very sexy. Mistakes will be made, so the point is to minimize the bad consequences of those inevitable mistakes. Think of the awfully boring things you’ve heard security geeks tell you:
- don’t store user passwords in the clear.
- don’t make up your own crypto protocols, be conservative.
- defend in depth: even if you’ve got one defense, it’s always a good idea to have other defenses against the same potential breach.
- look upon large data aggregations with deep skepticism: Google, Facebook, government DNA databases, etc.
- look upon wide-ranging integration of disparate systems with deep skepticism: OpenID, Facebook Instant Personalization, etc.
- be skeptical of software vetting claims, i.e. the Apple app store.
- ask that new protocols do much more than be “no less secure” than existing protocols (I’m looking at you, OAuth 2.0, no less secure than cookies).
- demand that the public be informed of all significant data breaches.
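To make the first item above concrete, here is a minimal sketch of salted password hashing using only Python's standard library (the function names and parameter choices are mine, for illustration): instead of storing the password, you store a random salt plus a slow key-derivation digest, so a stolen database doesn't immediately yield every user's password.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # deliberately slow to make brute-forcing expensive

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the password itself."""
    salt = os.urandom(16)  # unique per user, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # constant-time comparison, so timing doesn't leak how many bytes matched
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

Even this tiny sketch embodies the boring advice: no plaintext passwords, no home-grown crypto (PBKDF2 and a constant-time compare come from the standard library), and a design that limits the damage when, not if, the database leaks.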
So freaking boring, right? But that’s real security. Minimize the target, minimize the risk, maximize the defenses, and always seek to strengthen security measures, because attacks only get stronger, too. And finally, give companies an incentive to constantly improve their defenses by mandating disclosure.
If you’re shocked about the Facebook and Google bugs, just remember that those are the bugs we know about. There are a whole bunch more we don’t know about that Google and Facebook caught before any harm occurred. And there are a whole bunch more that nobody but bad guys know about that are actively causing harm right this minute. So the question is, as a software engineer, did you take those boring precautions to limit the damage? As a user, did you consider what data you’re storing with whom, how big a target they may be to attackers, and how motivated they are to truly secure your data?