When thinking about security, it is tempting to determine the worst-case attacker and focus defenses against it. (Of course, by worst-case, I mean within the bounds of a reasonable threat model: the NSA is not a reasonable worst-case attacker for every problem.) A corollary to this reasoning goes something like this: well, I’ve already implemented shield X, and if an attacker can defeat shield X, then they can probably also defeat shield Y, so I don’t need to implement shield Y because it’s useless.
That’s misguided. There may be some very good reasons to implement shield Y.
Consider the utility of a safety parachute. A determined attacker trying to kill you can obviously sabotage the safety parachute just as easily as the primary one. So, does that mean you might as well jump without a safety parachute? Of course not. You want to take into account not just the worst-case attacker but also your own stupidity. A safety parachute means that, if you packed your primary wrong, you can still live. Defense in depth, as this idea is more commonly known in the security community, is usually not about building the 12 layers of security around the “Die Hard” vault that a skilled attacker has to vanquish, one by one. Defense in depth is the humble realization that, of all the security measures you implement, a few will fail because of your own stupidity. It’s good to have a few backups, just in case.
In my previous post regarding Twitter, two people I greatly respect, Miguel de Icaza and Keith Winstein, claimed that Twitter’s requirement that desktop clients identify themselves with a consumer name and secret didn’t make any sense, because a malicious app could easily bypass that defense. This is true. But what about the apps that have bugs? The ones that start double-posting when the date hits 9/02/10 because of some date-parser bug combined with a fetish for the 1990s TV show? Or the ones with a buffer overflow that a remote attacker can trigger to start partially remote-controlling the app? How does a developer, or Twitter themselves if that developer isn’t responding, shut down that faulty pipe? The consumer name suddenly looks like a decent solution to this problem. It doesn’t work against the worst-case attacker, but it works against failure modes where the desktop app developer isn’t malicious, only a little stupid.
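To make that concrete, here’s a minimal, entirely hypothetical server-side sketch in Python. Nothing in it is Twitter’s actual code or API — the function names, the revocation set, and the responses are all mine; the only borrowed detail is the standard OAuth `oauth_consumer_key` parameter name. The point is just that an app-level identifier gives the service a kill switch for buggy (not malicious) clients:

```python
# Hypothetical sketch, not Twitter's real implementation: gate each API call
# on the app's consumer key so a buggy client can be shut off server-side.

REVOKED_CONSUMER_KEYS = {"buggy-desktop-client"}  # keys the service has disabled


def handle_status_update(oauth_params: dict, status: str) -> tuple[int, str]:
    """Accept or reject a post based on which registered app sent it."""
    consumer_key = oauth_params.get("oauth_consumer_key")

    # A malicious app can lie about its key, but a merely buggy one won't:
    # revoking its key stops the double-posting without touching other clients.
    if consumer_key is None or consumer_key in REVOKED_CONSUMER_KEYS:
        return 401, "This application has been disabled."

    return 200, f"Posted: {status}"


# The faulty 90210 client gets cut off; well-behaved clients keep working.
print(handle_status_update({"oauth_consumer_key": "buggy-desktop-client"}, "hi"))
print(handle_status_update({"oauth_consumer_key": "well-behaved-client"}, "hi"))
```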
(I say this with all the love in the world for developers and their stupid bugs, since I include myself proudly in that group: we developers make mistakes, all the time, often stupid ones.)
There are a few other good reasons, along the same lines, for requiring a desktop client to use access tokens rather than passwords. For one, it’s good protection against developer stupidity to simply not store the user’s username and password whenever possible: that’s one less place where the user’s password can be compromised. And when apps identify themselves, it’s easier to shut down a single app independently, for any number of reasons: the user lost their laptop, the app is misbehaving in unintentional ways, …
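Again as a hedged illustration rather than Twitter’s actual design, here’s a small Python sketch of what per-app access tokens buy you: the client stores a revocable token instead of the password, and each app’s grant can be killed on its own. All the names and data structures here are invented for the example:

```python
# Hypothetical sketch of per-app access tokens; structure and names are mine,
# not Twitter's. The client keeps only a token, never the user's password.
import secrets

ACCESS_TOKENS: dict[str, dict] = {}  # token -> {"user": ..., "consumer_key": ...}


def issue_token(user: str, consumer_key: str) -> str:
    """Called once, after the user authorizes the app; the password never
    needs to be written to the client's disk."""
    token = secrets.token_hex(16)
    ACCESS_TOKENS[token] = {"user": user, "consumer_key": consumer_key}
    return token


def revoke_app(user: str, consumer_key: str) -> None:
    """Kill every token this app holds for this user -- lost laptop, buggy
    client, whatever -- without forcing a password reset anywhere else."""
    for token, grant in list(ACCESS_TOKENS.items()):
        if grant["user"] == user and grant["consumer_key"] == consumer_key:
            del ACCESS_TOKENS[token]


# The desktop client on the stolen laptop is cut off; the phone app,
# holding a different token, keeps working.
laptop = issue_token("alice", "desktop-client")
phone = issue_token("alice", "phone-client")
revoke_app("alice", "desktop-client")
assert laptop not in ACCESS_TOKENS and phone in ACCESS_TOKENS
```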
Does any of this provide a defense against the worst-case attacker, a rogue desktop app? Of course not. But thinking only about the worst-case attacker misses the obvious issue in distributed software design: stupid mistakes are by far the most likely problem you’ll encounter, and it’s a good idea to design a few layers of control to address those issues when they arise.
Defense in depth. Sometimes it means hedging against your own stupidity.