defending against your own stupidity

When thinking about security, it is tempting to determine the worst-case attacker and focus defenses against it. (Of course, by worst-case, I mean within the bounds of a reasonable threat model: the NSA is not a reasonable worst-case attacker for every problem.) A corollary to this reasoning goes something like this: well, I’ve already implemented shield X, and if an attacker can defeat shield X, then they can probably also defeat shield Y, so I don’t need to implement shield Y because it’s useless.

That’s misguided. There may be some very good reasons to implement shield Y.

Consider the utility of a safety parachute. A determined attacker trying to kill you will obviously sabotage the safety parachute just as easily as he can sabotage the primary one. So, does that mean you might as well jump without a safety parachute? Of course not. You want to take into account not just the worst-case attacker; you also want to take into account your own stupidity. A safety parachute means that, if you packed your primary wrong, you can still live. Defense in depth, as it’s more commonly known in the security community, is usually not about building the 12 layers of security around the “Die Hard” vault that a skilled attacker has to vanquish, one by one. Defense in depth is the humble realization that, of all the security measures you implement, a few will fail because of your own stupidity. It’s good to have a few backups, just in case.

In my previous post regarding Twitter, two people I greatly respect, Miguel de Icaza and Keith Winstein, claimed Twitter’s requirement that desktop clients identify themselves with a consumer name and secret didn’t make any sense, because a malicious app could easily bypass that defense. This is true. But what about the apps that have bugs? The ones that start double posting when the date hits 9/02/10 because of some date parser bug combined with a fetish for the 1990s TV show? Or the ones that have a buffer overflow error that a remote attacker can trigger and start partially remote-controlling? How does a developer, or Twitter themselves if that developer isn’t responding, shut down that faulty pipe? The consumer name suddenly looks like a decent solution to this problem. It doesn’t work against the worst-case attacker, but it works against failure modes where the desktop app developer isn’t malicious, only a little stupid.
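To make the “shut down that faulty pipe” idea concrete, here is a minimal sketch of the server-side check a service could perform. All names (`REVOKED_KEYS`, `accept_post`, the example key strings) are illustrative assumptions, not Twitter’s actual API:

```python
# Hypothetical sketch: a service gates each incoming post on the app's
# consumer key, so one buggy client can be cut off without touching others.
# REVOKED_KEYS and accept_post are made-up names for illustration.

REVOKED_KEYS = {"buggy-datetime-client"}  # apps shut down for misbehaving

def accept_post(consumer_key: str, status: str) -> bool:
    """Reject posts from apps whose consumer key has been revoked."""
    if consumer_key in REVOKED_KEYS:
        return False  # the faulty pipe is shut off; other apps are unaffected
    return True

# A well-behaved app still posts; the buggy one is cut off:
assert accept_post("well-behaved-client", "hello") is True
assert accept_post("buggy-datetime-client", "hello") is False
```

Nothing here stops a malicious app from lying about its key, which is exactly the point of the post: the check is useless against the worst-case attacker, but it gives the service a switch to flip when an honest app goes wrong.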

(I say this with all the love in the world for developers and their stupid bugs, since I include myself proudly in that group: we developers make mistakes, all the time, often stupid ones.)

There are a few other good reasons, along the same lines, for requiring a desktop client to use access tokens rather than passwords. For one, simply not storing the user’s username and password whenever possible is good protection against developer stupidity: that’s one less place where the user’s password can be compromised. And when apps identify themselves, it’s easier to shut down a single app independently for any number of reasons: the user lost their laptop, or the app is misbehaving in unintentional ways, …
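A minimal sketch of that token model, with all names (`issue_token`, `revoke_app`, the user and app strings) invented for illustration: the client holds only a per-app token, and the service can kill one app’s access without touching the user’s password or any other app.

```python
import secrets

# Hypothetical sketch: per-app access tokens instead of stored passwords.
# The service tracks which token belongs to which (user, app) pair, so
# revoking one app's token leaves every other app's access intact.

tokens = {}  # token -> (user, app_name)

def issue_token(user: str, app_name: str) -> str:
    """Hand the app a random token; the app never sees or stores the password."""
    token = secrets.token_hex(16)
    tokens[token] = (user, app_name)
    return token

def revoke_app(user: str, app_name: str) -> None:
    """Cut off one app for one user (lost laptop, misbehaving client, ...)."""
    for t in [t for t, owner in tokens.items() if owner == (user, app_name)]:
        del tokens[t]

laptop = issue_token("alice", "laptop-client")
phone = issue_token("alice", "phone-client")
revoke_app("alice", "laptop-client")   # alice lost her laptop
assert laptop not in tokens            # laptop token no longer works
assert phone in tokens                 # phone keeps working; password untouched
```

The design choice worth noticing is the granularity: because revocation keys off the (user, app) pair rather than the password, the blast radius of one stupid mistake is a single app.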

Does any of this provide a defense against the worst-case attacker, a rogue desktop app? Of course not. But thinking only about the worst-case attacker misses the obvious issue in distributed software design: stupid mistakes are by far the most likely problem you’ll encounter, and it’s a good idea to design a few layers of control to address those issues when they arise.

Defense in depth. Sometimes it means hedging against your own stupidity.

7 thoughts on “defending against your own stupidity”

  1. Sorry, I don’t get it. If the purpose is to help application developers defend against their own inadvertent error (e.g., their application has a bug that causes it to start spamming everyone at some point in the future), then why does Twitter need a special process for open source software? Why is Twitter threatening to revoke any application whose application key becomes known? Given the purpose you describe, it’s really not a big deal if the application key can be extracted by someone anti-social. What, exactly, are they worried about? What is the failure mode that Twitter is trying to defend against, by revoking open-source apps that include their application key in the source code for everyone to see (even if no one misuses that application key)?

    • Twitter’s *policy* decision of revoking credentials as quickly as they do is, I believe, misguided. I mentioned this in the comments of my previous post. In this post, I was focusing on why it’s still helpful to use consumer keys and secrets, but that is separate from when you choose to invalidate the credential set. I think Twitter may be making a mistake on the “when.”

  2. Ben, I don’t disagree with you. I do think it makes sense for Web services — not just Twitter, but Flickr, Gmail, Facebook, nytimes.com — to distinguish between different clients. HTTP already has a User-Agent string that works pretty well for this purpose! And I strongly agree with your defense of defense in depth.

    To the extent I care about Twitter at all, my objection is with the strange belief that they alone can usefully enforce this separation in a cryptographic sense. A user-agent string is one thing; trying to enforce the secrecy of the “client secret” is quite another! It’s hard to respect their seeming refusal to grapple with the difficulty of the chore or the implications for free software, upon which, of course, Twitter has built its business.

    Also it seems dickish of them to force every single client in the world to switch to OAuth, except their own Android client (and any other client that still sends “source=twitterandroid”) for which they have opened up a backdoor where basic auth is still permitted. :-)

    Best,
    Keith

    • Hi Keith,

      I suspect the use of a consumer secret as part of announcing oneself is meant to have everyone using the same libraries (web or desktop), and to make it difficult for an app to announce itself as a different app by mistake. If it were just a user-agent string, then it would be easy for a developer to say “oh, sorry, I just copied this code and didn’t realize I was copying the app name over, too.”

      That said, I definitely agree with you that Twitter’s policy of revoking access immediately for “compromised” secrets is a bad idea, as it provides an incentive for bad players to just reveal apps’ secret keys. It’s one thing to nudge developers in the right direction. It’s another thing to make their lives harder by opening the door to bad actors handicapping them. So I agree with you there.

      And yes, way not cool to have their own client keep using Basic Auth. If they needed more time, they should have waited longer to do the switchover.

    • I would hope that the fact that the user-agent string (“source” in Twitter jargon) is prominently featured in the UI would be enough for honest clients to take the time to get it right, without Twitter needing to play this dumb game of forcing you to keep your keys secret.

  3. Pingback: OAuth Bearer Tokens are a Terrible Idea « hueniverse
