Ryan Paul over at ArsTechnica claims a compromise of Twitter’s oAuth system, but fails to demonstrate such a compromise. It’s unfortunate, because some of his comments are indeed worthwhile, and there are a few interesting recommendations that Twitter should follow (hah, no pun intended). But what we have here is not a “compromise”, and the citation-and-reasoning-free fear-mongering about oAuth is poor reporting.
the consumer secret is not important
The article’s main argument is that the oAuth consumer secret is embedded in desktop clients and can be extracted. Yes. That sounds really bad doesn’t it? Except, as the article itself says:
It’s very important to understand that a compromised consumer secret key doesn’t jeopardize the security of the users of the application. The key can’t be used to gain access to the accounts of other users, because accessing an individual account requires an access token that individual instances of the client application obtain automatically on behalf of the user during the authorization process.
Ahah, you say, but the article also says:
the problem is how Twitter is using the key
Oh no! What are they possibly doing with this key that they shouldn’t be doing?
These keys are particularly significant because Twitter has configured them to enable access to special APIs which aren’t generally available yet that can be used to exchange login credentials for an access token
So in other words, with a consumer secret, you can exchange a username/password for an oAuth access token. That’s not ideal, because it means that we’re back to clients proxying usernames and passwords, but how is that a compromise? If an app successfully captures your username and password, and the app is evil, you’re screwed already! In fact, it’s probably a good thing that Twitter then pivots and says “ok, fine, you got the password, but can you please just exchange it for this access token now and not store the password?” Furthermore, Twitter even says that this approach, which they call xAuth, is the “least desirable way to authenticate,” and they require apps that want to use this feature to request special approval.
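To make the exchange concrete, here is a minimal sketch of the xAuth token request body. The parameter names follow Twitter’s xAuth documentation; note that the request must additionally carry a normal oAuth signature made with the app’s consumer key and secret, which is exactly why the consumer secret must ship in the client. This is an illustration, not a working client:

```python
from urllib.parse import urlencode

def xauth_request_body(username: str, password: str) -> str:
    """Build the xAuth-specific POST body for the token exchange."""
    # The three xAuth parameters; the request as a whole must also be
    # oAuth-signed with the app's consumer key/secret (omitted here).
    params = {
        "x_auth_mode": "client_auth",
        "x_auth_username": username,
        "x_auth_password": password,
    }
    return urlencode(sorted(params.items()))
```

The point is that the password appears exactly once, in this exchange; after that the client holds only the access token.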
I don’t see a security compromise here. I see a complicated set of design issues on which Twitter is trying to make practical security decisions. It’s certainly reasonable to disagree with them, but it’s unwarranted to claim that oAuth security has been “compromised” or that Twitter made this decision “against all reason.”
spamming is a darn good explanation
The article says:
The issue here is that Twitter wants to use the keys as an abuse control mechanism to centrally disconnect spammers and other unwanted users of the service, but OAuth was simply not designed to be used for that purpose. The idea is that centrally disabling a spammer’s consumer secret key will lock out all of the spammer’s user accounts, theoretically simplifying spam control for Twitter. It’s unlikely that this naive strategy will work in practice, however.
Any spammer with a hex editor can trivially compromise the keys of popular applications and use those keys to evade Twitter’s abuse controls.
Ummm, no. As the article itself states, a spammer who lifts another application’s keys still doesn’t have any access tokens, and so cannot spam.
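This falls directly out of how oAuth 1.0a signs requests: the HMAC-SHA1 signing key is built from both the consumer secret and the per-user token secret (RFC 5849, section 3.4.2). A sketch, with made-up secrets, of why a stolen consumer secret alone can’t forge a user’s requests:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def signing_key(consumer_secret: str, token_secret: str) -> str:
    # oAuth 1.0a key: both secrets, percent-encoded, joined by "&".
    return quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")

def sign(base_string: str, consumer_secret: str, token_secret: str) -> str:
    key = signing_key(consumer_secret, token_secret).encode("ascii")
    digest = hmac.new(key, base_string.encode("ascii"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")
```

Without the token secret half of the key, the signature won’t verify, so the extracted consumer secret doesn’t let the spammer act as any of the app’s users.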
In fact, spamming is a very good reason for Twitter’s use of consumer keys and secrets even in desktop clients: a spammer in this case is going to be an application that has legitimately authorized a bunch of users, and then decides to go crazy and spam their feeds. In that scenario, Twitter can very easily temporarily disable that one consumer key+secret, or throttle it, or kill it altogether, without having to track down all the access tokens that consumer generated. I like this reason a lot.
increase the risk of phishing? No
In the past, I’ve been critical of OpenID for increasing the risk of phishing. So one might think I would agree with the article when it says:
Individual implementations aside, the general concept behind OAuth’s redirection-based authorization process materially increases the risk of phishing. The people behind the standard are fully aware of that fact, but they don’t believe that the issue should necessarily be addressed by the standard itself.
I think that’s wrong. There’s a key difference here: what’s the alternative to oAuth? The alternative is the password anti-pattern, where all those third-party apps capture your username and password. So sure, it would be good if oAuth providers had more phishing-resistant login mechanisms, say like BeamAuth (shameless plug). But, on the whole, oAuth is still a heck of a lot less phishing-prone than the password anti-pattern it’s trying to replace.
open-source and DOS: fair points
override the callback URL? That’s a good point
The article claims that an attacker can change the callback URL in a Twitter oAuth session, which would indeed enable a very sneaky phishing attack. I didn’t think that was the case, but it may be. I agree with the article’s take on this:
Ideally, OAuth implementors should require application developers to supply the callback address when they configure their key and should not allow that setting to be overridden by the client application in a request parameter. Twitter has a field in the key configuration that allows the developers to specify a default, but they still allow client applications to use the dangerous callback override parameter.
I think the article is right on this point. There are many reasons to disallow callback overrides, and no good reason to allow them: oAuth web apps need cookie-based session support anyway to complete the oAuth process, so there’s no need to dynamically add state to the callback URL. It would be good to do away with this “feature.”
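The fix the article recommends amounts to a one-line policy on the provider side. A hypothetical sketch (keys and URLs are made up): look up the callback registered at key-configuration time and ignore any override the request supplies.

```python
from typing import Optional

# Callback URLs registered when each consumer key was configured
# (illustrative data, not a real registry).
registered_callbacks = {
    "consumer-key-123": "https://example-client.test/oauth/callback",
}

def effective_callback(consumer_key: str, requested_callback: Optional[str]) -> str:
    # Deliberately ignore the per-request override; only the URL
    # pre-registered for this consumer key is ever used.
    return registered_callbacks[consumer_key]
```

With this rule, an attacker who tampers with the callback parameter gains nothing: the user is always redirected to the legitimately registered URL.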
a few very good small points
The article makes some very good smaller points that Twitter (and other oAuth providers) should heed:
- debugging oAuth is hard, and having more explicit error messages is a good idea
- logging out is confusing: if you log out of the app, are you logged out of the oAuth identity provider? Flickr does this very well with Yahoo ID integration, so oAuth and Twitter should indeed provide more guidance on this front.
- giving users more cues for trusting certain apps is probably a good idea. This can evolve over time, though, and the protocol won’t need to change.
These are all useful points. But they’re small, and they do not warrant a big scary title.
all hail oAuth 2.0!
The most confusing part of the article is the constant harping on oAuth 1.0a as “immature” when it has been deployed by dozens of high-profile providers, and the constant praise of oAuth 2.0, which has only just been released and actually weakens security to make developers’ lives easier (I’ve written before about the oAuth 2.0 problems). It’s possible that oAuth 2.0 is much better in ways I haven’t considered, but where’s the evidence? The article doesn’t say.
The article concludes:
Although I think that OAuth is salvageable and may eventually live up to the hype, my opinion of Twitter is less positive. The service seriously botched its OAuth implementation and demonstrated, yet again, that it lacks the engineering competence that is needed to reliably operate its service. Twitter should review the OAuth standard and take a close look at how Google and Facebook are using OAuth for guidance about the proper approach.
There’s no evidence of this so-called “botched” job. Sure, some smaller points are absolutely worth considering, and I applaud the author for digging deep on some of these. But I find the overall tone of the article unfortunate: baseless, gross exaggeration and condescending writing that ignores the reasonable justifications for Twitter’s decisions.