OK, let’s work to make SSL easier for everyone

So in the wake of the FireSheep situation, which I described yesterday, the tech world is filled with people talking past each other on one important topic: should we just switch everything over to SSL?

As I stated yesterday, I don’t think that’s going to happen anytime soon. I would love to be wrong, because certainly if we could switch to SSL for everything, the Web would be significantly more secure. I just don’t think it’s going to be that easy. But let’s explore this a bit, because I think most people agree that there would be tremendous benefits.

A number of folks are saying “SSL is too expensive.” Others are saying “Google did it, they say it’s 1% overhead, you’re lying.” The main reference for that latter claim is a fascinating presentation by Adam Langley of Google entitled Overclocking SSL. The gist of it is that, using only software, Google gets the overhead of SSL down to 1% of CPU and 2% of network. That sounds pretty cheap. That said, I’m skeptical. I’m far from an SSL configuration expert, of course, but I don’t think Adam Langley’s presentation paints a complete picture of the situation:

  1. Per-request overhead, or per-user-visit overhead? When Google says “1% of CPU and 2% of network,” do they take into account the significantly increased number of requests caused by reduced browser caching? Specifically, over SSL, browsers tend not to cache resources like JavaScript files and images. So when you click from page to page, your browser re-downloads a whole bunch of additional files on each click that it would not re-download if the same site were visited over HTTP. The server has no control over this. So, I suspect Google is looking at each request they receive and saying the SSL portion accounts for 1% of CPU and 2% of network, but they’re probably not telling us how many extra requests they get per user visit. I suspect it’s quite a bit higher, on the order of 300-400% of the total requests per user visit, simply because those additional files don’t get cached. And what’s worse, those un-cached requests are typically for large files, like graphics.
  2. Fancy protocol tweaks. Google is doing all sorts of clever things to reduce the cost of the SSL negotiation. That’s awesome. But it looks like I would need to upgrade to an experimental version of Apache to get all those tweaks. Also, some of the recommendations in Adam’s presentation, e.g. “don’t make SSL_write calls with small amounts of data,” are very difficult for typical web developers to follow, since they usually don’t control their web pipeline at that level. Finally, it looks like Google has patched OpenSSL to be more efficient. Awesome. Can we see that patch? I’m sure Google has done a fantastic job on all of these protocol, algorithmic, and implementation optimizations, but they are not within the reach of most developers, even good ones.
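To make the SSL_write point concrete: each write can produce a separate SSL record, with its own per-record overhead, so many tiny writes inflate both bytes on the wire and CPU. A minimal sketch of the coalescing idea (this is an illustration, not code from Adam’s presentation; the class name and the 4 KB threshold are my own assumptions):

```python
class BufferedWriter:
    """Coalesce small writes into larger chunks before handing them to a
    TLS-wrapped socket, so each SSL record carries a full chunk of data
    rather than a few bytes plus record overhead."""

    def __init__(self, sock, chunk_size=4096):
        self.sock = sock          # e.g. an ssl.SSLSocket
        self.chunk_size = chunk_size
        self.buf = bytearray()

    def write(self, data):
        # Accumulate, and only write out full chunk_size pieces.
        self.buf.extend(data)
        while len(self.buf) >= self.chunk_size:
            self.sock.sendall(bytes(self.buf[:self.chunk_size]))
            del self.buf[:self.chunk_size]

    def flush(self):
        # Send whatever remains at end of response.
        if self.buf:
            self.sock.sendall(bytes(self.buf))
            self.buf.clear()
```

The point of the sketch is that this buffering has to live between the application and the TLS layer, which is exactly the part of the pipeline most web developers don’t control.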

Now, I’m not an SSL-naysayer! I would love to see SSL deployed everywhere. I just think we need to look at the hard data regarding the overhead this will create for companies and for consumers (no caching = increased bandwidth requirements). There’s one way forward I’d love to see happen: Hey Google, how about open-sourcing all of those tweaks in one super awesome SSL proxy that we can all install blindly in front of our HTTP-only sites? This proxy should implement the latest protocol tweaks, buffer the content in appropriately sized chunks, optimize the algorithm negotiation depending on the underlying hardware, etc. Then we can all experiment with this software, see how it affects performance, and make truly informed decisions about switching to SSL everywhere.
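Until such a tuned proxy exists, a rough approximation of the idea is an off-the-shelf TLS-terminating reverse proxy sitting in front of an unmodified HTTP backend. A minimal nginx sketch (hostnames, paths, and port numbers here are placeholders, and this plain config has none of the protocol tweaks the post is asking for):

```nginx
server {
    listen              443 ssl;
    server_name         example.com;
    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;
    # Reuse sessions to avoid a full handshake on every connection.
    ssl_session_cache   shared:SSL:10m;

    location / {
        # Forward decrypted traffic to the existing HTTP-only site.
        proxy_pass       http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The backend stays HTTP-only and unaware of SSL, which is the deployment model the proposed proxy would make fast.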

As a side note, I wonder whether one reason Google switched its main search UI to AJAX is that it gets around the issue of re-downloading static files over SSL, since JavaScript and graphics stay in place while only the raw results are updated… That is certainly one useful way to keep things snappy over SSL!



5 responses to “OK, let’s work to make SSL easier for everyone”

  1. shaver

    Sites can indicate that they want SSL-served content to be cached, with appropriate headers. I’m pretty sure that’s supported by all widely-deployed browsers now (FF 3.5+, IE8+, etc.)

  2. Ben Adida

    That’s not what I’ve seen, but it would be extremely cool if I were wrong. I would love to be wrong. Can you confirm re: FF?

  3. Dan

    I find

    ExpiresActive On
    ExpiresDefault "access plus 1 year"

    Header unset Pragma
    Header unset Cache-Control
    Header unset ETag
    FileETag None

    # For Firefox 3+ (and others?) when using https
    Header append Cache-Control "public"

    works in all modern browsers (as long as Firebug doesn’t break stuff)
