Friday, 23 July 2010
Thou Shalt Use TLS?
Since SPDY surfaced, one of the oft-repeated topics has been its use of TLS; namely, that the SPDY guys have said they'll require all traffic to go over it. Mike Belshe dives into the details in a new blog entry, but his summary is simple: "users want it."
I don’t think it’s that simple.
I trust my ISP, to a point; I have a business relationship with them, so I don’t worry too much about them doing traffic analysis on what I surf and when I surf it. Likewise, they have a business relationship with their transit providers, and so on, right on to the Web sites I surf. Sure, it might go through a peering point or two, but the fact is that end to end, there is a series of trust relationships that are somewhat transitive; it’s how the Internet — a network of networks — works.
These relationships work pretty well; the Internet has been routing around technical and not-so-technical problems for a long time now. And, looking at the threat profile of the modern Web, this is borne out: the vast majority of attacks on the Web target the endpoints, whether in the browser, on the OS, on the server, or some combination of these.
Let's replay that: requiring every Web site in the world to run TLS will not improve the vast majority of those vulnerabilities and actual attacks one bit.
However, I can imagine a few situations where letting the user, rather than the server, choose whether to use SSL might be helpful.
- If I’m accessing the Web over an untrusted wireless connection, I probably don’t want even the more innocuous traffic snooped on; many sites still don’t use SSL, and their cookie-based authentication can be captured and replayed.
- Likewise, if (in the words of Bad Lieutenant’s Harvey Keitel) I Do Bad Things — for whatever that means in my current context — I probably don’t want my neighbour / family / boss / government looking over my shoulder.
In both of these cases, however, it’s less intrusive to establish a trust relationship with a third party — e.g., using a TLS-encapsulated HTTP proxy, or a full VPN — and use that service to avoid these issues. Both approaches are usable today.
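To make the cookie-replay risk mentioned above concrete, here's a minimal sketch (the hostname, path, and cookie value are all invented for illustration) of why plaintext cookie authentication is trivially replayable by anyone who can see the traffic:

```python
# Hypothetical sketch: an eavesdropper on an open wireless network sees
# every plaintext HTTP request; the session cookie is just a header they
# can copy verbatim. All names and values here are made up.

# A request as it might appear on the wire:
captured = (
    "GET /inbox HTTP/1.1\r\n"
    "Host: mail.example.com\r\n"
    "Cookie: session=8f3a2c9d41\r\n"
    "\r\n"
)

def extract_cookie(raw_request):
    """Pull the Cookie header value out of a sniffed plaintext request."""
    for line in raw_request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            return line.split(":", 1)[1].strip()
    return ""

# The attacker builds their own request with the stolen cookie attached;
# the server has no way to distinguish it from the victim's traffic.
stolen = extract_cookie(captured)
replayed = (
    "GET /inbox HTTP/1.1\r\n"
    "Host: mail.example.com\r\n"
    "Cookie: " + stolen + "\r\n"
    "\r\n"
)
```

Nothing about this requires sophistication; it's string copying, which is why unencrypted cookie authentication on shared networks is so fragile.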
The fact that these services aren’t taking off like gangbusters tells me that Mike’s “the users want it” isn’t the whole story.
The other half of the story is the lost opportunities of making TLS mandatory.
The Web is built upon intermediation — whether it’s your ISP’s proxies, your IT department’s firewalls and virus checkers, Akamai’s massive farms of content servers, or the myriad other ways people use intermediation (yes, that’s a plug for my latest talk). SPDY is not intermediary-friendly for several reasons, but wrapping it all in mandatory TLS makes it a non-starter. Mike’s assertion that use of proxies is “easing” isn’t backed by any numbers that I’ve seen.
The server-side cost of TLS is also still an issue for some. Sure, if you're Google or another large Web shop, you can afford the extra iron and the insane amount of tuning necessary to make it work. But if it's as easy as Mike paints it on the server side, and if the users want it, why is TLS still relatively rare on the Web?
Mike also scoffs at those who point out that it'll make debugging more difficult, brushing the concern aside as supporting the habits of "lazy developers." I don't think that's fair; the Web and the Internet took off at least in part because they were easy to debug, and those huge stacks of ISO specs didn't win at least in part because they weren't. Again, not everyone has the ability to hire Google rock star developers.
Obviously, the characteristics of SPDY-over-TLS work really well for Google. However, the Web is not (yet) just Google, and any change this big is going to affect a lot of people.
Is It Political?
To me, requiring TLS in an application protocol feels like a political decision, not a technical one. Good protocols are factored out so that they don’t unnecessarily tie together requirements, overheads and complexity. “Small Pieces Loosely Joined” isn’t just a saying, it’s arguably how both Unix and the Internet were successfully built.
I’m quite sympathetic to arguments that government snooping and interference are bad — whether American, Chinese or Australian — but protocols make very poor instruments of policy or revolution. Governments will work around them (either with the finesse of slipping back doors in, or the brute-force approach of blocking all encrypted traffic).
Can we improve things? Sure.
All of this is not to say that we can't make things better incrementally, without resorting to an all-or-nothing approach. Making SSL/TLS better, along the lines that Mike and others have talked about, is a great start; when we do have to use it, it needs to be as easy as possible, both for the end user and for the server side.
First, there’s a fair amount of current interest in — and at least one group actively working on — signing HTTP responses. If we can verify the integrity of the response body and headers with low overhead, a whole class of issues goes away without adversely affecting the Web. If it’s done correctly, you’ll be able to tell at a glance whether the content you’re looking at has been changed along the way, or cached outside of its stated policy.
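As a rough illustration of the idea (not any actual specification), here's a sketch of signing a response over its body and selected headers. I'm using HMAC purely as a stand-in for whatever signature scheme a real spec would choose; the key handling, header names, and covered-field rules are all invented:

```python
# Hypothetical sketch of signed-but-unencrypted HTTP responses.
# HMAC-SHA256 stands in for a real signature scheme; in practice a
# public-key signature would let any client verify without a shared key.
import hashlib
import hmac

KEY = b"placeholder-key-material"  # invented; not how a real spec would work

def sign_response(headers, body):
    """Compute a signature covering the body and the given headers."""
    covered = "\n".join(
        k.lower() + ": " + headers[k] for k in sorted(headers)
    )
    material = covered.encode() + b"\n" + body
    return hmac.new(KEY, material, hashlib.sha256).hexdigest()

def verify_response(headers, body, signature):
    """Detect tampering: a cache or proxy can forward the response
    untouched, and the client can still tell if anything was changed."""
    return hmac.compare_digest(sign_response(headers, body), signature)

headers = {"Content-Type": "text/html", "Cache-Control": "max-age=60"}
body = b"<html>hello</html>"
sig = sign_response(headers, body)

assert verify_response(headers, body, sig)                      # untouched
assert not verify_response(headers, b"<html>evil</html>", sig)  # modified
```

The point is the shape of the guarantee: intermediaries can still cache and forward, but any modification of the covered body or headers is detectable at the client.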
Second, for the cases when the user does want to opt into privacy, we need to make SSL proxies easier to use.
Finally, HTTP Authentication needs to be better. Not a big surprise, really, but Cookies are a very limited and tricky-to-get-right vessel for credentials. This isn’t an easy problem (mostly because once you start defining a new authentication scheme, you quickly find yourself boiling an ocean), but again I’d say it’s easier than requiring TLS for the entire Web.
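To show one facet of why cookies are a tricky vessel for credentials, here's a sketch (secret, names and format invented) of a server signing its cookie values. Tamper-proofing a cookie is straightforward, but notice what it doesn't buy you: without TLS, verbatim replay by an eavesdropper still works.

```python
# Hypothetical sketch: even "doing cookies right" only gets you so far.
# Signing the value stops tampering, but does nothing against replay.
import hashlib
import hmac

SECRET = b"server-side-secret"  # invented; held only by the server

def issue_cookie(user):
    """Issue a cookie whose value is bound to the user by an HMAC tag."""
    tag = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user + "|" + tag

def check_cookie(cookie):
    """Return the user name if the cookie is untampered, else None."""
    user, _, tag = cookie.partition("|")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(tag, expected) else None

c = issue_cookie("alice")
assert check_cookie(c) == "alice"                            # genuine
assert check_cookie("mallory|" + c.split("|")[1]) is None    # tampered
# ...but an eavesdropper replaying `c` verbatim is indistinguishable
# from alice, which is exactly the limitation of cookies-as-credentials.
```

A real HTTP authentication scheme could avoid sending a replayable bearer token at all, which is part of what makes the problem worth solving at the protocol level.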