mark nottingham

Thou Shalt Use TLS?

Friday, 23 July 2010

HTTP

Since SPDY has surfaced, one of the oft-repeated topics has been its use of TLS; namely that the SPDY guys have said that they’ll require all traffic to go over it. Mike Belshe dives into all of the details in a new blog entry, but his summary is simple: “users want it.”

I don’t think it’s that simple.

Trust

I trust my ISP, to a point; I have a business relationship with them, so I don’t worry too much about them doing traffic analysis on what I surf and when I surf it. Likewise, they have a business relationship with their transit providers, and so on, right on to the Web sites I surf. Sure, it might go through a peering point or two, but the fact is that end to end, there is a series of trust relationships that are somewhat transitive; it’s how the Internet — a network of networks — works.

These relationships work pretty well; the Internet has been routing around technical and not-so-technical problems for a long time now. And the threat profile of the modern Web bears this out: the vast majority of attacks are on the endpoints, whether in the browser, the OS, the server, or some combination of these.

Let’s replay that; the vast majority of vulnerabilities and actual issues on the Web will not be improved one bit by requiring every Web site in the world to run TLS.

I’m not saying man-in-the-middle attacks are non-existent, but changing the entire Web to run over SSL/TLS is a drastic move, and we need solid, well-defined motivation for making such a big change. People look at me like I’m crazy when I talk about having a Web without JavaScript, but I’d wager any amount of money it’s the lynchpin in several orders of magnitude more loss (whether you’re counting in dollars or units of personally identifying information) than man-in-the-middle attacks.

However, I can imagine a few situations where allowing the user, rather than the server, to choose whether to use SSL might be helpful.

  1. If I’m accessing the Web over an untrusted wireless connection, I probably don’t want even the more innocuous traffic snooped on; many sites still don’t use SSL, and their cookie-based authentication can be replayed (a sketch of that replay follows this list).
  2. Likewise, if (in the words of Bad Lieutenant’s Harvey Keitel) I Do Bad Things — for whatever that means in my current context — I probably don’t want my neighbour / family / boss / government looking over my shoulder.
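
To illustrate the replay point in item 1: a minimal sketch, assuming an eavesdropper on the same open network has already captured a session cookie from plain-HTTP traffic. The host name, path and cookie value are hypothetical.

    import http.client

    # Hypothetical: a session cookie sniffed from unencrypted HTTP traffic on a
    # shared wireless network. Replaying it impersonates the victim's session.
    stolen_cookie = "sessionid=abc123"

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/account", headers={"Cookie": stolen_cookie})
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # served as if we were the logged-in user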

In both of these cases, however, it’s less intrusive to establish a trust relationship with a third party — e.g., using a TLS-encapsulated HTTP proxy, or a full VPN — and use that service to avoid these issues. Both approaches are usable today.

The fact that these services aren’t taking off like gangbusters tells me that Mike’s “the users want it” isn’t the whole story.

The Cost

The other half of the story is the lost opportunities of making TLS mandatory.

The Web is built upon intermediation — whether it’s your ISP’s proxies, your IT department’s firewalls and virus checkers, Akamai’s massive farms of content servers, or the myriad other ways people use intermediation (yes, that’s a plug for my latest talk). SPDY is not intermediary-friendly for several reasons, but wrapping it all in mandatory TLS makes it a non-starter. Mike’s assertion that use of proxies is “easing” isn’t backed by any numbers that I’ve seen.

Secondly, the server-side cost of TLS is still an issue for some. Sure, if you’re Google or another large Web shop, you can afford the extra iron and the insane amount of tuning that’s necessary to make it work. If it is as easy as Mike paints it on the server side, and if the users want it, why is TLS still relatively rare on the Web?

Mike also scoffs at those who point out that it’ll make debugging more difficult, brushing this concern aside as supporting the habits of “lazy developers.” I don’t think this is fair; the Web and the Internet took off at least in part because they were easy to debug. Those huge stacks of ISO specs didn’t win at least in part because they weren’t. Again, not everyone has the ability to hire Google rock star developers.

Obviously, the characteristics of SPDY-over-TLS work really well for Google. However, the Web is not (yet) just Google, and any big change like this is going to affect a lot of people.

Is It Political?

To me, requiring TLS in an application protocol feels like a political decision, not a technical one. Good protocols are factored out so that they don’t unnecessarily tie together requirements, overheads and complexity. “Small Pieces Loosely Joined” isn’t just a saying, it’s arguably how both Unix and the Internet were successfully built.

I’m quite sympathetic to arguments that government snooping and interference are bad — whether American, Chinese or Australian — but protocols make very poor instruments of policy or revolution. Governments will work around them (either with the finesse of getting back doors in, or the brute-force approach of blocking all encrypted traffic).

Can we improve things? Sure.

All of this is not to say that we can’t make things better incrementally, without resorting to the all-or-nothing approach. Making SSL/TLS better, along the lines that Mike and others have talked about, is a great start; when we do have to use it, it needs to be as easy as possible, both for the end user and for the server side.

First, there’s a fair amount of current interest in — and at least one group actively working on — signing HTTP responses. If we can verify the integrity of the response body and headers with low overhead, a whole class of issues goes away without adversely affecting the Web. If it’s done correctly, you’ll be able to tell at a glance whether the content you’re looking at has been changed along the way, or cached outside of its stated policy.
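
As a rough sketch of what verification could look like on the client side (this is not any particular proposal’s format; the covered-header set, the use of RSA, and how the signature travels are all assumptions for illustration):

    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding


    def verify_signed_response(public_key_pem: bytes, headers: dict,
                               body: bytes, signature: bytes) -> bool:
        """Return True if the signature covers the canonicalised headers and body."""
        covered = ("content-type", "cache-control")    # hypothetical covered-header set
        canonical = "\n".join(f"{h}: {headers[h]}" for h in covered if h in headers)
        digest = hashlib.sha256(body).hexdigest()      # bind the body via its digest
        message = (canonical + "\n" + digest).encode("utf-8")

        key = serialization.load_pem_public_key(public_key_pem)
        try:
            key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

The point of the sketch is that nothing here needs the transport to be encrypted: a cache or intermediary can hold and serve the signed response, and the client can still tell whether it was modified.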

Second, for the cases when the user does want to opt into privacy, we need to make SSL proxies easier to use.

Finally, HTTP Authentication needs to be better. Not a big surprise, really, but cookies are very limited and tricky-to-get-right vessels for credentials. This isn’t an easy problem (mostly because once you start defining a new authentication scheme, you quickly find yourself boiling an ocean), but again I’d say it’s easier than requiring TLS for the entire Web.
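
For comparison, HTTP already has a challenge-response scheme that keeps the password itself off the wire: Digest authentication (RFC 2617). A minimal sketch of the client-side response calculation, without the qop extension, just to show that credentials need not travel as replayable cookie values (the challenge values below are hypothetical):

    import hashlib

    def digest_response(username, realm, password, method, uri, nonce):
        """RFC 2617 Digest response, minus the qop/cnonce extensions."""
        md5 = lambda s: hashlib.md5(s.encode("utf-8")).hexdigest()
        ha1 = md5(f"{username}:{realm}:{password}")   # the secret itself is never sent
        ha2 = md5(f"{method}:{uri}")
        return md5(f"{ha1}:{nonce}:{ha2}")

    # Hypothetical values from a WWW-Authenticate: Digest challenge
    print(digest_response("alice", "example", "s3cret", "GET", "/private", "dcd98b7102dd"))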


7 Comments

Simon Farnsworth said:

There’s a better thing Google could push for if they felt encryption everywhere was an important goal - better APIs and protocols for using IPSec.

In particular, it’s hard for an application to discover if IPSec is in use. It’s hard for an application to request use of a particular set of certificates, or find out what certificates have been used. It’s difficult to get an IPSec secured connection up and running without previous contact between the admins. And, if IPSec was everywhere, things like SPDY, HTTP, e-mail etc would not need to include their own encryption - they could use the IP layer version, just like they do for packetization already.

Saturday, July 24 2010 at 2:08 AM

Aaron Swartz said:

If using TLS is a political decision, so is not using TLS. You can’t avoid the decision.

As Adam Langley has shown, TLS hardly costs anything on a modern computer. Users don’t like it when anyone nearby can see which porn sites they visit. Sure, TLS doesn’t solve everything, but why not do what you can to help?

Saturday, July 24 2010 at 3:18 AM

Jeremiah Gowdy said:

You talk about the server side “big iron” cost of implementing TLS, but to me that argument doesn’t make much sense. The trend of buying SSL accelerators was already dying down due to the exponential growth of CPU power. Nehalem era CPUs can already push a great deal of AES without the new AES-NI instructions Intel is adding. The performance of AES-NI is going to drive that throughput through the roof.

CPU power is growing at a frenzied pace, and everyone is wondering what we’re going to do with all of these cores. Maybe we can do AES in hardware with native AES instructions on just one of those cores, and probably push a few hundred Mbit/sec of TLS easily. Both on the client and the server side.

The amazing amount of tuning required is: install a Xeon X5680 and upgrade to an SSL/TLS stack that supports AES-NI.

http://software.intel.com/file/27067
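
A rough way to sanity-check the single-core throughput claim is a micro-benchmark like the sketch below, using the Python cryptography package (which calls into OpenSSL and so picks up AES-NI where the stack supports it); absolute numbers will vary by CPU and build:

    import os
    import time

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, iv = os.urandom(16), os.urandom(16)
    data = os.urandom(64 * 1024 * 1024)              # 64 MiB of plaintext

    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    start = time.perf_counter()
    enc.update(data)
    enc.finalize()
    elapsed = time.perf_counter() - start
    print(f"AES-128-CBC: {len(data) * 8 / elapsed / 1e6:.0f} Mbit/s on one core")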

Saturday, July 24 2010 at 4:21 AM

kl said:

I’d rather see Van Jacobson’s replacement of the connection-oriented network with a resource-oriented one:

http://video.google.com/videoplay?docid=-6972678839686672840#

Securing point-to-point communication doesn’t make much sense when what you’re really interested in is getting a particular resource - which could come from any source, as long as you know you’re getting the right data.

A resource-oriented network is a cache heaven. TLS goes in the opposite direction.

Saturday, July 24 2010 at 6:57 AM

Devdas Bhagat said:

If one of my biggest threats is proxy logs, then HTTPS makes absolute sense.

A subsidiary benefit of HTTPS would be the ability to do certificate-based mutual auth (signing up involves me providing my public key to the website, they provide me theirs, and then the whole HTTP login mess goes away). Mutual auth will bring up the pain of certificate management, but that will hopefully be simplified by applications and operating systems.
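
A sketch of what that mutual auth looks like at the TLS layer today, using Python’s ssl module (the file names are placeholders); the part that remains painful is everything around it: issuing, distributing and rotating the client certificates.

    import ssl

    # Server side: only accept clients presenting a certificate signed by a CA we trust.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("server.crt", "server.key")     # placeholder paths
    server_ctx.load_verify_locations("trusted_client_ca.pem")
    server_ctx.verify_mode = ssl.CERT_REQUIRED                 # demand mutual auth

    # Client side: verify the server as usual, and present our own certificate.
    client_ctx = ssl.create_default_context(cafile="server_ca.pem")
    client_ctx.load_cert_chain("client.crt", "client.key")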

Roughly, I would like to be able to use the web in the same fashion as I do with ssh. It would also be nice to be able to use the web over ssh from a remote server in a text-only browser, without it breaking due to Javascript requirements.

Saturday, July 24 2010 at 10:26 AM

Benjamin Carlyle said:

As HTTP becomes the protocol that essentially everything uses to get just about anything done, there are use cases for:

  1. Accessing resources in a way that we can verify that we have the representation we requested,
  2. Verifying that requests input to a server are the true requests sent by a verifiable identity, and
  3. At least optionally, privacy for critical details of requests and responses.

End-to-end TLS tunnels provide a simple way to argue that we meet these use cases, and for this reason I think that things will continue to slide that way if no clear protocol alternative arises. If we were to go down a message-based encryption route we could address:

  (1) by canonicalising and including the URI that was requested and copies of each of the “Vary” request headers in the signed (and cacheable) representation;
  (2) by canonicalising and signing parts of the request that the client considers important not to be modified by proxies (at least URI and method, but other headers would have to be considered on a case-by-case basis);
  (3) by adding encryption on top of requests and responses, which could make use of TLS or could use a public key known to all intended recipients.

Digest authentication or a variant thereof might deal with (2) successfully, but requires every verifiable request to pretty much be a two-pass affair with the server. This might be unavoidable to prevent replay attacks. Probably (1) could be dealt with by adding new headers in responses, but I find it difficult to foresee this happening and being deployed in a timescale and in a manner that really stops the slide towards TLS for everything.
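
As a sketch of the canonicalisation step described for (2) above: the covered-header list and the use of an HMAC here are illustrative assumptions, standing in for whatever signature mechanism a real scheme would define.

    import hashlib
    import hmac

    def canonical_request(method, uri, headers, covered=("host", "date")):
        """Build the string a client would sign: method, URI, and selected headers."""
        lines = [f"{method.upper()} {uri}"]
        lines += [f"{h}: {headers[h]}" for h in covered if h in headers]
        return "\n".join(lines)

    def sign_request(shared_key: bytes, method, uri, headers):
        msg = canonical_request(method, uri, headers).encode("utf-8")
        return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()

    # Hypothetical request being signed before it is sent
    print(sign_request(b"k3y", "GET", "/reports/1",
                       {"host": "example.org", "date": "Sun, 15 Aug 2010 06:47:00 GMT"}))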

Sunday, August 15 2010 at 6:47 AM