mark nottingham

Strengthening HTTP: A Personal View

Saturday, 4 January 2014

HTTP Standards

Recently, one of the hottest topics in the Internet protocol community has been whether the newest version of the Web’s protocol, HTTP/2, will require, encourage or indeed say anything about the use of encryption in response to the pervasive monitoring attacks revealed to the world by Edward Snowden.

Jari Arkko, the IETF Chair, has encouraged me to write some of my thoughts about this down, to explain to a wider audience how we got here and where it might lead. So, putting aside how I personally feel about what my governments are doing to us, and focusing just on standards rather than the wider response, here goes.

A Brief History of HTTP/2

It’s amazing that HTTP/1 has been the protocol of the Web for so long, but it has become an impediment to delivering Web sites efficiently, especially in a world where mobile browsers are so common. So, spurred by Mike Belshe and Roberto Peon’s work on SPDY, the HTTPbis Working Group (which I chair) asked for proposals for something to replace HTTP/1, and we ultimately adopted SPDY as the starting point for HTTP/2. That work is coming along nicely, and I’ll write more about what HTTP/2 means for the Web soon.

When Mike and Roberto brought SPDY to us (long before “Snowden” became a household word), its implementations already required use of TLS for encryption. This was both for pragmatic reasons (it’s really hard to introduce a new version of HTTP if something in the middle doesn’t know about it) and loftier ones.
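To make the pragmatic reason concrete: the way a new protocol slips past unaware intermediaries is TLS protocol negotiation — SPDY used NPN, and the HTTP/2 drafts point at its successor, ALPN. A minimal client-side sketch in Python (a sketch of the mechanism, not any browser's actual code):

```python
import ssl

# Client-side sketch: offer the new protocol ("h2") alongside HTTP/1.1
# via TLS ALPN. Because the negotiation happens inside the encrypted
# handshake, middleboxes that only understand HTTP/1.x can't see it —
# or break it.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After wrapping a socket with ctx and completing the handshake,
# conn.selected_alpn_protocol() reports what the server agreed to
# speak: "h2" if it supports HTTP/2, "http/1.1" otherwise, or None.
```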

Mike in particular has always been a strong advocate for improving Web security through more ubiquitous use of TLS, and for using the new, faster protocol as a “carrot” to encourage this. When we started this work, however, the idea was contentious, as some people had use cases that they felt didn’t need encryption.

As a result, neither the specification nor our charter says anything about this issue; the tacit understanding being that we’d make it possible to use HTTP/2 over encrypted or unencrypted connections, and implementations would decide what to support.
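For the unencrypted case, the current drafts sketch an HTTP/1.1 Upgrade dance to “h2c” (HTTP/2 over cleartext). Roughly — the settings value below is a placeholder for a base64url-encoded SETTINGS payload:

```
GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>
```

A server that understands the new protocol answers with 101 Switching Protocols and continues in HTTP/2; one that doesn’t simply answers the request as ordinary HTTP/1.1.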

Enter Snowden

The scope of the 2013 Snowden revelations was surprising to many, and spurred a lot of discussion in the IETF. In August — just hours before our Working Group meeting in Berlin — we learned about XKEYSCORE, which apparently allows querying the world’s Web and other Internet traffic on an unprecedented scale. As a result, we added an extra session there, where some of the infamous IETF “hums” confirmed strong consensus for attempting to improve HTTP security through more use of encryption.


Over the next few months, the HTTP community discussed a variety of proposals for doing that, and the IETF as a whole built up steam to turn our November meeting in Vancouver into a week-long kickoff for strengthening the whole Internet against pervasive monitoring.

HTTPbis’ session there laid out a few different options for our protocol. It was very clear that we had a shared goal of increasing the use of TLS with HTTP — thereby protecting against pervasive monitoring and other attacks — but the different means of doing so attracted a lot of debate and disagreement over how best to achieve that goal, and over the appropriate tradeoffs.

Throughout all of this, the consistent message from the Firefox and Chrome developers has been that they will only use HTTP/2 when it is protected by TLS. After talking to them, other browser folks and the IETF’s security experts, I sent an e-mail to the Working Group summarising what I thought (and still think) is the best way forward.

In a nutshell, it says that for the common Web browsing case, HTTP/2 servers will need to use TLS if they want to interoperate with the broadest selection of browsers — just as Mike and Roberto did for SPDY.
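The effect of that “carrot” can be captured in a toy model — this is an illustration of the incentive, not any browser’s actual negotiation logic:

```python
def negotiated_protocol(server_has_tls: bool, server_has_h2: bool,
                        browser_requires_tls_for_h2: bool = True) -> str:
    """Toy model of which protocol a browser and server end up speaking."""
    if server_has_h2 and (server_has_tls or not browser_requires_tls_for_h2):
        return "h2"        # the new, faster protocol
    return "http/1.1"      # the fallback for everyone else

# A cleartext-only server gets left on HTTP/1.1 by such a browser:
print(negotiated_protocol(server_has_tls=False, server_has_h2=True))  # http/1.1
```

The carrot is visible in the second argument pattern: supporting HTTP/2 alone isn’t enough; a server has to deploy TLS as well to get the speed benefit from these browsers.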

Importantly, though, we don’t necessarily need to require the use of TLS in the protocol specification itself. Let me explain.

The Role of Standards

It would be great if the HTTP specification could improve security universally with a simple requirement to use TLS. However, standards are not always the best tool for a job. While there was strong support for doing so from many, there was also very vocal pushback from some, who argued that it is inappropriate for the IETF to require encryption for all uses of HTTP. Given that disagreement, it’s very difficult to come to a decision. The IETF makes decisions based upon technical merit, on “rough consensus and running code.” This is a very effective way to develop and document interoperable Internet protocols, but it offers no guidance on how to make what is essentially a political decision — whether or not we use HTTP/2 as a carrot to encourage security on the Web.

It’s a political decision not because doing so casts governments as attackers, but because HTTP is a deployed protocol with lots of existing stakeholders, like proxy vendors, network operators, corporate firewalls and so on. Requiring encryption with HTTP/2 means that these stakeholders get disenfranchised.

As an individual, it’s very tempting to say “that’s OK”; personally, I don’t like many of the things that happen when third parties insert themselves into network communications, no matter how well-intentioned.

For better or worse, though, the IETF does not get to define the world that our protocols are used in. We cannot require that companies don’t monitor employee behaviour, or that countries don’t watch and curb what people do. We cannot and should not try to force people — whether they be browser users, web site operators, or the network administrators in between them — to do things against their will, because in reality we can’t; all we’d do is encourage people to stay on HTTP/1.1, or develop yet other competing protocols, or subvert the existing protocols in yet new and more damaging ways.

The Tussle

What we can do, however, is provide the flexibility and precision in the protocol to allow this discussion, this push and pull amongst the stakeholders — end users, web servers and the networks in between them — to play out.

This is called “The tussle,” an idea introduced in a seminal paper from 2005. I (and many others) believe that facilitating the tussle, rather than trying to overtly influence society, is the most appropriate role of standards (keeping in mind that as individuals, we can do things other than write standards!). On the Web, there are a number of places where we could do a better job at this.

For example, in the current design of HTTP, the decision as to whether to use encryption is completely up to the server; all the user can do is observe whether a URL is “HTTP” or “HTTPS” (or maybe watch for a lock icon) and decide whether to continue surfing.
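Put another way, the URL scheme is essentially the only machine-readable encryption signal the user gets, while offering “https” at all is entirely the server’s choice. A trivial illustration:

```python
from urllib.parse import urlparse

def user_visible_encryption_signal(url: str) -> bool:
    # The scheme is effectively the only encryption signal available to
    # the user today; whether "https" is offered at all is the server's
    # decision, not theirs.
    return urlparse(url).scheme == "https"

print(user_visible_encryption_signal("https://example.com/"))  # True
print(user_visible_encryption_signal("http://example.com/"))   # False
```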

A more balanced Web would allow clients to give input into this decision too, with some carrot to entice servers to support encryption — such as only supporting HTTP/2 when it is encrypted, just as Firefox and Chrome are doing. The important thing here is that the requirement is made by browser vendors as independent actors (or ideally as end users making choices, but that’s another blog entry), not imposed by the standard. In other words, our job, I think, is to facilitate the tussle, not predetermine its outcome.

What’s Next

In a few weeks, the HTTPbis Working Group will be meeting in Zurich to discuss all of this in more detail. I’ve asked for proposals to address the core issue, and so far the only one made is essentially the status quo — to specify the protocol and let implementations decide whether they’ll support it without TLS. Even the browser vendors who want to just ship TLS aren’t (yet) proposing that we require it in the specification.

If that stands, the HTTP/2 specification itself won’t require the use of TLS, even though many (or possibly all) browsers will do so for the new protocol.

The interesting thing here is that browser vendors that require TLS will feel other kinds of pressure — to allow corporate proxies to inspect traffic, to improve the CA system that TLS relies upon, to make HTTP/2-over-TLS scale well, and so on. When that happens, we need to make sure that the response is coordinated, to ensure that the protocol remains interoperable and doesn’t fragment.

Also, our security-related discussions don’t end there; TLS man-in-the-middle is becoming more prevalent, and worryingly, legitimate. There’s also still some interest in “opportunistic encryption” to combat pervasive monitoring more incrementally. As you can probably tell, none of this is set in stone, and I may be reading the tea leaves incorrectly; there’s still a lot of discussion that needs to take place.

As always, thoughtful contributions are welcome here or on the group’s mailing list.


One Comment

https://me.yahoo.com/acdha#7dbaf said:

HTTP is a deployed protocol with lots of existing stakeholders, like proxy vendors, network operators, corporate firewalls and so on. Requiring encryption with HTTP/2 means that these stakeholders get disenfranchised.

Is making TLS a standard feature really dictating what people do, or simply requiring that they do so in a less risky manner than relying on MITM attacks? The complaints really sound like a request that the Internet as a whole subsidize legacy design decisions rather than a change in actual technical capabilities, because, unlike during the 90s, it’s now extremely common to centrally manage things like proxy server configuration. Anyone with corporate compliance, auditing, etc. requirements already has to mandate proxy usage and block all non-proxied traffic to keep malware or rogue employees from using TLS to bypass monitoring, so this seems like a non-issue for most of that world, as long as the proxy vendors don’t take too much time to ship stable support.

The other major need doesn’t really have anything to do with HTTP/2, but solving it would remove a number of annoyances: a standardized mechanism to add local policy requirements to something like 802.1X/WISPr-style network authentication, so that e.g. a computer booted for the first time in China or the UK could display a standard dialog indicating that all traffic is required to go through a government proxy. That’s horrible from a privacy standpoint, but it’s not something which can be addressed with technical solutions, and making it both reliable and explicit would at least reduce the chances of it being abused by criminals the way WPAD can be.

Sunday, January 5 2014 at 1:35 AM