mark nottingham

Nine Things to Expect from HTTP/2

Thursday, 30 January 2014

HTTP

HTTP/2 is getting close to being real, with lots of discussions and more implementations popping up every week. What does a new version of the Web’s protocol mean for you? Here are some early answers:

1. Same HTTP APIs

Making HTTP/2 succeed means that it has to work with the existing Web. So, this effort is about getting the HTTP we know on the wire in a better way, not changing what the protocol means.

This means HTTP/2 isn’t introducing new methods, changing headers or switching around status codes. In fact, the library that you use for HTTP/1 can be updated to support HTTP/2 without changing any application code.

That said, there might be some new “bumps” on APIs that allow you to fine-tune some of the protocol’s new capabilities, making it even more efficient. These should be optional to use, however.

2. Cheaper Requests

The Web performance community’s mantra is “avoid HTTP requests,” because HTTP/1 makes them expensive. This has given rise to techniques like inlining, concatenation and spriting to reduce the number of requests on a page.

With HTTP/2, these techniques shouldn’t be necessary, because one of the main goals of the protocol is to reduce the marginal overhead of new requests. Things like “batch” methods for RESTful APIs using HTTP/2 shouldn’t be necessary either.

How? HTTP/2 uses multiplexing to allow many messages to be interleaved together on a connection at the same time, so that one large response (or one that takes a long time for the server to think about) doesn’t block others.
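To make the idea concrete, here’s a toy scheduler — not a real HTTP/2 implementation, and ignoring flow control and priorities — that chops each response into frame-sized chunks and interleaves them round-robin on one connection. The stream IDs, frame size and response bodies are all invented for illustration:

```python
from collections import deque

def interleave(streams, frame_size=4):
    """Round-robin scheduler: split each response into frame-sized
    chunks and interleave them on one connection (illustrative only;
    real HTTP/2 also applies flow control and priorities)."""
    queues = {sid: deque(body[i:i + frame_size]
                         for i in range(0, len(body), frame_size))
              for sid, body in streams.items()}
    wire = []
    while any(queues.values()):
        for sid, q in queues.items():
            if q:
                wire.append((sid, q.popleft()))
    return wire

# Stream 1 is a large response, stream 3 a small one; with
# interleaving, stream 3 finishes long before stream 1 does.
wire = interleave({1: "X" * 40, 3: "hi!"})
```

The point is that the small response on stream 3 completes after the first round, instead of queueing behind the large one as it would on an HTTP/1 connection.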

Furthermore, it adds header compression, so that the normal request and response headers don’t dominate your bandwidth — even if what you’re requesting is very small. That’s a huge win on mobile, where getting big request headers can easily blow out the load time of a page with a lot of resources by several round trips.
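HPACK, the actual header-compression scheme, is considerably more involved, but its core idea can be sketched in a few lines: header fields the connection has already seen cost a one-byte table index instead of their full text. The header values and byte costs below are invented for illustration:

```python
def encode_headers(headers, table):
    """Toy model of HPACK's central idea (not the real wire format):
    headers already in the shared table cost one index byte; new ones
    cost their full text and join the table for next time."""
    size = 0
    for pair in headers:
        if pair in table:
            size += 1  # one-byte table index
        else:
            size += len(pair[0]) + len(pair[1]) + 2  # literal + overhead
            table.append(pair)
    return size

table = []
headers = [(":method", "GET"), (":authority", "example.com"),
           ("user-agent", "ExampleBrowser/1.0 (toy)"),
           ("accept", "text/html")]
first = encode_headers(headers, table)   # everything sent literally
second = encode_headers(headers, table)  # everything sent by index
```

Since browsers repeat nearly identical headers on every request, the second and subsequent requests on a connection shrink dramatically — which is exactly the win on mobile described above.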

3. Network- and Server-Friendliness

HTTP/2 is designed to use fewer connections, so servers and networks will enjoy less load. This is especially important when the network is getting congested, because HTTP/1’s use of multiple connections for parallelism adds to the problem.

For example, if your phone opens up six TCP connections to each server to download a page’s resources (remembering that most pages use multiple servers these days), it can very easily overload the mobile network’s buffers, causing them to drop packets, triggering retransmits, and making the problem even worse.

HTTP/2 allows the use of a single connection per host, and encourages sites to consolidate their content on one host where possible.

4. Cache Pushing

HTTP/2’s “server push” allows a server to proactively send things to the client’s cache for future use.

This helps avoid a round trip between fetching the HTML and the stylesheets and scripts it links to, for example; the server can start sending these things right away, without waiting for the client to request them.

It’s also useful for proactively updating or invalidating the client’s cache, something that people have asked for.

Of course, in some situations, the client doesn’t want something pushed to it — usually because it already has a copy, or knows it won’t use it. In these cases, it can just say “no” with RST_STREAM (see below).
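That client-side decision can be sketched as below — a toy model, not a real HTTP/2 stack, with invented URLs and bodies. The error code 0x8 (CANCEL) is what a real client sends in RST_STREAM to refuse a stream:

```python
CANCEL = 0x8  # HTTP/2 error code a client uses to refuse a stream

def handle_push_promise(cache, url, body):
    """Toy client logic: accept a pushed resource unless we already
    have a copy, in which case refuse the stream the way a real
    client would with RST_STREAM(CANCEL)."""
    if url in cache:
        return ("RST_STREAM", CANCEL)
    cache[url] = body
    return ("ACCEPTED", None)

cache = {"/style.css": "cached copy"}
fresh = handle_push_promise(cache, "/app.js", "console.log('hi')")
dup = handle_push_promise(cache, "/style.css", "newer copy")
```

A resource the client lacks is accepted into the cache; a push of something it already holds is declined, saving the bandwidth.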

5. Being Able to Change Your Mind

If your HTTP/1 client sends a request and then finds out it doesn’t need the response, it needs to close the connection if it wants to save bandwidth; there’s no safe way to recover it.

HTTP/2 adds the RST_STREAM frame to allow a client to change its mind; if the browser navigates away from a page, or the user cancels a download, it can abandon the unwanted responses without closing the connection and wasting all of that bandwidth.

Again, this is about improving perceived performance and network friendliness; by allowing clients to keep the connection alive in this common scenario, extra roundtrips and resource consumption are avoided.
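The RST_STREAM frame itself is tiny. Assuming the framing layout from the HTTP/2 specification — a 9-byte frame header (24-bit length, type 0x3 for RST_STREAM, a flags byte, a 31-bit stream identifier) followed by a 4-byte error code, with 0x8 meaning CANCEL — it can be built in a few lines:

```python
import struct

def rst_stream_frame(stream_id, error_code=0x8):
    """Build an HTTP/2 RST_STREAM frame: a 9-byte frame header
    (24-bit length, type 0x3, no flags, 31-bit stream id) followed
    by a 4-byte error code; 0x8 is CANCEL."""
    payload = struct.pack("!I", error_code)
    header = struct.pack("!I", len(payload))[1:]         # 24-bit length
    header += bytes([0x3, 0x0])                          # type, flags
    header += struct.pack("!I", stream_id & 0x7FFFFFFF)  # stream id
    return header + payload

frame = rst_stream_frame(5)  # cancel stream 5 with CANCEL (0x8)
```

Thirteen bytes to cancel one response — versus tearing down the whole connection and paying a TCP (and possibly TLS) handshake to rebuild it.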

6. More Encryption

HTTP/2 doesn’t require you to use TLS (the successor to SSL, the Web’s encryption layer), but its higher performance makes using encryption easier, since it reduces the impact on how fast your site seems.

In fact, many people believe that the only safe way to deploy the new protocol on the “open” Internet is to use encryption; Firefox and Chrome have said that they’ll only support HTTP/2 using TLS.

They have two reasons for this. One is that deploying a new version of HTTP across the Internet is hard, because a lot of “middleboxes” like proxies and firewalls assume that HTTP/1 won’t ever change, and they can introduce interoperability and even security problems if they try to interpret an HTTP/2 connection.

The other is that the Web is an increasingly dangerous place, and using more encryption is one way to mitigate a number of threats. By using HTTP/2 as a carrot for sites to use TLS, they’re hoping that the overall security of the Web will improve.

7. No More Text

One of the nice things about HTTP/1 is the ability to open up telnet, type in a request (if the server doesn’t time out!) and then look at the response. This won’t be practical in HTTP/2, because it’s a binary protocol. Why?

While binary protocols have lower overhead to parse, as well as a slightly lighter network footprint, the real reason for this big change is that binary protocols are simpler, and therefore less error-prone.

That’s because textual protocols have to cover issues like how to delimit strings (counted? double-newline?), how to handle whitespace, extra characters, and so on. This leads to a lot of implementation complexity; in HTTP/1, there are no fewer than three ways to tell when a message ends, along with a complex set of rules to determine which method is in use.
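Those three framing methods, and the precedence among them, can be sketched as a simplified decision — real parsers face many more edge cases, which is precisely the problem:

```python
def body_framing(headers):
    """Simplified version of HTTP/1.1's message-length rules:
    Transfer-Encoding: chunked takes priority, then Content-Length,
    else read until the connection closes."""
    te = headers.get("transfer-encoding", "").lower()
    if "chunked" in te:
        return "chunked"
    if "content-length" in headers:
        return "content-length"
    return "until-close"

# A message carrying BOTH headers is ambiguous; the rules say
# chunked wins, but not every implementation agrees.
framing = body_framing({"transfer-encoding": "chunked",
                        "content-length": "42"})
```

When two implementations in the same chain answer that question differently, an attacker can smuggle a second message past one of them — the root of the splitting attacks mentioned below.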

HTTP/1’s textual nature has also been the source of a number of security issues; because different implementations make different decisions about how to parse a message, malicious parties can wiggle their way in (e.g., with the response splitting attack).

One more reason to move away from text is that anything that looks remotely like HTTP/1 will be processed as HTTP/1, and when you add fundamental features like multiplexing (where associating content with the wrong message can have disastrous results), you need to make a clean break.

Of course, all of this is small solace for the poor ops person who just wants to debug the protocol. That means that we’ll need new tools and plenty of them to address this shortcoming; to start, Wireshark already has a plug-in.

8. It’ll Take Some Time to Get it Right

HTTP/2 isn’t magic Web performance pixie dust; you can’t drop it in and expect your page load times to decrease by 50%.

It’s more accurate to view the new protocol as removing some key impediments to performance; once browsers and servers learn how and when to take advantage of that, performance should start incrementally improving.

This is borne out by a number of early studies of SPDY and HTTP/2, which show conflicting results about the benefits of the new protocol. Browser and server implementations are still evolving quickly, and sites are still configured and composed with the limitations of HTTP/1 in mind.

Furthermore, the downside of HTTP/2’s network friendliness is that it makes TCP congestion control more noticeable; now that browsers only use one connection per host, the initial window and packet losses are a lot more apparent.

Just as HTTP has undergone a period of scrutiny, experimentation and evolution, it’s becoming apparent that the community’s attention is turning to TCP and its impact upon performance; there’s already been early discussion about tweaking and even replacing TCP in the IETF.

9. HTTP/3 and Beyond

HTTP/1.x has lasted for more than fifteen years; why would we be even considering HTTP/3 before HTTP/2 is done?

One of the big reasons that HTTP/2 took so long to arrive is that upgrading the protocol in the deployed infrastructure is really hard; there are lots of boxes out there that assume HTTP/1 will never change.

So, if the transition from HTTP/1 to HTTP/2 goes well, it should be a lot easier to introduce the next version, because we can use the same mechanism that we used for the first big hop.

Why would that happen? Right now, people are really keen to get HTTP/2 “out the door,” so a few more advanced (and experimental) features have been left out, such as pushing TLS certificates and DNS entries to the client — both to improve performance. HTTP/3 might include these, if experiments go well.

Of course, HTTP/3 might also be the version where the community cleans up all of the problems we missed this time around, but so far there seems to be a growing amount of confidence — both from the experience the community has deploying SPDY as well as HTTP/2 implementations — that HTTP/2 is getting close to done.


6 Comments

Justin England said:

Great post! I am not sure how much this stuff will affect my day-to-day as a web programmer. I love the cache pushing and cheaper requests, which is currently manual and error-prone. I get a little worried when I hear about No More Text; so much of what I love about HTTP is how easy it is to debug and troubleshoot, but I am sure we can create better tooling to assist with it.

Wednesday, February 18 2015 at 4:20 AM

Chris Morris said:

Nice! I’m really looking forward to multiplexing!

Thursday, February 19 2015 at 11:05 AM

Tanveer said:

Looking forward to the cache pushing feature.

Saturday, February 21 2015 at 9:49 AM

Adoram Rogel said:

When can we expect to see this in real life? Any estimates on browsers implementation? Looking forward to this!

Sunday, March 1 2015 at 4:02 AM

Dave Gillhespy said:

I’ve read some articles that say not only will the performance hacks we have done on the front-end, like concatenation, spriting, etc, no longer be necessary, they will also be detrimental to performance. If this is true, how can we optimize for both http/1 and http/2 without providing a different frontend to browsers based on support?

Thursday, March 5 2015 at 8:34 AM