mnot’s blog

“Design depends largely on constraints.” — Charles Eames

Friday, 13 November 2009

Will HTTP/2.0 Happen After All?

A couple of nights ago, I had a casual chat with Google’s Mike Belshe, who gave me a preview of how their “Let’s make the Web faster” effort looks at HTTP itself.

SPDY (née FLIP) is an alternate application protocol that’s in Chromium, but buried so deeply that you have to enable it with a command-line option (--use-flip). AFAICT there aren’t even any public servers that support it yet, but it’s still a very exciting development.

Why? In a nutshell, it’s a binary, frame-based protocol for multiplexing bidirectional data streams over TCP (to start with). See flip_protocol.h for an idea of what it looks like, as well as the whitepaper.
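To get a rough feel for what frame-based multiplexing buys you, here’s a generic sketch of length-prefixed frames tagged with a stream ID. This is only an illustration of the idea — the frame layout below is invented, not SPDY’s actual wire format (see flip_protocol.h for that):

```python
import struct

# Illustrative framing, NOT the real SPDY layout: each frame is a
# 4-byte stream ID, a 4-byte payload length, then the payload bytes.
def pack_frame(stream_id: int, payload: bytes) -> bytes:
    return struct.pack("!II", stream_id, len(payload)) + payload

def unpack_frames(data: bytes):
    """Yield (stream_id, payload) pairs from a run of frames."""
    offset = 0
    while offset < len(data):
        stream_id, length = struct.unpack_from("!II", data, offset)
        offset += 8
        yield stream_id, data[offset:offset + length]
        offset += length

# Two logical streams interleaved on one connection: frames from
# stream 1 and stream 2 alternate without blocking each other.
wire = (pack_frame(1, b"GET /a") + pack_frame(2, b"GET /b") +
        pack_frame(1, b"...") + pack_frame(2, b"..."))

streams: dict[int, bytes] = {}
for sid, payload in unpack_frames(wire):
    streams[sid] = streams.get(sid, b"") + payload
```

The point is that once responses are chopped into tagged frames, the receiver can reassemble each stream independently — which is exactly what a single HTTP/1.x connection can’t do.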

HTTP’s Limits

HTTP-over-TCP has some pretty basic limits; most seriously, you can practically only have one request or response in flight on a connection at the same time.

Pipelining was designed to alleviate this, but at best it’s only a partial fix (head-of-line blocking is still an issue), and implementation problems mean it’s almost unusable on the open Web (although Serf has had success in using pipelining in Subversion). It also can’t be used for methods like POST, which is important for interactive applications.
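To see why pipelining is only a partial fix, here’s a toy timing model (the RTT and per-response server times are made up purely for illustration). Serial requests pay a full round trip each; pipelining sends all the requests up front, but responses must come back in order, so one slow response delays everything queued behind it — head-of-line blocking:

```python
# Toy model of request timing on one connection; numbers are invented.
RTT = 0.1                      # seconds of network round-trip latency
SERVICE = [0.01, 0.50, 0.01]   # per-response server time; #2 is slow

def serial_total(rtt, service):
    """Serial requests: each one pays a full round trip plus server time."""
    t = 0.0
    for s in service:
        t += rtt + s
    return t

def pipelined_completion_times(rtt, service):
    """Pipelined: all requests sent at once, but responses return in
    order, so each response waits for the one before it to finish."""
    times = []
    t = rtt  # first byte can't arrive before one round trip
    for s in service:
        t += s
        times.append(t)
    return times

serial = serial_total(RTT, SERVICE)
pipelined = pipelined_completion_times(RTT, SERVICE)
# The last pipelined response finishes well before the serial total,
# but the quick third response is stuck behind the slow second one:
# it completes only after ~0.6s despite needing ~0.01s of server time.
```

With true multiplexing (as in SPDY), the cheap third response could overtake the slow second one instead of queueing behind it.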

This drives people to use multiple, parallel TCP connections — something that we’ve accommodated in HTTPbis by lifting the two-connection limit for clients. However, that’s not a great solution either; TCP doesn’t allow you to share connection state between them, which brings problems when dealing with congestion.

What about WAKA?

These problems are well-known and have been discussed for years, all the way back to HTTP-NG, WebMUX and other efforts. More recently, Roy Fielding has been working behind the scenes on WAKA, with similar goals. So similar that I had to smile when Belshe explained what they were doing; it’s very similar to how Roy explains WAKA’s use of the transport.

However, I wouldn’t say that SPDY is competing with WAKA — yet. Belshe goes out of his way to point out that SPDY is more about doing real-world experimentation rather than saying “this is the protocol we’ll use.” In his words:

We're hoping to put theories to the test; while many of the ideas are not new, we're aggregating them, making them cooperate together, implementing them, and then measuring them. We hope that others will appreciate and expand this effort so that we can all evolve toward a protocol we think is universally better in a relatively quick timeframe.

In other words, they seem to be positioning this as input to the eventual design of HTTP/2.0, WAKA or whatever, rather than a browser-specific push to define a new protocol alone.

… and the IETF

The other interesting aspect, of course, is the relationship to WebSockets, especially since there was a pretty strong sense in the IETF earlier this week in Hiroshima that a Working Group to standardise it should be started. If SPDY really does eventually follow the path of WAKA, it could be that some HTTP-like use cases that people have planned for WebSockets may have another outlet instead.

Finally, you might ask what bearing this has on our efforts in HTTPbis. Right now, the answer is “nothing”, in that we’re chartered explicitly NOT to create a new version of HTTP. However, I think that our work — especially in splitting up the spec (a decision driven by Roy a long time ago) — will help any eventual successor protocol, whether it be WAKA, SPDY, their child or something completely different.

That’s because the minimum bar to entry for replacing HTTP/1.1 is to exactly support its semantics and capabilities, while making it more efficient. The fact that all of the wire-level goop in HTTP is now moving to a single, separate document helps that.

The last thing that I’d mention is that when we started HTTPbis a couple of years ago, there was a strong sentiment against creating a new protocol, both because of the can of worms it would open, and because of deployment problems in doing so. However, I’ve recently heard many people complaining about the limitations of HTTP over TCP, and it seems that one way or another, we’re going to start tackling that problem soon.


Filed under: HTTP, Protocol Design, Web

6 Comments

Sam Johnston said:

Sounds great. Surendra Reddy et al. at Yahoo! are looking at performance issues in the context of cloud computing, so it might be worth talking to them too (and cloud folks in general).

Don't forget about the semantic stuff I discussed in my recent HTTP/2.0 post too - linking, categories, attributes and anything else we'd need to make HTTP useful as a meta-model without having to resort to envelopes like Atom or SOAP. OCCI is already pulling a lot of this together for cloud so it may be a useful input.

Oh and UTF-8 et al...

Friday, November 13 2009 at 6:37 AM +10:00

Alfred Hoenes said:

Sounds like reconnaissance.

But why not use existing protocols?

There once was TMUX for parallelism.

Today we have SCTP with independent streams, plus support for multi-homing and changing IP addresses (mobility!), plus congestion control designed in from the beginning, and a very small document set (remember the 60+ RFCs for TCP!).

Has anybody ever considered going that way?

'HTTP++' over SCTP might become the killer app to thrash lazy
middlebox vendors to catch up with modern transport protocols.

Friday, November 13 2009 at 9:11 AM +10:00

Koranteng Ofosu-Amaah said:

So Mark, as a middleman par excellence, care to comment about the implications for things like caching? Using SSL like SPDY does implies reduced visibility for proxies, caches and other intermediaries. I find it interesting that this comes at a time when interesting extensions and innovations in caching seem to be gaining wider deployment.

A big part of HTTP 1.1 was about improving latency and increasing visibility to intermediaries. The more successful parts were the efforts to enable caches; the addition of HTTP pipelining which is valuable in dealing with latency has not been a similar success because of deployment issues and older, buggy and recalcitrant middlemen - the proof being that most clients don't have pipelining enabled by default.

(Enabling pipelining is the first thing that I do in any browser, but I'm a curmudgeon that way. I don't experience any issues in my web use.)

Pipelining of course isn't a silver bullet when it comes to dealing with latency but it helps tremendously.

It seems that there's a race of sorts, and that new protocols like SPDY have a window of opportunity for adoption until pipelining is turned on by default in Firefox or some other client with large market share. (Bug 264354 in Mozilla has been open since 2004.)

More generally though, does the benefit of a more efficient wire protocol outweigh the downsides of relaxing the layering constraint and discarding a fairly robust ecosystem of caching intermediaries?

It's worth asking Roy, I'd bet that the ever elusive WAKA aims to leverage existing caches.

I'm rooting for the caches, I like middlemen.

Friday, November 13 2009 at 9:18 AM +10:00

Bill de hOra said:

Koranteng, nice to read from you again :)

"It seems that there's a race of sorts, and that new protocols like SPDY have a window of opportunity for adoption until pipelining is turned on by default in Firefox or some other client with large market share. (Bug 264354 in Mozilla has been open since 2004.)"

I'll hazard a guess and say the underlying drive for the race is technology strategy for mobile and real-time delivery. It seems those who want binary optimized protocols (a faster mobile web) don't believe in Gilder's Law, or suspect its effect is dwindling a la Moore's. Of course, this is the same argument as HTTP being not at all suitable for the web either side of the millennium, except now the domain is different and the stakes are higher - mobile is the great game.

Saturday, November 14 2009 at 6:55 AM +10:00

antony said:

A question I was wondering about: would Google's search patents be in jeopardy if a new HTTP contender was made? If so, could this be the motive behind generating future HTTP protocols, thereby controlling them and making them faster to the point that no one else concentrates on them?

Saturday, November 21 2009 at 3:59 AM +10:00

Mark Nottingham said:

Huh? I doubt that their patents are tied to the specific wire protocol used, keeping in mind that whatever eventually replaces HTTP will still be very HTTP-like, and "the Web."

Saturday, November 21 2009 at 5:40 AM +10:00
