mark nottingham

HTTP Roundup: What’s Up with the Web’s Protocol

Friday, 1 October 2010


I’m going to try to start blogging more updates (kick me if I don’t!) about what’s happening in the world of HTTP.


HTTPbis

The effort to revise the core HTTP specification (RFC 2616) is going nicely, albeit slowly. Given the nature of the work, slow is better than rushed.


In late July, some of the HTTPbis WG editors met in (then) sunny Münster before IETF78 in Maastricht. The result was drafts -10 and -11, which closed a number of issues (with the notable exception of #95, which is still under discussion).

These drafts are starting to look very good, and I’m regularly pointing people to them instead of RFC 2616, as they resolve a number of questions that folks come up with.

HTTPbis was also rechartered this week, to update our milestones and explicitly add selected parts of RFC2617 and RFC2817 to our scope (specifically, the authentication framework and CONNECT/Upgrade, respectively). These things arguably didn’t require a new charter, but it was agreed that it’s best to be explicit, especially since our Area Director has decided not to go for another term.

We won’t be meeting in Beijing, but some of the editors are likely to be working on the documents in the background at the W3C TPAC in Lyon. Ping us if you’d like to chat there.

Header i18n

In August, RFC 5987 was published as a Proposed Standard to describe how to use non-ASCII (technically, non-ISO-8859-1) strings in HTTP header parameters. This was covered briefly in RFC 2616, but wasn’t specified with enough detail (because it was specified in terms of MIME, and HTTP is just MIME-like enough to be dangerous).

It’s important to realise that a given header has to nominate the use of this encoding for it to be used; i.e., a receiver shouldn’t just try to decode all header values. One such example is the Content-Disposition header, whose HTTPbis specification is just going to Working Group Last Call.

Kudos to Julian for this effort.
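To make the encoding concrete, here’s a rough sketch (in Python, not from the spec — the helper name and attr-char set are mine) of producing an RFC 5987 ext-value for a Content-Disposition filename parameter:

```python
from urllib.parse import quote

def rfc5987_encode(value: str, charset: str = "UTF-8") -> str:
    """Encode a header parameter value using RFC 5987's ext-value
    syntax: charset "'" [ language ] "'" percent-encoded-value."""
    # attr-char per RFC 5987: ALPHA / DIGIT plus this small set of
    # specials; everything else gets percent-encoded as charset octets.
    attr_chars = "!#$&+-.^_`|~"
    encoded = quote(value.encode(charset), safe=attr_chars)
    return f"{charset}''{encoded}"

# Send both a plain fallback parameter and the extended one, as the
# spec recommends, so older receivers still get something usable:
filename = "café menu.pdf"
header = ('attachment; filename="menu.pdf"; '
          f"filename*={rfc5987_encode(filename)}")
print(header)
# attachment; filename="menu.pdf"; filename*=UTF-8''caf%C3%A9%20menu.pdf
```

Note that a receiver would only apply this decoding to parameters (like `filename*`) whose definitions explicitly opt in to it, per the point above.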


Pipelining

In their never-ending quest for better performance, browser vendors have started looking at HTTP pipelining again recently. While it doesn’t solve all of the issues around head-of-line blocking, it does help the 80% case on the Web — GETting a page and then a lot of linked assets from it.

However, pipelining is difficult for clients to support safely. While SPDY is a very interesting effort, I’d like to see if we can fix this using the current infrastructure. I talk more about this in a new Internet-Draft, “Making HTTP Pipelining Usable on the Open Web”, which Patrick McManus is looking at getting into Mozilla as part of their overall approach to this issue.

In the meantime, if you’d like to see how your current browser does (or doesn’t) pipeline requests, try pointing it at an instance of this little test script.
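For the curious, the mechanics of pipelining are simple: write several requests on one connection without waiting for the responses in between. A minimal sketch (Python, hypothetical helper names) — a real client also has to parse each response’s framing and fall back gracefully when a server or intermediary mishandles the pipeline, which is exactly the hard part:

```python
import socket

def build_pipeline(host: str, paths: list[str]) -> bytes:
    """Serialize several GET requests back-to-back; the last one asks
    the server to close the connection so this toy client knows when
    the responses are done."""
    reqs = []
    for i, path in enumerate(paths):
        headers = [f"GET {path} HTTP/1.1", f"Host: {host}"]
        if i == len(paths) - 1:
            headers.append("Connection: close")
        reqs.append("\r\n".join(headers) + "\r\n\r\n")
    return "".join(reqs).encode("ascii")

def pipelined_get(host: str, paths: list[str], port: int = 80) -> bytes:
    """Fire the whole pipeline in one write, then read until close.
    Sketch only: no response framing, no error recovery, no fallback."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_pipeline(host, paths))
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)
```

The responses come back in request order on the same connection — which is also why one slow response blocks everything behind it (the head-of-line problem noted above).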


htty

htty is a very cool new tool by Nils Jonsson that turns the Web into an interactive TTY, and it makes debugging fun!

I’d like to see it take the observation that Unix pipes are RESTful to its logical conclusion. Of course, it would also be cool to see REDbot’s analysis available in there as well…

Web Linking

The Web Linking draft has gone to a second Last Call, to get consensus around some changes to the interaction between the Experts and IANA. The biggest visible change to users is that the XML format isn’t defined in the spec, but rather by IANA, to integrate better with their toolchain.

See this message for more information.
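For reference, the user-facing artifact the draft defines is the Link header. A tiny illustrative parser (mine, not from the spec — it ignores quoted commas/semicolons and extended RFC 5987 parameters, which a real one must handle):

```python
def parse_link_header(value: str) -> list[tuple[str, dict[str, str]]]:
    """Parse e.g. '</chapter2>; rel="next", </chapter1>; rel="prev"'
    into (target, params) pairs. Naive sketch for illustration only."""
    links = []
    for part in value.split(","):
        target, _, param_str = part.strip().partition(";")
        params = {}
        for param in param_str.split(";"):
            if "=" in param:
                name, _, val = param.partition("=")
                # Parameter names are case-insensitive; values may be quoted.
                params[name.strip().lower()] = val.strip().strip('"')
        links.append((target.strip().strip("<>"), params))
    return links
```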

Captive Portals

Another recent draft tries to make the situation with so-called “captive portals” — e.g., that annoying login page at the airport, in the hotel and around the conference — more workable, at least at an HTTP level.

By specifying a unique HTTP status code for “Network Authentication Required”, we can hopefully keep your software updater, feed reader and other non-browser software from getting messed up by them.

I’m talking to a few folks who help build captive portal software about this in the background, as well as the Wireless Broadband Alliance. Any introductions (as well as spec feedback) appreciated.
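To sketch why a dedicated status code helps (Python, my own toy names; 511 is the code that was eventually registered for this), here’s a stand-in portal and a non-browser client that can now tell “the network needs a login” apart from a real answer, instead of choking on an unexpected HTML login page:

```python
import http.client
import http.server
import threading

NETWORK_AUTH_REQUIRED = 511  # the status code eventually registered for this

class PortalHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for a captive portal that signals itself explicitly."""
    def do_GET(self):
        body = b"<html><body>Please log in to the network.</body></html>"
        self.send_response(NETWORK_AUTH_REQUIRED)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

def behind_captive_portal(host: str, port: int) -> bool:
    """Fetch a known URL and check for the portal's status code,
    rather than misinterpreting its login page as the real response."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("GET", "/")
        return conn.getresponse().status == NETWORK_AUTH_REQUIRED
    finally:
        conn.close()

server = http.server.HTTPServer(("127.0.0.1", 0), PortalHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(behind_captive_portal("127.0.0.1", server.server_address[1]))  # True
server.shutdown()
```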


node.js

Finally, I can’t end without mentioning I’ve been busy adding a few HTTP features (e.g., Date headers, Expect/100 Continue support, Trailers) to node.js. Add me to the list of people banging on about it; more on that soon.
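Trailers are probably the least familiar of those features, so for illustration (Python, a made-up helper — not the node.js code) here’s what a chunked body with a trailer field looks like on the wire; the message’s headers would also declare `Transfer-Encoding: chunked` and `Trailer: Content-MD5`:

```python
def chunked_with_trailer(body: bytes, trailers: dict[str, str]) -> bytes:
    """Serialize a body with HTTP/1.1 chunked transfer coding, placing
    trailer fields after the final zero-length chunk."""
    out = [f"{len(body):x}".encode("ascii"), b"\r\n", body, b"\r\n"]
    out.append(b"0\r\n")  # zero-length chunk ends the body
    for name, value in trailers.items():
        out.append(f"{name}: {value}\r\n".encode("ascii"))
    out.append(b"\r\n")  # blank line ends the trailer section
    return b"".join(out)
```

Trailers let a sender append metadata (like a checksum) that it couldn’t know before streaming the body — handy once your HTTP stack actually supports them.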

That’s all for now.

P.S. If you want to subscribe to just this sort of post, try the HTTP category of posts, linked below.


Julian Reschke said:

Maybe it’s worth mentioning that the picture in the building is not where we met, but the view from the greenbytes offices we were sitting in :-).

Friday, October 1 2010 at 7:07 AM

Nils Jonsson said:

Thanks for the shout-out about htty.

I’m intrigued by the remark, “I’d like to see it take the observation that Unix pipes are RESTful to its logical conclusion.” If you wouldn’t mind, please elaborate, so that I can take your ideas into consideration for future releases.

Saturday, October 2 2010 at 1:13 AM

tamberg said:

Hi, nice roundup. You might want to have a look at Yaler, a simple, open, and scalable relay infrastructure, and YalerTunnel, which provides generic HTTP tunneling via Yaler. Cheers, tamberg

Saturday, October 2 2010 at 12:25 PM

Roy T. Fielding said:

Dude! Seriously, Unix pipes and REST is a fully formed idea. That’s one of the primary goals behind emphasizing self-descriptive messaging for intermediaries.


Monday, October 4 2010 at 6:32 AM

Ian Bicking said:

I’d be interested in your thoughts on HTML Resources (not sure if that’s the best link) – kind of addresses the pipelining concerns at a different level of the stack.

Friday, October 8 2010 at 5:58 AM

mileslibbey4#3c9aa said:

Is it time to kick you about more frequent updates?

Thursday, February 10 2011 at 4:29 AM