Wednesday, 16 December 2009
HTTP + Politics = ?
Australia has apparently decided, through its elected leaders, to filter its own Internet connection.
Since many, many other people are discussing whether this is advisable or indeed effective, I’ll focus here on what this will do to HTTP, and by extension the Web.
What’s on the Table
Reading the white paper, there are three different technologies for filtering the Web on the table:
- “pass-by hybrid”, whereby the ISP’s router will shunt traffic to target IP addresses to another box for inspection and denial, if appropriate,
- “pass-through filters”, which are essentially packet inspection tools; they sit in the TCP/IP traffic path, but peek into application-layer semantics, and
- proxies, as we know (and usually love) in the HTTP world.
Most of the ISPs that participated in the pilot chose the “pass-by hybrid” solution, for the very good reason that it doesn’t require an ISP to shove all of their traffic through a single box and hope it can keep up, thereby supporting claims that filtering won’t hurt Web performance.
However, if a site’s IP address is on the list, it does get sent to another box. Presumably, this is a box that acts as a pass-through filter or a proxy, so it inherits their problems for those sites. Given that some of those sites are likely to be YouTube, Flickr and so on, this isn’t just a corner case.
Pass-through filters need to be able to parse the entire request stream to pull out request-URIs and make a filtering decision. When they’re not blocking a URL and not overloaded, presumably they’ll perform adequately.
The interesting part comes when they do decide to block a URL. A simple implementation will presumably just block the HTTP response and splice in a canned, generic “blocked” one. However, that will break — sometimes spectacularly — a client that’s doing HTTP pipelining.
For example, if Alice and Bob are behind a corporate proxy which is pipelining away through a pass-through filter, and Alice makes a request to get blocked content, it can affect Bob’s request. Worse, if Bob requests a blocked URL after Alice does, a naive implementation could block Alice’s request.
The only way to block requests like this properly is to keep state about both the requests and the responses, so as to ensure that you’re inserting the “blocked” response in the right place. In other words, you might as well be a proxy.
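The ordering hazard can be sketched with a toy model (the function names, paths and page strings here are invented purely for illustration):

```python
# Toy model of a pipelined HTTP connection passing through a filter.
# Pipelined responses MUST come back in request order; the client
# pairs the Nth response on the wire with the Nth request it sent.

BLOCKLIST = {"/blocked"}

def naive_filter(requests, origin_responses):
    """Splices in the canned page the moment a blocked request is
    seen -- before the origin has answered earlier pipelined
    requests -- so it lands out of order on the wire."""
    wire = ["BLOCKED" for req in requests if req in BLOCKLIST]
    wire.extend(origin_responses)      # origin's answers arrive later
    return wire

def stateful_filter(requests, origin_responses):
    """Tracks which response slot each request owns, so the canned
    page replaces the right response -- i.e., it does the bookkeeping
    a proxy would do anyway."""
    out, rest = [], iter(origin_responses)
    for req in requests:
        out.append("BLOCKED" if req in BLOCKLIST else next(rest))
    return out

# Alice pipelines /news; Bob's /blocked request follows on the same
# connection. The origin only ever answers /news.
reqs = ["/news", "/blocked"]
origin = ["page-for-news"]

print(naive_filter(reqs, origin))     # ['BLOCKED', 'page-for-news']
print(stateful_filter(reqs, origin))  # ['page-for-news', 'BLOCKED']
```

With the naive filter, the client pairs the first wire response with the first request, so Alice gets the “blocked” page and Bob gets Alice’s news page; the stateful version slots the canned response where it belongs.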
I will grant that pipelining isn’t widely used on the open Internet (although Opera does use it, and Firefox can be convinced to), but I can’t help but see the irony, given that it is one of the primary techniques for speeding up an HTTP connection — especially over long distances, which I hear we have in abundance down here.
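For the record, Firefox of this era can be convinced via about:config (from memory — defaults vary by version):

```
network.http.pipelining              true
network.http.pipelining.maxrequests  8
network.http.proxy.pipelining        true
```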
Proxies, for better or worse, are a much better-understood beast. Generally, you’re at the mercy of a proxy; if it decides to forbid certain HTTP methods (as is common), you can’t use them. If it doesn’t support Upgrade, Expect/Continue or chunked encoding, you won’t be able to use those HTTP features.
What this Means for the Web, and Australia
People don’t just use HTTP for browsing Web pages any more; it’s used for everything from desktop weather widgets to major system software updates to online gaming to document editing. People are also using HTTP in weird and wonderful ways to get things like Comet, BOSH and WebSockets happening.
By forcing ISPs to deploy middleboxes — without regard to protocol conformance or impact on these uses — we’re effectively profiling what people can do on the Web in Australia. This hurts the Web’s ability to grow and evolve, and it hurts Australia, by putting us at a competitive disadvantage to the rest of the world.
Furthermore, if “additional content” is filtered by ISPs, that means that — by the government’s own calculations — somewhere around 3% of HTTP requests will either get a non-standard error page, or mysteriously drop connections.
Think about that for a second; depending on how it’s calculated, you could easily be looking at several blank Web pages throughout your day, and sometimes your iPhone apps, your desktop widgets, your software updates just won’t work for some reason.
Companies like Google, Yahoo!, Amazon and Akamai spend lots of time and money making the Web go faster. While the white paper claims that filtering doesn’t slow the Web down in their tests, this ignores the opportunity cost that it introduces. Optimising YouTube, Flickr, GMail or any other performance-sensitive site is going to be much more difficult through a morass of content filters.
It’s true that Web sites already have to deal with a multitude of proxies and other middleboxes on the open Internet anyway, but the difference is that if users don’t like what an ISP does to their packets, they can vote with their feet. There is no such option when the middlebox is mandated.
If the Government persists in mandating these filters (again, I’m just looking at the technical side here!), there are a few things that they can do to help, including:
- Mandating pass-by hybrid solutions with pass-through filters — clearly, this is the least intrusive means of filtering. Not a great solution, but much less damaging than proxies. If cost is an issue for small ISPs, the Government should invest in developing Open Source solutions.
- Mandating protocol conformance — when filters do intervene in protocol streams, there need to be assurances that they’ll do so in a conformant fashion; ideally, a test suite.
- Standardising a “blocked” response — e.g., a 502 Bad Gateway response with a descriptive header. This way, non-browser clients can detect when they’ve been blocked.
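To illustrate, a non-browser client could test for such a standardised response in a couple of lines (the X-Filtered-By header name is purely hypothetical; the post only suggests “a descriptive header”):

```python
def is_filtered(status, headers):
    """True if a response looks like a standardised 'blocked' reply:
    a 502 status plus a descriptive header (name hypothetical)."""
    names = {name.lower() for name in headers}
    return status == 502 and "x-filtered-by" in names

# A weather widget or software updater could then report "this URL
# is filtered" instead of failing mysteriously.
print(is_filtered(502, {"X-Filtered-By": "example-filter"}))  # True
print(is_filtered(502, {}))                                   # False
print(is_filtered(200, {"X-Filtered-By": "example-filter"}))  # False
```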
One final thought. What will the Government’s reaction be once sites start deploying protocols like SPDY, which are going to be much less amenable to filtering, but much more powerful? Will we block them completely, thereby shutting ourselves off from the rest of the world?