mnot’s blog

“Design depends largely on constraints.” — Charles Eames

Wednesday, 16 December 2009

HTTP + Politics = ?

Australia has apparently decided, through its elected leaders, to filter its own Internet connection.

Since many, many other people are discussing whether this is advisable or indeed effective, I’ll focus here on what this will do to HTTP, and by extension the Web.

What’s on the Table

Reading the white paper, there are three different technologies for filtering the Web on the table: the “pass-by” hybrid, pass-through filters, and proxies.

Most of the ISPs that participated in the pilot chose the “pass-by” hybrid solution, for the very good reason that it doesn’t require an ISP to shove all of their traffic through a single box and hope it can keep up; this supports claims that filtering won’t hurt Web performance.

However, if a site’s IP address is on the list, it does get sent to another box. Presumably, this is a box that acts as a pass-through filter or a proxy, so it inherits their problems for those sites. Given that some of those sites are likely to be YouTube, Flickr and so on, this isn’t just a corner case.

Pass-through filters need to be able to parse the entire request stream to pull out request-URIs and make a filtering decision. When they’re not blocking a URL and not overloaded, presumably they’ll perform adequately.
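To make that concrete, here is a minimal sketch (not any vendor's implementation) of the parsing job a pass-through filter signs up for: pulling request-URIs out of a raw byte stream that may carry several pipelined requests. For simplicity it assumes body-less requests like GET; a real filter must also consume Content-Length and chunked bodies, and cope with requests split across packets, which is where the performance cost comes from.

```python
def extract_request_uris(stream: bytes) -> list[str]:
    """Return the request-URI of every complete request in `stream`.

    Assumes body-less requests (e.g. GET), so each message ends at the
    blank line that terminates its headers.
    """
    uris = []
    for message in stream.split(b"\r\n\r\n"):
        if not message:
            continue
        request_line = message.split(b"\r\n", 1)[0]
        parts = request_line.split(b" ")
        # A request line is "METHOD request-URI HTTP-version".
        if len(parts) == 3 and parts[2].startswith(b"HTTP/"):
            uris.append(parts[1].decode("ascii"))
    return uris
```

Two pipelined GETs arrive as one stream, and the filter has to recover both URIs before it can decide anything about either.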

The interesting part comes when they do decide to block a URL. A simple implementation will presumably just block the HTTP response and splice in a canned, generic “blocked” one. However, that will break — sometimes spectacularly — a client that’s doing HTTP pipelining.

For example, if Alice and Bob are behind a corporate proxy which is pipelining away through a pass-through filter, and Alice makes a request to get blocked content, it can affect Bob’s request. Worse, if Bob requests a blocked URL after Alice does, a naive implementation could block Alice’s request.

The only way to properly block requests like this is to keep state about the request and the response around, so as to assure that you’re inserting the “blocked” response in the right place. In other words, you might as well be a proxy.
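A sketch of the state that implies, with illustrative URIs and responses: on a pipelined HTTP/1.1 connection, responses must come back in request order, so the canned “blocked” page has to be spliced into exactly the slot belonging to the blocked request. This assumes a filter that forwards everything upstream and substitutes on the way back; the point is the pairing, not the forwarding policy.

```python
BLOCKED = b"HTTP/1.1 403 Forbidden\r\n\r\nThis page is blocked."

def filter_pipelined(request_uris, upstream_responses, blocklist):
    """Pair each upstream response with its request, in order, and
    replace only the blocked one.

    A naive filter that injects BLOCKED whenever it spots a bad URI,
    without this per-request state, can hand the canned page to the
    wrong slot and so break an innocent request on the same connection.
    """
    filtered = []
    upstream = iter(upstream_responses)
    for uri in request_uris:
        real = next(upstream)  # the response owed to this request
        filtered.append(BLOCKED if uri in blocklist else real)
    return filtered
```

In the Alice-and-Bob scenario, Alice's `/news` request and Bob's blocked one share a connection, and only the stateful pairing keeps Alice's response intact.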

I will grant that pipelining isn’t widely used on the open Internet (although Opera does use it, and Firefox can be convinced to), but I can’t help but see the irony, given that it is one of the primary techniques for speeding up an HTTP connection — especially over long distances, which I hear we have in abundance down here.

Proxies, for better or worse, are a much better-understood beast. Generally, you’re at the mercy of a proxy; if it decides to forbid certain HTTP methods (as is common), you can’t use them. If it doesn’t support Upgrade, Expect/Continue or chunked encoding, you won’t be able to use those HTTP features.
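A toy illustration of being “at the mercy of a proxy”: a forwarding policy that admits only a whitelist of methods and rejects expectations it doesn’t implement. The specific policy values here are hypothetical, but restrictive defaults like these are what clients actually run into.

```python
# Hypothetical policy table; real proxies vary widely.
ALLOWED_METHODS = {"GET", "HEAD", "POST"}

def proxy_admits(method: str, headers: dict[str, str]) -> tuple[int, str]:
    """Decide whether this (imaginary) proxy will forward a request."""
    if method not in ALLOWED_METHODS:
        return 405, "Method Not Allowed"
    if headers.get("Expect", "").lower() == "100-continue":
        # A proxy that doesn't implement interim 1xx responses may
        # simply refuse the expectation rather than forward it.
        return 417, "Expectation Failed"
    return 200, "Forward"
```

Nothing the origin server supports matters here: the proxy's policy is the ceiling on what the client can do.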

What this Means for the Web, and Australia

People don’t just use HTTP for browsing Web pages any more; it’s used for everything from desktop weather widgets to major system software updates to online gaming to document editing. People are also using HTTP in weird and wonderful ways to get things like Comet, BOSH and WebSockets happening.

By forcing ISPs to deploy middleboxes — without regard to protocol conformance or impact on these uses — we’re effectively profiling what people can do on the Web in Australia. This hurts the Web’s ability to grow and evolve, and it hurts Australia, by putting us at a competitive disadvantage to the rest of the world.

Furthermore, if “additional content” is filtered by ISPs, that means that — by the government’s own calculations — somewhere around 3% of HTTP requests will either get a non-standard error page, or mysteriously drop connections.

Think about that for a second; depending on how it’s calculated, you could easily be looking at several blank Web pages throughout your day, and sometimes your iPhone apps, your desktop widgets, your software updates just won’t work for some reason.
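The back-of-envelope numbers behind “several blank pages a day”: every figure below except the ~3% rate (the post's reading of the white paper's “additional content” scenario) is an illustrative assumption, not data.

```python
failure_rate = 0.03      # ~3% of HTTP requests affected (from the post)
requests_per_page = 50   # assumed subrequests in a typical modern page
pages_per_day = 100      # assumed day's browsing

failed_requests = failure_rate * requests_per_page * pages_per_day
# On these assumptions, roughly 150 requests a day come back as a
# non-standard error page or a dropped connection.
```

Even if those assumptions are off by a factor of a few, enough of those failures will land on a page's main document, or on a background API call, to be visible every day.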

Companies like Google, Yahoo!, Amazon and Akamai spend lots of time and money making the Web go faster. While the white paper claims that filtering doesn’t slow the Web down in their tests, this ignores the opportunity cost that it introduces. Optimising YouTube, Flickr, GMail or any other performance-sensitive site is going to be much more difficult through a morass of content filters.

It’s true that Web sites already have to deal with a multitude of proxies and other middleboxes on the open Internet anyway, but the difference is that if users don’t like what an ISP does to their packets, they can vote with their feet. There is no such option when the middlebox is mandated.

Making it Less Bad

If the Government persists in mandating these filters (again, I’m just looking at the technical side here!), there are a few things they can do to help.

One final thought. What will the Government’s reaction be once sites start deploying protocols like SPDY, which are going to be much less amenable to filtering, but much more powerful? Will we block them completely, thereby shutting ourselves off from the rest of the world?

Filed under: HTTP Politics Web


anthony baxter said:

As I noted elsewhere, though, the hybrid solution will break all HTTPS for a site with any address on the hotlist. It also forces everyone to use the ISP's DNS, or else they can trivially avoid the DNS poisoning that is needed (for obvious reasons, doing this by IP address alone would be completely useless).

And as Telstra's report noted, their filters are completely unable to handle anything like a high-volume site. All it takes is a single URL from Dailymotion, YouTube or the like to break the Internet for everyone, since the proxies will fall over and the traffic routed to them will be dropped on the floor.

Wednesday, December 16 2009 at 3:09 PM +10:00

Sam Johnston said:

I was referred to your post by a comment on mine on the same subject.

In particular, note that there is precedent for filters in the UK causing severe problems for large sites like Wikipedia, so this should be better addressed.

Also, civil liberties aside, the performance impact of filtering on cloud computing is worth investigating. Take for example Google, who use shared infrastructure for all of their services. Uploading content to docs, sites, etc. will certainly force other services through the filters - and that's not even taking into account services which provide multi-tenant platforms like app engine and

I wouldn't buy cloud computing solutions from Iraq or China (for technical reasons alone) and I'd think long and hard before buying them from Australia now too.


Thursday, December 17 2009 at 4:36 AM +10:00

Mordd said:

Will it be necessary for us to use software like Tor, or only to use open proxies already available, to avoid going through the government's proxies?

Thursday, December 17 2009 at 1:33 PM +10:00
