mark nottingham

The State of Proxy Caching

Wednesday, 20 June 2007

HTTP Caching

A while back I wrote up the state of browser caching, after writing a quick-and-dirty XHR-based test page, with the idea that if people knew how their content is handled by common implementations, they’d be able to trust caches a bit more.

The other half that they need to know about, of course, is proxy caching; depending on who you listen to, somewhere between 20% and 50% of clients on the Internet are behind some kind of proxy. And since proxies are operated on behalf of a network provider, rather than the publisher or end user, publishers have much less visibility into how they handle content.

So, last northern summer (yes, it’s taken that long to write this up; sorry) we tested a selection of Open Source and commercial proxy caches with the ever-so-useful Co-Advisor test suite, which is pretty much the industry standard. Each was tested in its “out-of-the-box” state; any device can be configured to do bad things, but generally administrators just switch them on.

Altogether, eight different implementations were tested. I can’t report exactly which device did what, because of the EULA restrictions on many commercial implementations. That’s OK; my goal isn’t to point fingers at any particular vendor, but rather to give people an idea of how their content will be treated once it gets onto the open Internet.

The Results

The good news is that the basics of URIs, HTTP connection handling and caching were not a problem; every implementation passed them with pretty much flying colours. When you send Cache-Control: no-cache or max-age, they’ll do the right thing, and generally they’ll parse the headers, forward them on, and return the response correctly.
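To make that “right thing” concrete, here is a minimal sketch of how a cache decides whether a stored response is still fresh from Cache-Control. The `is_fresh` helper is hypothetical, not taken from any tested implementation, and it ignores Expires and other directives for brevity:

```python
# Sketch: freshness decision driven by Cache-Control (hypothetical helper).
def is_fresh(response_headers, age_seconds):
    """Return True if a cached response may be reused without revalidation."""
    cc = response_headers.get("Cache-Control", "")
    for directive in (d.strip() for d in cc.split(",")):
        if directive == "no-cache":
            return False  # must revalidate with the origin before reuse
        if directive.startswith("max-age="):
            try:
                # Fresh while the time spent cached is under max-age.
                return age_seconds < int(directive.split("=", 1)[1])
            except ValueError:
                return False  # malformed value: play it safe
    return False  # no explicit freshness info in this simplified sketch
```

All of the tested implementations got this basic case right.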

The bad news is that more complex functionality is spottily supported, at best. I suspect this is because everyday browsing doesn’t exercise HTTP the way more advanced uses such as WebDAV and service APIs do.

See below for the details of the ups and downs. These are just the highlights; if you have more specific questions, raise them in comments and I’ll do my best to answer.


Request URI Length

Every implementation was able to handle 1024-byte request URIs, but only a few were configured to allow 8192 bytes. It’d be interesting to see what support was like around 4096 bytes, but for the time being it’s probably safe to limit your URLs to 2k or less.

HTTP Methods

GET, HEAD, POST, PUT, DELETE, OPTIONS, and TRACE all seemed to work OK, but quite a few caches had problems with extension HTTP methods. If you’re using non-standard HTTP methods (or even some of the more esoteric WebDAV methods; there are a lot of them), beware.


The Expires Header

As discussed previously, many implementations had problems with incorrectly-formatted Expires headers; the correct thing to do was to consider them to be stale (i.e., expired in the past), but most implementations tried to guess what the ill-formatted date meant, sometimes incorrectly. If you write your own Expires headers, be very careful.
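If you want to check your own headers, you can parse Expires strictly and treat anything unparseable as already expired, which is what the spec calls for. A sketch using Python’s standard-library date parser (the `expires_or_stale` name is mine):

```python
from email.utils import parsedate_to_datetime

def expires_or_stale(value):
    """Parse an Expires header value strictly.

    Per RFC 2616, an invalid Expires date means 'already expired',
    so return None to signal that the response should be treated as stale.
    """
    try:
        return parsedate_to_datetime(value)
    except (TypeError, ValueError):
        # Older Pythons raise TypeError on unparseable dates, newer ValueError.
        return None
```

Caches that guess at malformed dates instead of doing this are the ones that surprised us in testing.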


Cache-Control

Most of the Cache-Control directives were honoured with no problem; e.g., max-age, no-store, private, must-revalidate and proxy-revalidate were all treated correctly, even when there was a conflicting Expires header present. The only standout was s-maxage; for some reason, many implementations had a problem correctly revalidating responses with this directive.
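The intended precedence is simple to state: in a shared (proxy) cache, s-maxage overrides max-age. A small sketch of that rule (the `shared_cache_lifetime` helper is hypothetical):

```python
def shared_cache_lifetime(cc_value):
    """Freshness lifetime in seconds for a shared cache.

    Per RFC 2616 14.9.3, s-maxage takes precedence over max-age in
    shared caches. Returns None if neither directive is present.
    """
    max_age = s_maxage = None
    for directive in (d.strip() for d in cc_value.split(",")):
        name, _, arg = directive.partition("=")
        if name == "max-age" and arg.isdigit():
            max_age = int(arg)
        elif name == "s-maxage" and arg.isdigit():
            s_maxage = int(arg)
    return s_maxage if s_maxage is not None else max_age
```

It was revalidation of responses carrying s-maxage, not this arithmetic, that tripped up the tested implementations.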

Extension Cache-Control directives (e.g., Cache-Control: max-age=60, foo=bar) seem to be handled correctly by all implementations; that is, they’re ignored. Older versions of a venerable Open Source cache do sometimes incorrectly parse such headers (e.g., Cache-Control: foo="max-age=8" is interpreted as Cache-Control: max-age=8), but this is (hopefully) a pathological case.

One other thing to note; both the private and no-cache Cache-Control directives give you the option of listing some headers after them; e.g.,

Cache-Control: private=Set-Cookie

with the intent being that this refines the semantics of that directive to apply just to those headers. In practice, implementations don’t implement the refinement; they simply refuse to store responses carrying it, making them effectively uncacheable.
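For what it’s worth, here is a sketch of what the RFC intends for the quoted form (e.g., Cache-Control: private="Set-Cookie"): strip only the named headers and cache the remainder. The `cacheable_after_private` helper is hypothetical, and, as noted above, the tested proxies don’t actually behave this way:

```python
def cacheable_after_private(headers):
    """Return the headers a shared cache could store, honouring
    Cache-Control: private="header-name" by stripping only the
    named headers (RFC 2616 14.9.1). Naive parsing, for illustration only."""
    cc = headers.get("Cache-Control", "")
    if 'private="' not in cc:
        return headers
    # Extract the quoted list of header names after private=.
    quoted = cc.split('private="', 1)[1].split('"', 1)[0]
    excluded = {h.strip().lower() for h in quoted.split(",")}
    return {k: v for k, v in headers.items() if k.lower() not in excluded}
```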

Conditional Requests

Validation was good in the simple cases, but tended to fall down in more complex circumstances, especially in situations with weak ETags, If-Range headers and other not-so-common things.
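The weak/strong distinction that seems to trip implementations up fits in a few lines: a weak ETag (prefixed with W/) can only ever match under the weak comparison, which rules it out for byte-range operations like If-Range. A sketch with hypothetical helper names:

```python
def strong_compare(etag_a, etag_b):
    """RFC 2616 strong validator comparison: both ETags must be
    non-weak and byte-identical. Required for If-Range and range merging."""
    if etag_a.startswith("W/") or etag_b.startswith("W/"):
        return False
    return etag_a == etag_b

def weak_compare(etag_a, etag_b):
    """Weak comparison: ignore any W/ prefix. Acceptable for
    If-None-Match on an ordinary GET."""
    strip = lambda t: t[2:] if t.startswith("W/") else t
    return strip(etag_a) == strip(etag_b)
```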

Cache Updates

Caches are required to be updated by the headers in 304 responses, as well as responses to HEAD. For example, if you send a new Cache-Control header back with a new max-age value on one of these responses, the cache should replace the old value with the new one.

In practice, updates were spotty; a lot of the time, the test suite couldn’t get the cache into a state where it could tell, but when it could, there were failures. As a result, it’s probably not a good idea to rely on 304 responses or HEAD requests to update headers; better to just send a 200 back with a whole new representation.
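The required behaviour itself is simple; it’s a merge like this sketch (hypothetical `update_from_304` helper) that many caches failed to perform:

```python
def update_from_304(stored_headers, new_headers):
    """Sketch of RFC 2616 13.5.3: headers carried on a 304 (or a HEAD
    response) replace the stored ones; headers not mentioned are kept."""
    merged = dict(stored_headers)
    merged.update(new_headers)
    return merged
```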

Cache Invalidation

Sadly, one of the most useful parts of the caching model, invalidation by side effect, is barely supported at all. A few implementations would invalidate the Request-URI upon a DELETE, and even fewer upon a PUT or POST, but that’s it. As a result, it’s harder to take full advantage of the cache, because you’ll have to mark things as uncacheable if you care about changes being available immediately.
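The model itself is easy to state: when an unsafe method passes through the cache, evict the stored response for that Request-URI. A sketch against a plain dict cache (the `maybe_invalidate` name is mine):

```python
def maybe_invalidate(cache, method, uri):
    """Sketch of invalidation by side effect (RFC 2616 13.10): an unsafe
    method travelling through the cache evicts the entry for its
    Request-URI. Few of the tested proxies actually did this."""
    if method in ("PUT", "POST", "DELETE"):
        cache.pop(uri, None)  # evict if present; safe methods leave it alone
```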


Header Parsing

Generally, header parsing was quite good; every implementation was able to parse simple headers, forward them as appropriate, and preserve order when there was more than one instance of a given header. The only thing that really tripped them up was HTTP’s support for spreading a single header across multiple lines, like this:

Cache-Control: max-age=60,
  public, must-revalidate

From what I saw, the most common mistake was when an implementation would try to support multi-line headers, but mess it up by removing the whitespace between lines (it should be preserved). In the real world, this shouldn’t be an issue, because no-one I’ve seen generates multi-line headers. Still, if you’re tempted to, don’t.
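For reference, correct unfolding keeps the whitespace at the fold (or replaces it with a single space) rather than deleting it outright. A sketch (hypothetical `unfold` helper):

```python
def unfold(header_lines):
    """Join folded (multi-line) headers per RFC 2616 2.2: a line starting
    with SP or HTAB continues the previous header, and the fold is
    replaced by a single space -- the whitespace is not simply removed."""
    out = []
    for line in header_lines:
        if line[:1] in (" ", "\t") and out:
            out[-1] = out[-1] + " " + line.strip()
        else:
            out.append(line)
    return out
```

The common failure mode was gluing the lines together with no space at all, corrupting the header value.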

Another issue was a propensity for a few implementations to forward hop-by-hop headers (e.g., those listed in the Connection header, plus a few pre-defined ones like Trailer, TE, Upgrade). That’s a no-no, but it shouldn’t affect most publishers.
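The rule is that the fixed hop-by-hop set from RFC 2616, plus any header named in Connection, must be dropped before forwarding. A sketch:

```python
# The hop-by-hop headers RFC 2616 13.5.1 says must not be forwarded.
HOP_BY_HOP = {"connection", "keep-alive", "proxy-authenticate",
              "proxy-authorization", "te", "trailer",
              "transfer-encoding", "upgrade"}

def strip_hop_by_hop(headers):
    """Remove hop-by-hop headers before forwarding: the fixed list above,
    plus anything the Connection header names (hypothetical helper)."""
    named = {t.strip().lower()
             for t in headers.get("Connection", "").split(",") if t.strip()}
    drop = HOP_BY_HOP | named
    return {k: v for k, v in headers.items() if k.lower() not in drop}
```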

Chunked Encoding

Of the implementations that supported chunked encoding (i.e., called themselves HTTP/1.1), most did a pretty good job. The only noticeable exception is when there’s both a Content-Length header and chunked encoding present; although HTTP forbids this situation, some servers may do it anyway, and it caused a few problems. Likewise, chunked requests had more than their share of hiccups, probably because they’re not very widely seen.
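When both are present, the RFC’s tie-break is that Transfer-Encoding wins and the conflicting Content-Length should be dropped before forwarding. A sketch of that decision (hypothetical `resolve_framing` helper):

```python
def resolve_framing(headers):
    """Decide message framing per RFC 2616 4.4: a Transfer-Encoding other
    than 'identity' determines the length, and a conflicting
    Content-Length must be ignored (here, removed before forwarding)."""
    te = headers.get("Transfer-Encoding", "").lower()
    if te and te != "identity":
        headers.pop("Content-Length", None)  # resolve the conflict
        return "chunked"
    if "Content-Length" in headers:
        return "content-length"
    return "close-delimited"  # body ends when the connection closes
```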


Trailers

I was genuinely surprised to see that trailers don’t completely mess up most of the implementations; while there were some bugs, most of the tests passed. Go figure. Again, this shouldn’t affect real people.


Pipelining

There wasn’t any evidence of pipelining support, at all. A shame, but it’s not well-supported in browsers either.


Expect/Continue

Expect/Continue is a very useful facility that allows a client to find out, based upon the headers alone, whether the server will accept a request before sending the whole body. So, it’s a shame that the implementations tested supported it so spottily. The very simplest tests were passed by all comers, but I wouldn’t be comfortable recommending use of Expect/Continue on the open Internet today.
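For reference, the mechanism is just an extra request header: the client sends its headers with Expect: 100-continue and withholds the body until the server answers 100 Continue. A sketch that builds such a request head (hypothetical helper; a real client also needs a timeout for servers that never send the interim response):

```python
def expect_continue_head(method, path, host, body_len):
    """Build the head of a request that asks permission before sending
    its body: headers go first with Expect: 100-continue, and body_len
    bytes follow only after the server replies '100 Continue'."""
    return (f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {body_len}\r\n"
            f"Expect: 100-continue\r\n"
            f"\r\n")
```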


The Warning Header

The Warning header is almost never generated by implementations, as far as I saw; disappointing. Don’t rely on getting warnings from caches about stale responses; it’s better to figure it out yourself (e.g., by examining the Age header). Also, don’t rely on intermediaries deleting Warning headers as directed by the RFC; only one implementation that I saw attempted this at all.
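Doing it yourself is cheap: compare the Age header (time the response has spent in caches) against the response’s freshness lifetime, rather than waiting for a Warning that will probably never come. A self-contained sketch (hypothetical `is_stale` helper, ignoring Expires and clock skew for brevity):

```python
def is_stale(headers):
    """Estimate staleness from Age vs. Cache-Control: max-age,
    instead of trusting caches to emit Warning headers."""
    age = int(headers.get("Age", "0"))
    cc = headers.get("Cache-Control", "")
    for directive in (d.strip() for d in cc.split(",")):
        if directive.startswith("max-age=") and directive[8:].isdigit():
            return age >= int(directive[8:])
    return True  # no usable freshness info: treat as stale in this sketch
```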


Stefan said:

Thanks for the summary here. Did not know about the args for private= controls. Hmm, I can always find something new in rfc2616.

No surprise about the unreliability of Expect/Continue. It has bitten me twice already. First, when implementing a servlet, there is no way to support this feature - the API does not allow it. This is especially sad when the servlet tries to do authentication (which gladly is not often the case). The second time it bit me was with the .Net HTTP client, which expects it to work with POST and authenticating servers. No way to switch it off, it seems.

One question: do you have a recommendation for HTTP clients making “cold” POSTs against a server with possible authentication?

Wednesday, June 20 2007 at 9:14 AM

Mark Baker said:

Awesome, Mark. This will be incredibly useful to me.

P.S. I assume that the problems with pipelining you mentioned are with forwarding pipelined requests, right? That is, a client that does support it can still blast away, it’s just that the proxy will remove the benefit.

Wednesday, June 20 2007 at 11:09 AM

Mark Baker said:

P.P.S. I just noticed that I had to reload this page in order to see my last comment (in FF), suggesting that my cached representation wasn’t invalidated properly. Wonder whose fault that was? 8-)

Wednesday, June 20 2007 at 11:12 AM

Alan Dean said:

Were there any results for use of the Vary header?

I have referenced this post on the [rest-discuss] list:

Wednesday, June 20 2007 at 11:58 AM

said:

What does “No pipelining support” mean - that proxies don’t use pipelining when talking to origin servers? Does Co-Advisor test that the proxies correctly support pipelined requests from the client?

Thursday, June 21 2007 at 3:54 AM

Jim said:

Many thanks for this informative article. This is something that’s difficult for many developers to test, which makes this article all the more valuable.

I’d really like to see vendors put their products online for testing purposes, some kind of equivalent to browsercam etc. It can’t be difficult for them to do and would help interoperability quite a bit.

What were the problems with unusual HTTP methods? Were the caches returning 405 Method Not Allowed, substituting different methods, or something?

Friday, June 22 2007 at 5:05 AM

Tim Kientzle said:

I would really like to see someone publish an “Annotated RFC2616” along the lines of Herb Schildt’s “Annotated ANSI C”, which alternates pages from the standard with pages of commentary explaining the rationale, use, and issues with each feature.

Saturday, June 23 2007 at 4:19 AM

Robert Olofsson said:

Regarding multi-line headers and whitespace handling: there are still many old servers that output HTTP headers separated only by \n. So in my proxy I have a property setting to either handle more old/bad servers or to be strict HTTP. Sadly enough, that setting has to be set to allow old and bad servers. Hopefully this has changed; it has been some time since I actually checked for old servers, but my guess is that they are still running.

Regarding pipelining: there are many different ways to implement pipelining, and Co-Advisor only tests one way. In my proxy I did an experimental pipelining implementation that pipelined requests from different incoming connections onto the same outgoing connection. Doing it that way worked, but the Co-Advisor tests did not detect it. Since modern web browsers usually know better than my proxy when to pipeline, it is probably better to do pipelining the way the client/browser does it.

Saturday, June 23 2007 at 9:19 AM

Graeme Mathieson said:

Maybe it’s just my bad luck, but I’ve been on several client sites where the cache between me and the outside world had problems with HTTP methods other than GET and POST. This would exhibit itself while trying to use HTTP subversion repositories on site, in particular, I recall, when it tried to use the OPTIONS verb?

One specific cache that exhibited this behaviour was Novell’s Border Manager. I wonder if it was poor configuration, or an inherent problem?

Tuesday, June 26 2007 at 6:38 AM

Aristotle Pagaltzis said:

The bad news is that more complex functionality is spottily supported, at best. I suspect this is because everyday browsing doesn’t exercise HTTP like more advanced uses like WebDAV, service APIs, etc.

Well that, and there is no test suite covering any noteworthy fraction of RFC 2616.

Tuesday, June 26 2007 at 10:13 AM

K said:

Stefan: If you are talking about HTTPWebRequest, please read

Monday, July 30 2007 at 10:53 AM

Henrik Nordstrom said:

There are a couple of reasons why proxy servers generally do not do pipelining.

  • Proxy servers maintain client and server connections independently of each other, and it’s not obvious how to preserve the pipeline sent by the requesting client.

  • Proxy servers (and most browsers) are quite reluctant to build their own pipelines of requests, as you never know how much latency this adds. An early request in the pipeline that takes a long time blocks all the responses behind it. Because of this, pipelining is most useful in batch applications where total throughput matters more than response time.

  • Security. The response splitting problem is much easier to exploit on pipelined connections, possibly resulting in cache pollution.

Tuesday, October 16 2007 at 9:10 AM