Caching When You Least Expect it
Tuesday, 24 February 2009
There’s a rule of thumb about when an HTTP response can be cached; the Caching Tutorial says:
If the response’s headers tell the cache not to keep it, it won’t.
If the request is authenticated or secure, it won’t be cached.
If no validator (an ETag or Last-Modified header) is present on a response, and it doesn’t have any explicit freshness information, it will be considered uncacheable.
And, generally, this is true; most implementations won’t bother caching something that doesn’t have either explicit freshness or a validator, because such responses can’t be reused except in very unusual circumstances; effectively, they use the lack of this information as a heuristic to avoid “polluting” their cache with responses that won’t be used.
This is so prevalent, in fact, that it’s developed into a bit of common wisdom; it’s easy to think that if something doesn’t have explicit freshness (e.g., a Cache-Control: max-age or Expires header) or a validator, it won’t be cached, ever.
Except…
This generalisation isn’t completely accurate. HTTP’s caching section is confusing, to put it kindly. However, it does clearly say that a cache can store anything that doesn’t have a no-store directive; from 2616:
Unless specifically constrained by a cache-control (section 14.9) directive, a caching system MAY always store a successful response (see section 13.8) as a cache entry, MAY return it without validation if it is fresh, and MAY return it after successful validation. If there is neither a cache validator nor an explicit expiration time associated with a response, we do not expect it to be cached, but certain caches MAY violate this expectation (for example, when little or no network connectivity is available).
The real constraints in HTTP’s caching model are when a stored response can be reused. However, there are some pretty big allowances given for calculating heuristic freshness and using stale responses when the origin server isn’t contactable. This usually hasn’t been an issue, because as it says above, most caches won’t bother storing this kind of response anyway.
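(As an aside, to illustrate heuristic freshness with my own example, not one from the spec: a response that carries only a Last-Modified validator can be given a heuristic freshness lifetime; a common approach is to use a fraction of the time since the Last-Modified date, with 2616 suggesting 10% as typical, so a cache could treat something like this as fresh for a while without contacting the origin at all.)
RES: HTTP/1.1 200 OK
RES: Content-Type: text/html
RES: Last-Modified: Sun, 01 Feb 2009 10:00:00 GMT
RES:
RES: [content here]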
Enter ISA
It turns out that one does, and that common wisdom is wrong. Microsoft’s ISA server — commonly deployed at enterprises, including Microsoft, of course — does indeed cache these kinds of responses.
Which means that it can and apparently will store a response like this:
REQ: GET /my-personalised-home-page/ HTTP/1.1
REQ: Host: www.example.com
RES: HTTP/1.1 200 OK
RES: Content-Type: text/html
RES: Connection: close
RES:
RES: — my personalised HTML content here —
Note the lack of explicit freshness information and validators, as well as the absence of anything that tells a cache that this can’t be reused. Now, a cache won’t reuse it prolifically, but HTTP does allow it to be reused in a number of situations, including when the origin server looks like it’s down (e.g., a network failure).
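To make that concrete, here’s a hypothetical exchange (based on my reading of the spec, not observed ISA behaviour; the session cookie is made up): the origin becomes unreachable, and the cache serves the stored response to a different user rather than failing the request:
REQ: GET /my-personalised-home-page/ HTTP/1.1
REQ: Host: www.example.com
REQ: Cookie: session=someone-else
RES: HTTP/1.1 200 OK
RES: Content-Type: text/html
RES: Age: 14400
RES:
RES: [the first user’s personalised HTML content]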
So, in a nutshell, if you serve personalised Web pages without any caching metadata (like above), expecting them not to be cached, you may be surprised.
What does this mean?
I’m sure some people will try to paint this as ISA server being evil or a bad citizen. In fact, it’s the opposite; they’re following the agreed-upon standard for HTTP, and exposing a feature that people have asked me for explicitly (and recently), frustrated with other cache implementations that don’t store some responses. Indeed, in my experience ISA server is one of the better (read: more HTTP-conformant) cache implementations out there.
However, if you publish personalised content on the Web, it does mean you need to think carefully about caching. The caching model in HTTP wasn’t designed with Cookie authentication in mind. If you assume that no validators and no freshness means no caching, you could be caught out, badly.
The simplest way to fix this is to set a Cache-Control: private directive on all personalised responses; that way, shared caches know not to reuse them, while browser caches still can, so that the user experience isn’t impacted. Cache-Control: no-store also works, but it will avoid the browser cache as well.
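Applied to the example above, that’s just one extra header:
RES: HTTP/1.1 200 OK
RES: Content-Type: text/html
RES: Cache-Control: private
RES: Connection: close
RES:
RES: — my personalised HTML content here —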
There are a number of other tricks that you can play, but that I wouldn’t recommend on the open Internet; e.g., using Vary: Cookie won’t do much good. Using different URIs for different users is more Web-friendly (and still the best technique for back-end caching), but probably not too useful in the common case, because you still have to address the risk of someone going through the cache looking for other people’s content.
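For what it’s worth, the different-URIs approach might look something like this (made-up URIs, purely for illustration); each user gets their own cache entry, but nothing stops someone behind the same cache from requesting another user’s URI:
REQ: GET /users/alice/home/ HTTP/1.1
REQ: Host: www.example.com
REQ: GET /users/bob/home/ HTTP/1.1
REQ: Host: www.example.com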
Moving Forward
For me, the most interesting part of all of this is what it means for the caching model in HTTPbis. I spent some time with the editors late last year in sunny Orange County, trying to untangle the caching model while they diligently edited the other parts. That work hasn’t been published yet, but the upshot was that many parts are poorly specified, and the text sometimes even conflicts with itself.
One of the assumptions that I tentatively made in cleaning things up was that only stale responses could be reused in such circumstances, but obviously I’ll need to revisit that now. The challenge moving forward is going to be making the caching model easier to comprehend without breaking existing implementations, based on their actual behaviour rather than general assumptions like the one above.
And, of course, I need to update the Caching Tutorial.