mark nottingham

Cache Channels

Friday, 4 January 2008

HTTP Caching

The stale-while-revalidate and stale-if-error extensions aren’t the only fiddling we’ve been doing with the HTTP caching model. Now that Squid 2.7 is starting to see daylight, I can talk about a much more ambitious project — Cache Channels.

In HTTP, there are generally two ways to keep something in a cache fresh: a freshness lifetime (e.g., Cache-Control: max-age) and validation (e.g., If-Modified-Since). Together, they do a really good job of making the Web seem faster and reducing load on the Internet. However, in some situations, they’re not enough.
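To make the two mechanisms concrete, here’s a minimal sketch (in Python, with illustrative field names) of how a cache might apply a freshness lifetime and, once it expires, build a conditional request for revalidation:

```python
import time

def is_fresh(stored_at, max_age, now=None):
    # A response with Cache-Control: max-age=N may be served from
    # cache, without contacting the origin, for N seconds after it
    # was stored.
    now = time.time() if now is None else now
    return (now - stored_at) < max_age

def conditional_headers(cached_headers):
    # Once stale, the cache revalidates with a conditional request;
    # a 304 Not Modified response lets it reuse the stored body.
    headers = {}
    if "Last-Modified" in cached_headers:
        headers["If-Modified-Since"] = cached_headers["Last-Modified"]
    if "ETag" in cached_headers:
        headers["If-None-Match"] = cached_headers["ETag"]
    return headers
```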

This is because setting a TTL (the best way to get value out of a cache) requires a server to make an unsavoury bargain: it has to trade away control to get efficiency. Imagine a page that’s cached for one minute; if anything on it changes, the server can change it and know that clients behind caches will see the change in a minute or less. However, those caches aren’t very efficient, especially if they see less than one request for that page in a minute! Conversely, if the page is cached for three days, the cache has more opportunity to reuse the page, but the server doesn’t have much control over it.
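A back-of-the-envelope model makes the tradeoff visible. This sketch assumes a perfectly steady, evenly spaced request rate (real traffic is burstier), so the numbers are only illustrative:

```python
def cache_hit_ratio(requests_per_minute, ttl_minutes):
    # Crude model: in each TTL window, the first request misses and
    # refills the cache; every other request in the window is a hit.
    requests_per_window = requests_per_minute * ttl_minutes
    if requests_per_window <= 1:
        return 0.0
    return (requests_per_window - 1) / requests_per_window

# A page seeing one request every two minutes gets nothing from a
# one-minute TTL, but is almost always served from cache with a
# three-day TTL.
low = cache_hit_ratio(0.5, 1)             # 0.0
high = cache_hit_ratio(0.5, 3 * 24 * 60)  # close to 1.0
```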

This case — a resource that needs the control of a low TTL, but the efficiency of a high one — is common. Often, a site will have a very diverse set of resources that get sporadic traffic and need decent control (think Wikipedia), and while popular pages will get decent cache efficiency, those lower on the curve will still incur a lot of traffic in aggregate, without being very cacheable.

Some History

This problem isn’t new. People have tried to address it in the past by coming up with “invalidation protocols”; i.e., some out-of-band mechanism for the server to tell caches that something has changed. For example, see Squid’s PURGE method, Oracle’s attempt as part of ESI, and the documents of the WEBI Working Group (including my straw-man).

Sending a message from the server does the job in the simple case, but when you start using it in real systems, you tend to come up against a few problems.

First of all, the server has to keep track of the caches “subscribed” to it. In some deployments this is administratively expensive or impractical; e.g., if there are caches in fifteen different departments of a company, each server in the company has to have an administrative relationship with the caches, so that they can send messages to the right place. Urgh.

Secondly, the server has to track which cache has acknowledged what message. If this isn’t done, you face the unpleasant situation of having a cache store and serve something for longer than it should, if it was down for maintenance or simply out of reach.

Finally, to make this approach work, you have to give your responses artificially high TTLs, so that the cache will store them in the first place. This is bad because if there are any caches out there who don’t get the invalidation message (e.g., browser caches, third party intermediary caches), they’ll store something for longer than you intend.

After thinking about this for a while, I decided that it might be interesting to try a different tack; rather than pushing the messages from the server, to pull them. And, rather than using the messages as invalidations of already-fresh cached responses, instead using them to extend the freshness of almost-stale content incrementally.

Cache Channels

And that’s pretty much what Cache Channels do. A few extra Cache-Control headers associate a channel with a response and allow savvy caches to incrementally extend the freshness of that response as long as two conditions are true:

1) The cache is in touch with the channel, and 2) The channel doesn’t say that the response has become stale.
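Those two conditions amount to a simple decision rule. Here’s a sketch of it in Python; the field names, data shapes, and the 60-second “in touch” window are all illustrative (the draft defines how precisely a cache must track the channel), but the logic follows the description above:

```python
import time

IN_TOUCH_WINDOW = 60  # illustrative; the draft governs this precision

def still_fresh(response, channel, now=None):
    # A response within its max-age is fresh by the ordinary rules.
    now = time.time() if now is None else now
    if (now - response["stored_at"]) < response["max_age"]:
        return True
    # Past max-age, freshness can be extended only if (1) we have
    # heard from the channel recently and (2) the channel has not
    # declared this response stale.
    if (now - channel["last_contact"]) > IN_TOUCH_WINDOW:
        return False
    return response["uri"] not in channel["stale_uris"]
```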

My implementation uses an archived Atom feed to represent the contents of the channel, which the cache polls to stay in touch. This takes the burden of keeping track of subscribed caches away from the server, and as a bonus, since HTTP is used as the transport, the channel itself is cacheable, meaning that scaling this to a large cluster of caches can be done efficiently.
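As a sketch of the cache’s side of that poll, here’s a Python fragment that pulls stale-event URIs out of a channel feed. The real Atom vocabulary for channel events (event types, precision, lifetime) is defined in the Internet-Draft; this simplified version just assumes each entry links to a URI the channel has declared stale:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def poll_channel(feed_xml):
    # Parse a (simplified) channel feed and return the set of URIs
    # that have been declared stale.
    stale = set()
    root = ET.fromstring(feed_xml)
    for entry in root.iter(ATOM + "entry"):
        for link in entry.iter(ATOM + "link"):
            if link.get("href"):
                stale.add(link.get("href"))
    return stale
```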

Additionally, since the knowledge about connectivity resides with the cache, the server doesn’t have to track caches that are out of touch; the cache has enough information to handle problems itself.

And, of course, this approach is more friendly to the HTTP caching model.

But Wait, There’s More…

One of the other problems when you want to manage the contents of a cache is identifying things, funnily enough; often, you don’t know the URI of the response you want to make stale.

For example, imagine that you have a people search interface. One person may have details listed in several responses (e.g., searches for “paris”, “hilton”, and “airhead” may all return information about one person), but you don’t have those URIs on hand.

Cache Channels has a ‘group’ mechanism for this case, where you can associate one or more ‘synthetic’ URIs with a response in addition to its normal request-URI. If a stale message comes into the channel for one of those URIs, any response associated with it will become stale as well.
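In a sketch (field names and URIs are illustrative), the group check is just a set intersection against each cached response’s request URI plus its group URIs:

```python
def stale_responses(cached, stale_uris):
    # Return the request URIs of cached responses made stale by the
    # given channel events; a response matches on its own URI or on
    # any 'group' (synthetic) URI it was associated with.
    events = set(stale_uris)
    hit = []
    for resp in cached:
        uris = {resp["uri"]} | set(resp.get("groups", ()))
        if uris & events:
            hit.append(resp["uri"])
    return hit

# One stale event for a shared group URI invalidates all the search
# results that mentioned that person.
```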

Where are Cache Channels Useful?

These mechanisms probably aren’t terribly useful for your average Web site to implement with random caches on the Internet; a far more typical use case is for caches that are in controlled situations, such as an Intranet or a “reverse proxy.”

Also, as with anything, Cache Channels makes a number of tradeoffs; it can help a lot in certain situations (e.g., lots of URIs with a low rate of change), but less so in others (e.g., a small number of URIs with a high rate of change). It’s just one more tool in the box.

Status and Next Steps

Cache Channels is specified in an Internet-Draft, and the supporting machinery is now present in Squid 2.7.

We specifically designed the core logic to be implemented as a Squid “helper” program, so that the protocol can be tweaked (or outright replaced) if necessary; currently, there’s an implementation in Python.

The other half of things is the machinery for actually publishing channels, which at present is a small Python script that writes static files to be served by a Web server. I suspect that in real systems, this will need to be integrated into publishing systems, etc.

As always, I’d love feedback and especially any insights you have about applicability.


4 Comments

MikeD said:

Thanks for this! I’ve only started to work on decent caching and using the existing cache control mechanisms in HTTP, but it’s nice to know that people are working on solutions for higher class scaling.

Friday, January 4 2008 at 4:19 AM

Rene Leonhardt said:

Great work!

Are the cache channels already usable with squid 3.0?

Sunday, June 15 2008 at 3:35 AM

Colin Jack said:

Definitely useful but I noticed the internet-draft has expired, any plans to revive it?

Saturday, September 19 2009 at 8:13 AM