mark nottingham

Are Resource Packages a Good Idea?

Thursday, 18 February 2010

HTTP

Resource Packages is an interesting proposal from Mozilla folks for binding together bunches of related data (e.g., CSS files, JavaScript and images) and sending it in one HTTP response, rather than many, as browsers typically do.

Intuitively, this seems to make sense; making fewer HTTP requests is good, right?

Maybe, maybe not. AFAICT, there aren’t any metrics comparing RP vs. traditional sites (has anyone done this?). In any case, a few concerns come to mind about this approach to making the Web faster.

Packaging and the Web

RP doesn’t have any generic metadata mechanism. The files in an RP are just that: bags of bits, whereas on the Web, we work with representations that include metadata.

So, clients will have to sniff the media type of each individual package member, something we’re trying to get away from on the Web. And forget about using other kinds of header-based metadata as well.

For example, the draft points out that you can “even use ETags to invalidate the zip file when needed”. However, if a cache already has existing entries, the only things linking them to the package are their URLs; since zip files don’t carry much more metadata than a modification time, a cache doesn’t know their ETags.
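
To make the metadata gap concrete, here is a minimal sketch (my own illustration; “site-resources.zip” is a hypothetical package, read with Python’s standard zipfile module) of everything the zip directory records about each member:

    import zipfile

    # List the per-member metadata that the zip format actually carries.
    with zipfile.ZipFile("site-resources.zip") as package:
        for info in package.infolist():
            # A name, a timestamp, sizes and a CRC; that is about it.
            # No Content-Type, no ETag, no Cache-Control, no Content-Language.
            print(info.filename, info.date_time, info.file_size, info.CRC)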

Much better would be a generic, Web-centric packaging format, like MIME Multipart or Atom. It’s true that ZIP tools are more prevalent, but I’d be surprised if that were a barrier once browsers deployed another format; when that happens, developers tend to fill the gaps quickly.
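
For illustration only (a rough sketch of the idea, not something the draft or any browser defines; the member contents, ETags and cache directives here are made up), this is roughly how a multipart package that keeps per-member headers could be put together with Python’s standard email library:

    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    # A multipart/related container; each member keeps its own headers.
    package = MIMEMultipart("related")

    css = MIMEText("body { color: #333; }", "css")
    css["Content-Location"] = "/static/site.css"
    css["ETag"] = '"abc123"'               # the per-member validator survives packaging
    css["Cache-Control"] = "max-age=3600"  # and so does freshness information
    package.attach(css)

    js = MIMEText("console.log('hi');", "javascript")
    js["Content-Location"] = "/static/site.js"
    js["ETag"] = '"def456"'
    package.attach(js)

    # Each part serializes with its media type and other metadata intact.
    print(package.as_string())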

Getting Granularity and Ordering Right

Another concern I have is that Web sites are complex, and it’s difficult to choose exactly what to package up and what to leave separate.

The effects of packaging up too much could be profound; for example, a site that doesn’t use every bit of JavaScript and CSS on every page, but puts them all in a package, will cause a client to download more than it needs to start working on a given page.

While that isn’t a big deal if you’re sitting on a fat connection with a fast computer in your office, it matters when you’re across the world, or just browsing across a mobile network.

It’ll also create a lot of duplication in proxy and accelerator caches, since clients that don’t use RP will request the same things separately.

Likewise, if you don’t order the items in your package as the browser needs them, it will have a negative impact on performance, because the rendering engine will end up sitting around waiting for a required asset to come down the pipe. In effect, it’s enforcing head-of-line blocking on every response contained in the package.

I strongly suspect that choosing the right package granularity and ordering is going to be a very difficult and performance-sensitive task, and for many sites the interdependencies between JS, CSS and images will burn a lot of developer time tweaking packages.

The worst case is that some RP-enabled sites will resemble a Flash site from a UX perspective; one big serial download with a “waiting” graphic, followed by snappy performance. I don’t know about you, but I hate that.

Working with TCP

Finally, RP seems to be built on the argument that using fewer TCP connections is better. While it’s true that browsers currently limit a page to six or eight connections, and any connections over that queue up, this is a) changing (see this issue), and b) not necessarily a bad thing.

It’s not (necessarily) bad because of TCP slow-start. As pointed out by Google and many others, a brand-new TCP connection’s throughput is fairly restricted until it has completed a number of round trips, and congestion (e.g., buffers in intervening routers filling up) can slow it right back down again.

In other words, downloading 20 10k assets across eight parallel connections is often faster and more reliable than downloading one 200k asset over one connection. Browsers — intentionally or not — exploit this by using multiple parallel connections.
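
As a back-of-envelope sketch (my own assumptions, not measurements: a 1460-byte MSS, an initial congestion window of three segments, the window doubling each round trip, no loss), the single big download needs about twice as many round trips as each of the parallel small ones:

    # Rough slow-start model: one congestion window per round trip, doubling each time.
    MSS = 1460          # bytes per segment (assumption)
    INITIAL_CWND = 3    # segments (assumption, typical for stacks of that era)

    def round_trips(payload_bytes, cwnd=INITIAL_CWND, mss=MSS):
        sent, rtts = 0, 0
        while sent < payload_bytes:
            sent += cwnd * mss   # deliver one window's worth of data
            cwnd *= 2            # exponential growth during slow-start
            rtts += 1
        return rtts

    print(round_trips(200 * 1024))           # one 200k asset on one connection: 6
    print(round_trips(20 * 10 * 1024 // 8))  # ~25k per connection across eight: 3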

As such, putting all of your data eggs in one basket (as it were) can actually slow you down, never mind the ordering and granularity issues discussed above.

So, is RP a good idea?

Well, it’s certainly an interesting one, and in some cases — e.g., when you have a lot of very small assets that you know you’re going to use — it makes a lot of sense.

However, as it sits I don’t see any numbers quantifying a benefit (again, please correct me if I’m wrong!), and the existing recommendations (“serve all the resources… required by a page in a single HTTP request”) are a bit worrisome.

Putting that aside, this doesn’t feel like a long-term solution; it’s more of a band-aid over one set of specific problems in the 2010 Web.

I’m pretty biased towards a long-term solution here, because the cost of deploying clients is so high. While it’s true that more aggressive solutions like SPDY require both client and server support, server support isn’t hard to get once it’s in clients, and RP requires client support anyway.

So, I’d put forth that if we’re going to go to the effort to change clients, we should get the most bang for our buck, and make sure it lasts. Just my .02.


18 Comments

Anne van Kesteren said:

Agreed that SPDY looks like a much nicer solution than this. It is also a lot more complex though.

Thursday, February 18 2010 at 10:34 AM

kinkie said:

It would seem to me that this is just a matter of reinventing the wheel. Persistent connections will do just that, with all the metadata and so on.

The only thing I’d consider interesting is to use and/or define some non-JavaScript method of hinting to the user-agent, early in the main HTML data stream, that some extra resources are going to be used, e.g. by using some form of HTML tag and/or the Link: HTTP header.

This would have many advantages:

  • improve cacheability
  • preserve metadata
  • let user-agent optimize the network flows as they see fit
  • let page authors define dynamically what they need (it is an optimization)
  • avoid extra decoding steps in the data-fetch phase (a .zip has to be unpacked)
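
As a rough sketch of the Link-header variant (purely illustrative: the resource names are made up, and rel=prefetch is just one existing relation such a hint could reuse), a server could advertise upcoming resources like this:

    # Hypothetical WSGI app: the HTML response carries Link headers so the
    # user-agent can start fetching extra resources early, on its own terms.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [
            ("Content-Type", "text/html"),
            ("Link", "</static/site.css>; rel=prefetch"),
            ("Link", "</static/site.js>; rel=prefetch"),
        ])
        return [b"<html>...</html>"]

    make_server("", 8000, app).serve_forever()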

Thursday, February 18 2010 at 11:57 AM

Jos Hirth said:

“It’ll also create a lot of duplication in proxy and accelerator caches, since clients that don’t use RP will request the same things separately.”

Clients which don’t know what RPs are won’t download them.

“[…] one big serial download with a ‘waiting’ graphic, followed by snappy performance.”

Using an archive doesn’t mean you have to wait for the whole thing. If the archive isn’t solid, each file is compressed individually and then the whole thing is glued together.

Also, Deflate (zip, gzip, swf, png, etc.) uses a sliding 32 KB window, which means you can start decompressing as soon as you’ve got at least 32 KB. After that you can continue decompressing with every byte you get.
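
For instance (my own sketch, nothing specific to the RP draft), Python’s zlib module exposes exactly this kind of incremental decompression, so each chunk can be decompressed as it arrives:

    import zlib

    def stream_decompress(chunks):
        # Feed the decompressor piece by piece, yielding output as it comes,
        # instead of waiting for the complete download.
        decompressor = zlib.decompressobj()
        for chunk in chunks:
            out = decompressor.decompress(chunk)
            if out:
                yield out
        yield decompressor.flush()

    # Simulate a download arriving in 1 KB pieces.
    payload = zlib.compress(b"x" * 100000)
    pieces = (payload[i:i + 1024] for i in range(0, len(payload), 1024))
    print(sum(len(part) for part in stream_decompress(pieces)))  # 100000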

The real problem with using Zip is that it’s a rather unsuitable format for this stuff. Using some default format is of course a really nice thing, but the features which are very important in this context are generally very poorly supported by all the tools I know.

E.g. UTF-8 file names are possible (according to the specs) but there is very little support for this feature. The trailing index is very handy when it comes to updating huge archives, but this otherwise sensible design decision is completely counterproductive in the RP context. Also, changing the order isn’t supported in general, since it conflicts with the considerations which led to the trailing index.

The goal of using a default format is noble, but it’s misguided. What we actually need is a format which can be easily written with existing standard libraries. A custom format which uses a leading header and Deflate for compression would fit the bill just fine. Writing such a file wouldn’t be much harder than writing a Zip, and all those silly limitations would be gone.

“Putting that aside, this doesn’t feel like a long-term solution; it’s more of a band-aid over one set of specific problems in the 2010 Web.”

I also wondered about this. We’ve already got CSS, JS, and the document itself covered. They can be merged (CSS/JS, that is), minified, and gzipped. Three connections and one for the favicon; I’m fine with that.

What’s missing now are images. There are content images, which should be kept separate either way and loaded on demand. And then there are all those tiny layout images. Using sprites is a major pain in the rear. RPs or something similar would be awesome for that.

However, CSS is getting more and more options for procedural graphics: multiple borders, rounded corners, text-shadow, box-shadow, gradients, and whatever. Which leaves us with maybe one image for the logo and one very simple sprite sheet with a few icons.

I’d really like to use RPs right now. But in a couple of years? I’m not sure I’d need them as badly then as I do now.

SPDY on the other hand has a lot of potential to improve, well, everything. CSS3+ and RPs will/would only affect the front-end performance of a few sites, whereas some under-the-hood thing like SPDY will find its way to just about any site sooner or later.

Friday, February 19 2010 at 1:01 AM

Steve Souders said:

The general problem being addressed is the overhead of multiple HTTP requests. “Overhead” includes TCP slow-start, (repeated, uncompressed) HTTP headers, and delays from handling requests/responses sequentially (and more). There are multiple solutions being proposed: resource packages, SPDY, and pipelining are the most discussed. Each of these, viewed in isolation, has benefits, and I’m in favor of them. But what we really need to do is evaluate them together. If we had SPDY, the need for resource packages would be less. If pipelining worked, that would mitigate the impact of SPDY. You get what I’m saying. I’d enjoy a blog post from you, Mark (or Arvind - hint hint), comparing at least these three alternatives and identifying the tradeoffs and feasibility of implementation and adoption.

Friday, February 19 2010 at 2:50 AM

Brian Smith said:

A “solid” archive is one in which the deflate window is not reset for each file in the archive (e.g. tar.gz, tar.bz). A zip file is not solid; each file is compressed independently. A “solid” archive will almost always be compressed more compactly.

Friday, February 19 2010 at 6:21 AM

Dorian Taylor said:

Do resource packages consider content negotiation?

Friday, February 19 2010 at 7:05 AM

Kris Zyp said:

I agree, it seems like SPDY has the potential to solve numerous other problems (effective pushing) and preserve metadata/caching in a high-performance, low-latency package.

Friday, February 19 2010 at 7:09 AM

Alexander Klimetschek said:

Regarding SPDY [1]:

Interesting approach but at a glance I see two problems:

  • the speedup measurements are in the range of 20-40%, good, but not really worth the complexity IMO, especially when improvements to HTTP might come

  • it breaks the fundamental stateless constraint of HTTP (at least given that it allows multiple parallel requests per connection and caches “static” headers); even if it is only a library in front of the server that opens normal HTTP requests internally, to support existing servers, I see potential for congestion here; it probably can be solved, but their custom in-memory server used for the performance tests is far from reality

All in all I see no easy migration step to SPDY (maybe only for httpd and other “static” webservers), and in the meantime things might have improved otherwise. (Though this is far from expert knowledge, just a gut feeling)

[1] http://www.chromium.org/spdy/spdy-whitepaper

Friday, February 19 2010 at 8:21 AM

Martin Lierschof said:

There are a lot of questions to consider on the server’s and the client’s part, which I would file under “system specific”; which of them should be measured, and where?

For example, have a look at the server part which serves static files: in Linux there are a few different methods to serve static files, especially for the read part, which bugs every httpd developer (a good comparison for lighty using writev, linux-sendfile, gthread-aio, posix-aio and linux-aio-sendfile is here: http://www.lighttpd.net/benchmark/). My concern, especially with adding a complex layer like pipelining, generating checksums, compressing, decompressing and so on, is what it costs on the server and client side. And there are n different system combinations out there consuming this information.

Is there a formula yet which includes this? A formula for clients like: (“take an iPhone” + “take an Ubuntu workstation on a 2.0 kernel with an i5 and ISDN and use Konqueror” + “take a Windows XP box with an AMD XP 500 and use Firefox 2 with 1.5 Mbit DSL” + “take an ARM-based smartphone with Symbian and HSDPA and Opera version x”) / n consumers * MB consumed squared by pipes used / current connections, and so on. I guess not! For sure there is no formula for all this. But the fact is that this huge amount of data evaluation is missing, both for the proposals and for the people building them. I would love to see Google, Mozilla or any working group start serious public projects to evaluate more details about these facts.

Sure, I’ve left out the question of new features.

So who’s gonna start this?

Friday, February 19 2010 at 9:06 AM

rob yates said:

There’s something to the proposal, given how prevalent CSS Sprite usage (http://www.alistapart.com/articles/sprites) is. We use CSS Sprites and have seen real improvements in response time. This proposal would appear to be much easier to use.

Friday, February 19 2010 at 11:26 AM

Jos Hirth said:

@Mark

“Then intermediary caches will have both the RP version and the ‘normal’ version in cache.”

Yes, indeed. I just didn’t understand what you meant right away. My bad.

“[…] if the contents of the package aren’t in the right order, it may block rendering.”

Well, it’s possible to add them in any order you like. However, usually it’s effectively random: you traverse the directories recursively and add files in the order you get them (typically the order they were created in, which means the order changes if you do a rollback).

If you write some library which helps you prioritize those files, you could just as well write a lib to write a custom (Deflate-based) format. There really isn’t anything to gain by using Zip files.

Using archives can be pretty neat though. But it should be a format which was created with this very specific use-case in mind. (Leading header, UTF-8 file names, mime types, progressive loading should work, etc.)

Also, Zip is a rather messy format with many edge cases. Half of the spec would be completely pointless here, but should browsers support that stuff anyway? Do we really need Deflate64, BZip2, LZMA, and PPMd? The Zip format supports all of those, and there might be lots of submarine patents lurking in the dark. Deflate isn’t all that good by today’s standards, but we can be sure that it’s 100% patent-free. There are also at least two ways to encrypt files, and you can split archives across several volumes. That doesn’t really make much sense in this context, does it?

“Faster Web sites shouldn’t be just for those folks with the time and resources to obsessively tweak.”

I agree. Even if I sorta prefer it if my pages perform quite a bit better than those of the competition. ;)

Saturday, February 20 2010 at 1:07 AM

Leen Besselink said:

“I also wondered about this. We already got CSS, JS, and the document itself covered. They can be merged (CSS/JS that is), minified, and gzipped. 3 connections and one for the favicon, I’m fine with that.

What’s missing now are images.”

For images we also already have the data-urls.

Sunday, February 21 2010 at 12:06 PM

Bill de hOra said:

@leen

“For images we also already have the data-urls.”

A 33% blowup, which sucks for the network. It also helps make the page uncacheable.
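
The arithmetic behind that figure (an illustration with arbitrary bytes): base64 turns every 3 input bytes into 4 output characters, so the encoded body is roughly a third bigger.

    import base64

    raw = bytes(range(256)) * 40  # 10,240 bytes of arbitrary "image" data
    encoded = base64.b64encode(raw)
    print(len(raw), len(encoded), len(encoded) / len(raw))  # 10240 13656 ~1.33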

@mnot

Coming at this from a mobile network angle: phones can’t always spin up multiple connections that don’t block the UI. I hear about regular issues with Brew, for example, and about J2ME being fairly limited.

“In effect, it’s enforcing head-of-line blocking on every response contained in the package.”

Right, but so many handsets are limited in the number of connections they can spin up that it sometimes doesn’t matter; they’re HOL-blocked on the client anyway. The goal is to avoid TCP/IP connections (as they get sequenced), reduce payload size, and if you can avoid compression/base64 that’s got some awesome in it for the battery gain. It’s like targeting 2010 browser payloads against 1998-capable clients and networks.

“Much better would be a generic, Web-centric packaging format, like MIME Multipart”

Right, that avoids the decompression overhead for clients that want to avoid burning their battery. But multipart MIME is so easy to get wrong.

“or Atom”

12 years of SOAP should tell us that XML sucks at packaging media. Plus, Atom has too much blogging baggage for this use case (a “title” on a CSS file makes no sense).

Maybe what we need is an updated multipart structure, one that supports paging or URL referencing, so the server can decide to inline based on network quality, or the client can ask for what it wants based on its capabilities. And one that excludes “Transfer-Encoding: chunked” gorp (again, something that causes plenty of field issues for handsets that aren’t expecting it).

@martin

“Is there a formula yet which includes this? […] For sure there is no formula for all this. But the fact is that this huge amount of data evaluation is missing, both for the proposals and for the people building them. I would love to see Google, Mozilla or any working group start serious public projects to evaluate more details about these facts.”

You nailed it. No-one has much of a clue for what’s optimal across client/network/server combos. And there’s much more to the mobile web than what browsers need to render.

Tuesday, February 23 2010 at 6:02 AM

joey said:

Resource packages are an excellent way to distribute web components. I mean, an MP3 player in a single SWF file is really easy to use, but having to insert multiple JS/CSS files, copy them to the server in the right place, correct some paths if needed, and maybe write some more JS is just a pain.

With a resource package, you can embed all the JS, CSS and images in a single file, and even fire an onload handler to search the document for a specific tag and replace it with the component. That’s even better than an SWF file.

SPDY doesn’t address this kind of issue; it’s “just” a technical thing. Resource Packages are more than that, so they shouldn’t be judged only on the performance side.

Saturday, February 27 2010 at 2:48 AM