mnot’s blog

“Design depends largely on constraints.” — Charles Eames

Tuesday, 7 February 2006



Interesting; there are not one but two sessions at the upcoming ETech about taking Web applications offline.

Given the current bent of O’Reilly conferences — speed dating for VCs and their willing prey — this is a pretty sure sign that we’re going to see another startup hyped and funded.

This isn’t the first time this topic has been broached, of course, but we haven’t seen too many serious efforts at it. Adam Bosworth noodled on the topic back when we both worked at BEA, but it never really went anywhere (unless he’s deep in the bowels of Google toiling away on it still).

I’d love to see something in this space, and wanted it for a long time, but I’m concerned by the tone of those session write-ups. It looks like they’re doing their best to disassociate, abstract out and generally ignore the Web in the process. Please tell me if I’m wrong.

Back when Adam blogged it, I think we convinced him that a RESTful approach would be simplest and most successful. Ideally, I’d like to see offline operation as just an extension of the HTTP caching model. That way, you don’t have to buy into someone’s application framework to get offline; you just have to get the user to upgrade their browser.
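One hedged sketch of what “offline as an extension of the HTTP caching model” might mean: the only new behaviour sits in the branch taken when the cache is stale and the network is gone; everything else is ordinary freshness checking. The function and the third “stale-while-offline” state here are my invention for illustration, not part of HTTP’s actual caching model:

```python
import time

def serve_from_cache(entry, online, now=None):
    """Decide how to satisfy a request from a cached response.

    entry: dict with 'stored_at' (epoch seconds) and 'max_age' (seconds),
    mirroring a response cached under Cache-Control: max-age.
    Returns 'fresh', 'revalidate', or 'stale-while-offline'.
    """
    now = time.time() if now is None else now
    age = now - entry['stored_at']
    if age <= entry['max_age']:
        return 'fresh'                 # normal HTTP caching: serve without contacting origin
    if not online:
        return 'stale-while-offline'   # the offline extension: serve stale rather than fail
    return 'revalidate'                # online and stale: conditional GET to the origin
```

The appeal of framing it this way is exactly the point in the paragraph above: a browser that already implements HTTP caching only needs the one extra branch, not a whole application framework.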

Am I crazy? Anybody want to help give it a kick? It’s OK to say “yes” to both questions.


Mark Baker said:

Depends what you mean by “extension of the HTTP caching model”. I see “offline” as manageable by a local proxy which could be built into the browser. The problems with this aren’t what I would call caching problems though; they’re basically the same problems you have with HTTP pipelining and determinism (, 3rd paragraph). Then again, I’m not that familiar with the caching model, so perhaps there are more similarities there than I think.
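The local-proxy idea can be sketched as a fetch path that tries the origin first and falls back to its own store when the network is unreachable. The names here (`proxy_fetch`, the dict standing in for the proxy’s response store, the pluggable `fetch` callable) are assumptions for illustration, not from any real proxy implementation:

```python
def proxy_fetch(url, cache, fetch):
    """Fetch through a local proxy: try the origin, fall back to the cache.

    fetch is any callable that returns a response body or raises OSError
    on network failure (e.g. a thin wrapper around urllib); cache is a
    dict standing in for the proxy's response store.
    """
    try:
        body = fetch(url)
        cache[url] = body                # store every successful response
        return body, 'origin'
    except OSError:
        if url in cache:
            return cache[url], 'cache'   # offline: serve the stored copy
        raise                            # offline and never seen: nothing we can do
```

As Mark’s comment suggests, reads are the easy half; the hard, pipelining-like determinism problems show up once you start queuing writes through the same proxy.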

We did some work in this space at Idokorro in 2000/2001, but didn’t get too far because it was on the Blackberry, and it’s basically always connected.

Wednesday, February 8 2006 at 5:17 AM

Vincent D Murphy said:

I’m pretty sure TimBL is on the record as saying that requests should be logged by the user agent so a user can refer back to them at a later date. Like a ‘Sent’ email mailbox.

Taking this step would be a good start in my opinion; it also seems to be the essence of “panic-mode”, one of the sessions you linked to.

Wednesday, February 8 2006 at 5:22 AM

levin said:

There is a nice Proxy called MouseHole, that may be of interest here.

It has the ability to rewrite requests and inject Greasemonkey scripts into web pages on the fly. It is also a local HTTP server and has the ability to transparently ‘mount’ web applications to arbitrary URLs.

Wednesday, February 8 2006 at 8:09 AM

levin said:

… forgot the url:


Wednesday, February 8 2006 at 8:51 AM

James said:

“There is a nice Proxy called MouseHole, that may be of interest here. It has the ability to rewrite requests and inject Greasemonkey scripts into web pages on the fly.”

Actually, it executes Ruby code against the page, on its way from the proxy to the browser. Quite slick.

Sunday, February 26 2006 at 2:18 AM

Julien Couvreur said:

Having implemented some prototype online/offline web applications ([0] and [1]), I worry that it’s a larger problem than a caching issue. Maybe I’m just missing some details of how the local cache or proxy would work.

You could proxy and cache the POSTs when disconnected, but what if you expect a result back from that POST?

How do you restore the state of your app when you reload the page? Do you re-build it based on the queued POSTs?

Also, how do you factorize queued POSTs? For example, I make change A while disconnected, then change B, to the same document. Ideally, you’d want a single server operation when getting back online, for change A+B.

In general, I’d think that the offline logic needs to be tightly built into the app. The code needs to be very aware of the disconnected mode.
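The factorization question can be sketched under the strong assumption that every queued write has replace semantics, so the last write to a resource wins. That assumption is doing all the work here; real POSTs with append or increment semantics can’t be merged this blindly, which is exactly the point that the offline logic has to be built into the app:

```python
def coalesce(queue):
    """Collapse queued offline writes so each resource is replayed once.

    queue: list of (method, url, body) tuples recorded while disconnected.
    Assumes PUT-style replace semantics: for each URL, only the latest
    write matters. Replay order of first-touched resources is preserved.
    """
    latest = {}   # url -> (method, body) of the most recent write
    order = []    # urls in the order they were first written
    for method, url, body in queue:
        if url not in latest:
            order.append(url)
        latest[url] = (method, body)
    return [(latest[u][0], u, latest[u][1]) for u in order]
```

So change A followed by change A+B to the same document replays as one operation, while writes to other documents are untouched.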

Cheers, Julien

[0] [1]

Friday, June 9 2006 at 4:50 AM