My take on shared feed lists
No doubt I'll regret stepping into this fracas, but I'm trying to wrap
my brain around the protocol/format. Since we fetch over 31k feeds every
hour, and will want to support whatever comes of this, maybe walking
through what we'd have to do to support it would be educational. At
least to me. ;-) I'm guessing that other aggregators will do roughly the
same as what we'd do.
The subscription file won't change very often, so we'd only try to get
it every N days (or maybe only once). What would be involved in fetching
it? Well, for each site, we have a normal URL and a feed URL, as
specified in the RSS file. Our options are to fetch the HTML from the
normal URL and parse it looking for an appropriate <link> tag, or to
try to fetch the subscription file directly, using the feed URL and a
standard naming convention.
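To make the two options concrete, here's a minimal sketch in Python. Everything specific in it is an assumption: "subscriptions.opml" stands in for whatever naming convention gets agreed on, and rel="subscriptions" stands in for whatever <link> relation gets agreed on.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Option 1: standard naming convention. Derive the subscription-file URL
# directly from the feed URL -- no extra HTML fetch needed. The filename
# "subscriptions.opml" is hypothetical.
def conventional_url(feed_url):
    base = feed_url.rsplit("/", 1)[0] + "/"
    return urljoin(base, "subscriptions.opml")

# Option 2: autodiscovery. Parse the site's HTML looking for a <link>
# tag that advertises the subscription list. The rel token is hypothetical.
class SubscriptionLinkFinder(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.found = None

    def handle_starttag(self, tag, attrs):
        if tag != "link" or self.found:
            return
        d = dict(attrs)
        if (d.get("rel") or "").lower() == "subscriptions":
            # Resolve a possibly relative href against the page URL.
            self.found = urljoin(self.base_url, d.get("href", ""))

def discover_from_html(base_url, html):
    finder = SubscriptionLinkFinder(base_url)
    finder.feed(html)
    return finder.found
```

Option 1 costs one HTTP request per site; option 2 costs a fetch of the (typically larger) HTML page plus a second fetch for the file itself, which is the bandwidth difference described below.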
Obviously, in this case, much less bandwidth will be used if there's
just a standard naming convention and we don't try to look for <link>
tags. We'd only have to fetch one file.
I'm not saying that's necessarily the best solution. A standard naming
convention of course has its drawbacks, and all in all I'm undecided
whether I like it or not. But if we are to discard or deprecate a
standard naming convention, I would *strongly* suggest augmenting the
various RSS protocols so that a feed can specify the location of this
file. That way, we still get the benefits of only having to fetch one file.
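A hedged sketch of what such an augmentation might look like, as a channel-level element in an RSS 2.0 feed. The namespace and element name here are invented for illustration; the point is only that the feed itself names the file's location:

```xml
<rss version="2.0" xmlns:sub="http://example.org/subscription-list-ns">
  <channel>
    <title>Example Weblog</title>
    <link>http://example.com/</link>
    <!-- Hypothetical element: tells aggregators where the shared feed
         list lives, so only the RSS file they already fetch is needed
         to find it. -->
    <sub:subscriptionList>http://example.com/subscriptions.opml</sub:subscriptionList>
  </channel>
</rss>
```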
Thanks,
Mark
--
Mark Fletcher
Bloglines
http://www.bloglines.com