
Re: Finding Feeds



> What I'm saying is that there needs to be an easy way to find a
> relevant feed *when you're looking at a page*. In the long run, this
> probably means through a META tag, or similar, when browsers support
> it. In the short term, it probably means a way that someone can put 
> a link on their page that says something to the effect of "feed 
> here", which, when followed, will pass that URI to their aggregator
> automagically.

(These are probably obvious points)

A common method is to put one of the [XML] icons on the page as an 
HREF to the feed in XML format, of which there are several flavors.  
Mike Krus' NewsIsFree.com goes one better and makes a blue [XML] 
button represent a scraped (or synthetic) feed.
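For what it's worth, that [XML]-button pattern is just an ordinary 
anchor wrapped around the icon image (the URLs and filenames here are 
made up for illustration):

```html
<!-- Sketch of the usual [XML] button; URLs are illustrative only -->
<a href="http://example.com/index.rss">
  <img src="/images/xml.gif" alt="XML" border="0">
</a>
```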

I agree that it would be tremendously helpful if a page had a meta 
tag on it that something 'smart' could understand.  Putting another 
protocol handler into IE isn't all that hard; getting people to 
agree on one, well, that's the hard part. 
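A sketch of what such a tag might look like, if anyone ever agreed 
on one (the rel and type values here are guesses, not any standard):

```html
<!-- Hypothetical discovery tag in the page's <head>; values are illustrative -->
<link rel="alternate" type="text/xml" href="http://example.com/index.rss">
```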

I wonder if anyone's done something like a soap://server:port/service 
protocol handler extension?  Being able to use a service that 
understands how to 'tell' your browser how to respond might be a 
start.
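In the same spirit, the handler side could be as dumb as rewriting 
the custom scheme back into a plain URL and handing it to whatever 
aggregator is registered locally.  A minimal sketch, assuming a 
made-up feed:// scheme (nothing here is a standard):

```python
# Sketch: a minimal handler for a hypothetical "feed:" URI scheme.
# The scheme name and the rewrite rule are assumptions, not a spec.
from urllib.parse import urlparse

def handle_feed_uri(uri):
    """Translate a feed:// URI into the plain http:// URL of the feed,
    which a local aggregator could then fetch and subscribe to."""
    parts = urlparse(uri)
    if parts.scheme != "feed":
        raise ValueError("not a feed: URI")
    # Rewrite the scheme back to http so the feed can be fetched normally.
    return "http://" + parts.netloc + parts.path

print(handle_feed_uri("feed://example.com/index.rss"))
```

The real work, of course, is getting the browser to invoke something 
like this at all.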

One large hurdle is independence, and it's not easily surmountable.  
Most sites and services actually want to remain independent; we've 
all seen what happens when a centralized service implodes 
(DejaNews?).  Beyond that, it's fundamentally easier to program your 
local service if it only interacts with things under your direct 
control.  

Yeah, it's a puzzle, and it definitely stands in the way of wider-
scale consumption of syndicated services.  For the foreseeable 
future, using portal services seems the safest route.

-Bill Kearney