
Re: [syndication] site-wide metadata discovery



> Chad's suggestion is ingenious and creative. But robots.txt is generally
> considered a bad design (as outlined in previous posts), and remains a
> kind of erratic left over from the early days.
>
> OPML is a format based on XML, but because it can be adapted at will it
> can't be validated like most XML languages. Anything you could do with OPML
> you could do with a purpose-built XML language, without losing that facility.
>
> So it's possible to implement autodiscovery by combining the two? Great. It
> shows how versatile our systems have become.
>
> But I really don't understand why you would choose to build a system from
> the worst raw materials available.
>
> Might it not be worth considering *good* practice for a little while?

Wouldn't that be shocking.  I'd venture some folks would fall over dead if they
started doing it that way.

I think the goal here is to find as many ways as possible to productively get
people with content plugged into ways of exposing its existence.

If people have the ability to use <link> tags then we ought to encourage it.  If
people can't, and several learned participants here are saying they know places
that can't, then we need to find a reliable and extensible means for them to
play along.
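For anyone unfamiliar with the <link> convention mentioned above, here's a minimal sketch of what a consumer does: scan a page's <head> for <link rel="alternate"> tags whose type declares an RSS or Atom feed. The example page and feed URL are made up for illustration; it uses only Python's standard library parser.

```python
from html.parser import HTMLParser

# MIME types conventionally used for feed autodiscovery links.
FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkParser(HTMLParser):
    """Collect the href of every <link rel="alternate"> feed tag."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel", "").lower() == "alternate" and a.get("type") in FEED_TYPES:
            self.feeds.append(a.get("href"))

# Hypothetical page markup; only the <link> tag matters here.
page = """<html><head>
<title>Example weblog</title>
<link rel="alternate" type="application/rss+xml"
      title="Site feed" href="http://example.com/index.rss">
</head><body>...</body></html>"""

parser = FeedLinkParser()
parser.feed(page)
print(parser.feeds)  # ['http://example.com/index.rss']
```

The point is how little machinery a publisher needs: one tag per page, and any aggregator can find the feed without site-specific knowledge.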

As another list member pointed out to me, another important thing to
consider is delegation.  Let's make sure this tool offers a way for the people
running one layer of a site to clearly hand the ball off to another layer.
That should help avoid scalability issues while using the admittedly weak
robots.txt file as the starting place.

-Bill Kearney