Re: Second Last Call: <draft-hammer-hostmeta-16.txt> (Web Host Metadata) to Proposed Standard -- feedback

On 6/21/11 11:08 PM, Mark Nottingham wrote:
> Generally, it's hard for me to be enthusiastic about this proposal,
> for a few reasons. That doesn't mean it shouldn't be published, but I
> do question the need for it to be Standards Track as a general
> mechanism.

How about publishing it on the standards track but not as a general
mechanism (i.e., why not clarify when it is and is not appropriate)?

Clearly, both service providers (Google, Yahoo, etc.) and spec authors
(draft-hardjono-oauth-dynreg-00, draft-hardjono-oauth-umacore-00) have
found hostmeta somewhat useful in certain contexts.

RFC 2026 says:

   A Proposed Standard specification is generally stable, has resolved
   known design choices, is believed to be well-understood, has received
   significant community review, and appears to enjoy enough community
   interest to be considered valuable.

and:

   Usually, neither implementation nor operational experience is
   required for the designation of a specification as a Proposed
   Standard.  However, such experience is highly desirable, and will
   usually represent a strong argument in favor of a Proposed Standard
   designation.

The spec seems to be stable at this point; it has received
significant review; people seem to understand what it does and how it
works; it has had both implementation and operational experience; and
it appears to enjoy enough community interest to be considered
valuable in certain contexts. I also think it has resolved its design
choices and met the requirements it set out to meet, although you
might be right that it doesn't solve all of the problems that a more
generic metadata framework would need to solve.

As a result, it seems like a fine candidate for Proposed Standard, i.e.,
an entry-level document on the standards track that might be modified or
even retracted based on further experience.

> Mostly, it's because I haven't really seen much discussion of it as a
> general component of the Web / Internet architecture; AFAICT all of
> the interest in it and discussion of it has happened in more
> specialised / vertical places. 

Again, perhaps we need to clarify that it is not necessarily a general
component of the web architecture, although it can be used to solve more
specific problems.

> The issues below are my concerns;
> they're not insurmountable, but I would have expected to see some
> discussion of them to date on lists like this one and/or the TAG list
> for something that's to be an Internet Standard.
> 
> 
> * XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe
> I'm just scarred by WS-*, but it seems very over-engineered for what
> it does. I understand that the communities had reasons for using it
> to leverage an existing user base for their specific use cases, but
> I don't see any reason to generalise such a beast into a generic
> mechanism.

As discussed in responses to your message, XRD seems to have been an
appropriate tool for the job in this case. Whether XRD, too, is really a
general component of the web architecture is another question.
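
For readers following along, a complete host-meta document in XRD
form can be quite small. Here is a minimal sketch (the example.com
endpoints are hypothetical; the 'lrdd' link relation and the {uri}
template syntax come from the draft itself):

   <?xml version='1.0' encoding='UTF-8'?>
   <XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
     <!-- a host-wide property -->
     <Property type='http://example.com/color'>blue</Property>
     <!-- per-resource metadata, reached via a URI template -->
     <Link rel='lrdd' type='application/xrd+xml'
           template='https://example.com/lrdd?uri={uri}'/>
   </XRD>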

> * Precedence -- In my experience one of the most difficult parts of a
> metadata framework like this is specifying the combination of
> metadata from multiple sources in a way that's usable, complete and
> clear. Hostmeta only briefly mentions precedence rules in the
> introduction.

That could be something to work on if and when folks try to advance this
technology to the next maturity level (currently Draft Standard).

> * Scope of hosts -- The document doesn't crisply define what a "host"
> is.

This seems at least somewhat well-defined:

   a "host" is not a single resource but the entity
   controlling the collection of resources identified by Uniform
   Resource Identifiers (URI) with a common URI host [RFC3986].

That is, it references Section 3.2.2 of RFC 3986, which defines "host"
with some precision (albeit perhaps not "crisply").
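
As a quick illustration, the following (hypothetical) URIs all share
the RFC 3986 host component, so a single host-meta document speaks
for the whole collection:

   from urllib.parse import urlsplit

   # One "host" in the RFC 3986 sense covers many resources.
   uris = ["https://example.com/photos/123",
           "https://example.com/blog",
           "https://example.com/.well-known/host-meta"]
   print({urlsplit(u).hostname for u in uris})  # {'example.com'}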

> * Context of metadata -- I've become convinced that the most
> successful uses of .well-known URIs are those that have commonality
> of use; i.e., it makes sense to define a .well-known URI when most of
> the data returned is applicable to a particular use case or set of
> use cases. This is why robots.txt works well, as do most other
> currently-deployed examples of well-known URIs.
> 
> Defining a bucket for potentially random, unassociated metadata in a
> single URI is, IMO, asking for trouble; if it is successful, it could
> cause administrative issues on the server (as potentially many
> parties will need control of a single file, for different uses --
> tricky when ordering is important for precedence), and if the file
> gets big, it will cause performance issues for some use cases.

It would be helpful to hear from folks who have deployed hostmeta
whether they have run into any operational issues of the kind you
describe here.

> * Chattiness -- the basic model for resource-specific metadata in
> hostmeta requires at least two requests; one to get the hostmeta
> document, and one to get the resource-specific metadata after
> interpolating the URI of interest into a template.
> 
> For some use cases, this might be appropriate; however, for many
> others (most that I have encountered), it's far too chatty. Many use
> cases find the latency of one extra request unacceptable, much less
> two. Many use cases require fetching metadata for a number of
> distinct resources; in this model, that adds a request per resource.
> 
> I'd expect a general solution in this space to allow describing a
> "map" of a Web site and applying metadata to it in arbitrary ways, so
> that a client could fetch the document once and determine the
> metadata for any given resource by examining it.

That sounds like good input to a more generic approach. Many discovery
protocols do seem to have an inherent chattiness about them, but you
might be right that ideally the round trips could be kept to a minimum.
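
For what it's worth, the flow you describe looks roughly like this (a
simplified sketch in Python; error handling, the JSON serialization,
and full template processing are omitted, and the endpoints are
whatever the host publishes):

   import urllib.request
   from urllib.parse import quote
   import xml.etree.ElementTree as ET

   XRD_NS = '{http://docs.oasis-open.org/ns/xri/xrd-1.0}'

   def resource_metadata(resource_uri, host):
       # Round trip 1: fetch the host-wide host-meta document.
       with urllib.request.urlopen(
               'https://' + host + '/.well-known/host-meta') as resp:
           xrd = ET.fromstring(resp.read())
       # Find an 'lrdd' link and interpolate the percent-encoded
       # resource URI into its template.
       for link in xrd.findall(XRD_NS + 'Link'):
           if link.get('rel') == 'lrdd' and link.get('template'):
               lrdd_uri = link.get('template').replace(
                   '{uri}', quote(resource_uri, safe=''))
               # Round trip 2: fetch the per-resource metadata.
               with urllib.request.urlopen(lrdd_uri) as resp:
                   return resp.read()
       return None

Fetching metadata for N distinct resources repeats the second request
N times, which is the per-resource cost you mention.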

> If hostmeta is designed for specific use cases and meets them well,
> that's great, but it shouldn't be sold as a general mechanism. So,
> I'm -1 on this going forward as a standards-track general mechanism.
> I wouldn't mind if it were Informational, or if it were
> Standards-Track but with a defined use case.

Again, I think some clarifying language about the applicability of the
technology is in order.

> Apologies for giving this feedback so late in the process; I knew
> hostmeta was there, just didn't have time to step back and think
> about it.

And my apologies for taking so long to reply.

Peter

-- 
Peter Saint-Andre
https://stpeter.im/



