Second Last Call: <draft-hammer-hostmeta-16.txt> (Web Host Metadata) to Proposed Standard -- feedback

Generally, it's hard for me to be enthusiastic about this proposal, for a few reasons. That doesn't mean it shouldn't be published, but I do question the need for it to be Standards Track as a general mechanism.

Mostly, it's because I haven't really seen much discussion of it as a general component of the Web / Internet architecture; AFAICT, all of the interest in it and discussion of it has happened in more specialised / vertical places. The issues below are my concerns; they're not insurmountable, but for something that's to be an Internet Standard, I would have expected to see some discussion of them by now on lists like this one and/or the TAG list.


* XRD -- XRD is an OASIS spec that's used by OpenID and OAuth. Maybe I'm just scarred by WS-*, but it seems very over-engineered for what it does. I understand that those communities had reasons for using it -- leveraging an existing user base for their specific use cases -- but I don't see any reason to generalise such a beast into a generic mechanism.


* Precedence -- In my experience, one of the most difficult parts of a metadata framework like this is specifying how metadata from multiple sources is combined, in a way that's usable, complete, and clear. Hostmeta only briefly mentions precedence rules in the introduction.
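
To illustrate the kind of question I'd want the spec to nail down (a toy sketch only -- the names and merge strategies below are mine, not the draft's): if a host-wide document and a resource-specific document each supply a Link for the same relation, which one a client ends up with depends entirely on how the two lists are combined.

  # Toy illustration; nothing here is taken from draft-hammer-hostmeta.
  host_links = [{"rel": "author", "href": "http://example.com/site-owner"}]
  lrdd_links = [{"rel": "author", "href": "http://example.com/page-author"}]

  def first_match(links, rel):
      # "First matching Link wins" -- one plausible reading.
      return next((l["href"] for l in links if l["rel"] == rel), None)

  # Resource-specific first, host-wide as fallback:
  first_match(lrdd_links + host_links, "author")  # -> .../page-author
  # Host-wide first:
  first_match(host_links + lrdd_links, "author")  # -> .../site-owner

Same inputs, different answers; that's exactly what precedence rules need to pin down, and a brief mention in the introduction doesn't do it.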


* Scope of hosts -- The document doesn't crisply define what a "host" is.


* Context of metadata -- I've become convinced that the most successful uses of .well-known URIs are those that have commonality of use; i.e., it makes sense to define a .well-known URI when most of the data returned is applicable to a particular use case or set of use cases. This is why robots.txt works well, as do most other currently-deployed examples of well-known URIs.

Defining a bucket for potentially random, unassociated metadata in a single URI is, IMO, asking for trouble. If it is successful, it could cause administrative issues on the server (potentially many parties will need write access to a single file, for different purposes -- tricky when ordering matters for precedence), and if the file gets big, it will cause performance problems for some use cases.


* Chattiness -- the basic model for resource-specific metadata in hostmeta requires at least two requests: one to get the hostmeta document, and one to get the resource-specific metadata after interpolating the URI of interest into a template.
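
Concretely, the flow is something like the sketch below (illustrative only -- the element and attribute names reflect my reading of the draft's XRD usage, and urllib/ElementTree merely stand in for whatever a real client would use):

  # Rough sketch of the two-request flow, assuming a host-meta XRD that
  # carries a Link with a URI template, as the draft describes; the
  # details here are illustrative, not normative.
  import urllib.request
  import xml.etree.ElementTree as ET
  from urllib.parse import quote

  XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

  def resource_metadata(host, resource_uri):
      # Request 1: the host-wide document.
      with urllib.request.urlopen("http://%s/.well-known/host-meta" % host) as r:
          xrd = ET.fromstring(r.read())
      # Find a Link template (a real client would also check the link
      # relation, e.g. the "lrdd" rel) and interpolate the URI of interest.
      for link in xrd.findall(XRD_NS + "Link"):
          template = link.get("template")
          if template and "{uri}" in template:
              # Request 2: the resource-specific descriptor.
              url = template.replace("{uri}", quote(resource_uri, safe=""))
              with urllib.request.urlopen(url) as r:
                  return r.read()
      return None

  # Every additional resource of interest costs another round trip:
  #   resource_metadata("example.com", "http://example.com/some/page")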

For some use cases, this might be appropriate; however, for many others (most that I have encountered), it's far too chatty. Many use cases find the latency of one extra request unacceptable, much less two. Many use cases require fetching metadata for a number of distinct resources; in this model, that adds a request per resource.

I'd expect a general solution in this space to allow describing a "map" of a Web site and applying metadata to it in arbitrary ways, so that a client could fetch the document once and determine the metadata for any given resource by examining it. 
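
To be clear, the sketch below isn't from any spec -- the format is made up -- but it shows the shape of what I mean: one document, fetched once, mapping URI patterns to metadata, so that everything afterwards is a local lookup rather than another request.

  # Hypothetical "site map" metadata document -- an invented format,
  # shown only to illustrate the single-fetch model.
  import fnmatch

  site_map = [
      # (URI pattern, metadata); a real spec would define how entries
      # combine and which takes precedence.
      ("http://example.com/blog/*", {"author": "http://example.com/alice"}),
      ("http://example.com/*",      {"copyright": "http://example.com/terms"}),
  ]

  def metadata_for(uri):
      merged = {}
      for pattern, meta in site_map:
          if fnmatch.fnmatch(uri, pattern):
              merged.update(meta)
      return merged

  # One fetch of the map, then any number of local lookups:
  #   metadata_for("http://example.com/blog/2010/05/some-post")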


If hostmeta is designed for specific use cases and meets them well, that's great, but it shouldn't be sold as a general mechanism. So, I'm -1 on this going forward as a standards-track general mechanism. I wouldn't mind if it were Informational, or if it were Standards-Track but with a defined use case.

Apologies for giving this feedback so late in the process; I knew hostmeta was there, just didn't have time to step back and think about it.


Cheers,

--
Mark Nottingham   http://www.mnot.net/




