Keith,

It seems that your objections to this proposal are based on a very different view of what constitutes a "resource" from the one understood in circles where URIs are commonly used.  Some edge cases may be a matter for debate, but a good working approximation is "anything that can be identified by a URI".

Spurred by XML and related technologies (which I assert are far more than mere "fashion"), we are seeing URIs used for a wide range of purposes that are not constrained by a requirement for dereferencing.  The use of URIs for identifying arbitrary things is now a fact of life, and in some technical domains is proving to be extremely useful.  You claim "harm", but I recognize no such harm.  (I don't claim that dereferencing isn't often desirable, only that it isn't always necessary.  And the DDDS work provides a framework for dereferencing URNs that doesn't critically depend on W3C-style management of http-space.)

Having different syntactic contexts in which names are used will inevitably lead to different syntactic name forms.  I submit that the real challenge here is not to prevent the use of varying syntax, but to lock the various syntactic forms to a common semantic definition -- in this case, providing a way to create syntactic URI forms that can be bound to protocol semantics in a way that inhibits semantic drift between the different forms.

One of the motivating factors in this work (for me, at least, and I think for others) has been to draw together some of the divergent strands of thinking taking place in the IETF and W3C.  W3C are fundamentally set on a course of using URIs as a generic space of identifiers.  IETF have a number of well-established protocols that use registries to allocate names.  Neither of these is going to change in the foreseeable future.  So do we accept a Balkanization of Internet standards efforts, or do we try to draw them together?

A particular case in point is content negotiation.  The IETF have prepared a specification for describing media features that uses a traditional form of IANA registry to bind names to features.  In parallel with this, W3C have prepared a specification with some similar goals, but which uses URIs to represent media features and relies on the normal URI allocation framework to ensure the minting of unique names as and when needed.  (I have some reservations about this, but that can't change what is actually happening.)

This URN namespace proposal will provide a way to incorporate the IETF feature registry directly into the W3C work, in a way that is traceable through IETF specifications.  Without this, I predict that the parties looking to use the W3C work (notably, mobile phone companies) will simply go away and invent their own set of media features, without any clear relationship to the IETF features.  Creating a way to avoid that seems like a big win to me.

I also observe that IETF and W3C operate against somewhat differing background assumptions: the IETF focus on wire protocols means that the context in which a PDU is processed is well understood, pretty much by definition of the protocol.  We have protocol rendezvous mechanisms and state machines and synchronization techniques that reduce the amount of explicit information that needs to be exchanged between parties -- this is all part of efficient protocol design.
The work of W3C (and other designers working "over the stack") often depends on dispensing with such contextual assumptions, and in such cases the global (context-free) qualities of URIs are extremely valuable.  If these layers were truly isolated from each other, this debate would probably never arise.  But there is genuine leakage: client preferences depend on underlying hardware capabilities; trust decisions may incorporate protocol addressing and other information; and so on.  This proposal to allow IETF protocol parameter identifiers to be embedded in URI space is one way of controlling information in these cross-layer interactions.

Another difference in assumptions between wire protocols and application data formats: protocols are very binary -- either one is using a particular protocol or one is not.  The years-long Internet Fax debates about adapting email for real-time image transmission made that very clear.  It is not permissible to simply assume that a communicating party understands anything beyond the standardized protocol elements.  And there is a very clear distinction in protocol specifications between what is standardized and what is private extension.

This distinction is not so clear in application data formats, and while there may be a core of standardized data elements, it is often desirable for communities of users (or application designers) to agree some common extensions -- this is typical of how XML application formats are deployed.  Using URIs as identifiers (e.g., in the case of XML, as namespace identifiers) allows for more flexible deployment of formats, avoiding the problems of "X-headers" that have for so long been a bane of IETF application standardization/extension efforts.

In summary: URIs *will* be used to identify protocol parameters.  The IETF cannot prevent that.  What the IETF can do, by supporting a particular form of such use, is try to ensure that such use remains bound by a clear, authoritative chain of specifications to the IETF specification of what such parameters mean.  The harm that comes from not doing this, in my view, is that we end up with a multiplicity of URIs that mean nearly, but not quite, the same thing as an IETF protocol parameter.  That outcome, I submit, cannot be good for longer-term interoperability between IETF and other organizations' specifications.

Responding to some specific points in your message:

At 10:01 PM 7/2/02 -0400, Keith Moore wrote:
>If we're going to do anything like this at all (and I realize that
>XML advocates really want something like this), we should:
>
>a) at least define what it means to resolve such URNs, and ideally
>   set up an initial resolution system for them.

(i) What it means to resolve a URI will depend on what that URI denotes.
(ii) Done.  See DDDS.

>b) limit the protocol parameters to which it applies to those
>   which are justified by some use case, rather than applying
>   them to all protocol parameters.

Done.  The proposal requires RFC publication for new sub-namespaces.

>c) make it clear that it is NOT acceptable to use those URNs
>   as substitutes for the actual parameter values specified
>   in a protocol specification.

Done.  (That, surely, is a matter for the protocol specification concerned: I'm not aware of any IETF specifications that allow URIs where registry values are expected.)

>d) embed NO visible structure in the URNs - just assign each
>   parameter value a sequence number.  people who want to use
>   those URNs in XML or whatever would need to look them up at IANA's
>   web site.

I disagree.
This requirement actively works against one of the motivations for using URIs in application data formats: that there be a scalable framework for different organizations and persons to mint their own identifiers.

To use an identifier, one must:

(i) have a framework for assigning identifier values, in such a way that it is possible by some means for a human to locate its defining specification.  I can't see how to do this without exploiting a visible syntactic structure in the name.

(ii) have a framework for actually using the identifier in an application: in this case, I agree that the identifier should generally be treated as opaque.

Also, I think (d) contradicts your goal (a): I cannot conceive of any scalable resolution mechanism that does not in some sense depend on syntactic decomposition of the name.

#g

-------------------
Graham Klyne  <GK@NineByNine.org>
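p.s.  To make the point about visible structure concrete, here is a rough sketch of the kind of usage I have in mind.  The namespace URN shown is purely illustrative -- I am not presuming the exact form a registered sub-namespace would take -- while 'pix-x' and 'pix-y' are the IETF-registered media feature tags from RFC 2534:

    <!-- Illustrative only: the namespace URN below is a made-up
         example of the kind of name the proposed sub-namespace
         might yield; it is not a registered value. -->
    <capabilities xmlns:feat="urn:ietf:params:media-feature:">
      <feat:pix-x>640</feat:pix-x>
      <feat:pix-y>480</feat:pix-y>
    </capabilities>

The same visible structure that lets an application designer cite the IANA registry here is what would let a resolver decompose the name (urn : ietf : params : media-feature : pix-x) and, say via DDDS rules keyed on the "ietf" namespace identifier, lead a human or a program back to the controlling IETF specification.  An opaque sequence number would support neither.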