--On Thursday, 17 May, 2007 12:42 -0400 Sam Hartman <hartmans-ietf@xxxxxxx> wrote:

> I don't see why the standardized definition is the obvious
> right place to fix things. I thought we were committed to
> running code. To me, one implication of that commitment is
> that sometimes the right fix is for the spec to change rather
> than the implementations.
>
> In a terminology conflict, this often involves moving away
> from the term that has two conflicting uses to terms that are
> more clear.
>
> Ultimately cases like this should be evaluated based on
> whether the final result is more clear overall.

Sam,

Two observations; I hope you don't think they are contradictory.

(1) We regularly get ourselves into intellectual and procedural difficulties by treating specifications about how protocol specifications are written as if they were protocol specifications. When we try to avoid that, we get ourselves into worse problems. Using rules that are more or less arbitrary, we make some of these documents Proposed Standards and then try to progress them, we make others into BCPs and, now, we make still others into IONs.

If we are going to standardize a definitional requirement or method -- whether it is ABNF or IPR boilerplate or something else -- we need to get it right as a self-contained definition and then live with it. We should certainly revise and replace it if it turns out to be unworkable (as has happened with the IPR work) or if the definition turns out to be inadequate to permit an unambiguous interpretation (that issue spills over into my second observation, below). But, once other specifications start to depend on the definitions that are there, and show those definitions to be adequate, we should not be talking about deprecating definitions unless we are prepared to say "that was wrong, we need to start over (even though some of the older material may still be useful)". Again, please note the similarity to the IPR work.
(2) If we pretend that the ABNF metalanguage and definitions are actually a protocol specification, then we need to evaluate it as one. Then we have the following criteria (which we usually don't state quite this precisely):

(i) Is the definition good enough that interoperable implementations are possible?

(ii) Do people care enough about the construct to actually use it in ways that show it is useful?

Now, neither of those rules prevents non-conforming "implementations". We may notice that those exist, but our concern is only about implementations that appear to conform to the spec and are (or are not) interoperable. If non-conforming implementations happen by accident because the text isn't clear enough, we try to clarify the text. But we don't say "well, there are non-conforming implementations, so the spec is broken". That would make no sense at all, at least to me.

The answer to (i) appears to be "yes": there are lots of conforming cases. And the answer to (ii) is, as Dave has pointed out repeatedly, "about 30 years worth".

Is this construction dangerous if used in inappropriate contexts? Sure. Does that justify a warning note to the unwary? Probably. Is it possible to implement other things and call them by the same name (i.e., create a non-conforming implementation)? Of course. Should that invalidate the definition? Not if we want to have anything left if the principle were applied broadly.

    john

_______________________________________________
Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf