Re: I-D ACTION:draft-klensin-iana-reg-policy-00.txt

Sigh. We keep hitting the same fundamental philosophical disagreement here. Your wording may, however, leave room for compromise:

I think everyone agrees that clear documentation must exist, and be stable. If not an RFC, a standards document from another organization, etc.

For limited option spaces, "stewardship" is needed, but what does this mean? The document(s) being revised/created should give some guidance to the community on how to use "percentage of space free" and/or "rate of consumption" to make reasonably consistent choices.

Then there is "Technical Review"; I actually agree that a level of such review is required, but it must be LIMITED (unless, of course, the protocol itself is being IETF reviewed). In a case like the one that triggered this debate, the review needs to include only:

1. If the option appears in a packet, will there be any possible negative impact on a network element that has no code to process the option?

2. If option space is limited, does the documentation suggest that deployment will actually take place? (Let's not assign limited codes to high school science projects.)

That is it! I fundamentally disagree with "There's every reason that the same standard should apply to specifications developed outside the IETF exactly as to IETF documents" for the simple reason that it is non-enforceable. Beyond stewardship of limited code point space, I see no justification for the IETF having veto power over standards being developed to use public standards like IP. The fact that such independent developments are possible is at the heart of the success of the Internet.
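Point 1 above works because IPv4 options are self-describing (RFC 791): apart from the two single-byte options, every option carries a length octet, so an element with no code for a particular option number can still step over it. A minimal sketch of that skip logic (hypothetical, not any real stack's code; the option number 25 and the data bytes are made up for illustration):

```python
def walk_options(options: bytes):
    """Yield (option_number, option_data) for each IPv4 option,
    skipping options we have no code for via their length octet."""
    i = 0
    while i < len(options):
        opt_type = options[i]
        number = opt_type & 0x1F           # low 5 bits: option number
        if opt_type == 0:                  # End of Option List
            break
        if opt_type == 1:                  # No-Operation: single byte
            yield number, b""
            i += 1
            continue
        length = options[i + 1]            # length covers type + len + data
        yield number, options[i + 2 : i + length]
        i += length                        # skip the whole option, known or not

# Made-up option (number 25, 4 bytes total), then NOP, then End of Option List:
opts = bytes([0x99, 0x04, 0xAB, 0xCD, 0x01, 0x00])
print(list(walk_options(opts)))
```

Whether a given element actually does this gracefully is exactly what the limited review should check.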

Do we really think there is or will be a rush to standardize IETF-like protocols in TIA, ITU, etc.? I don't think so unless the IETF really falls on its face as far as cranking out well-engineered solutions.

No, the little bit of "competition" will come in cases like this one, where a protocol is designed for a corner case in an organization with expertise for that corner case. Again, as has happened here, the IETF probably does not have the right set of expertise to fully review the protocol, and it should not feel the need to do so.

It seems that you want a review of "is this protocol safe to deploy on the Internet?" I can see the reasoning behind that, but I think the code point assignment review is the wrong place. Assume for a moment that the outside request passes all the criteria above (well documented, no shortage, code point can be safely ignored by non-participating elements). Let's further assume that we (IETF, IESG, ...) "don't like" the protocol, or have insufficient expertise/time to evaluate it. You are in fact suggesting that the code point assignment be denied until the IETF can find the time and be convinced that the protocol is "good". In this case that does not make sense, because it does very little to keep the protocol from being deployed. It will just get re-engineered to not need a code point. At that time the purists in the IETF will congratulate themselves for having fended off the dragon, while the Internet operators have to cope with a "stealth" version of the very protocol the purists tried to stop, instead of being able to filter on a known option number.

What if, in the case above, the code gets assigned along with publication of an RFC that says the code in question "belongs" to another organization and represents a non-IETF protocol that operators should filter unless they understand the implications of carrying these packets? There could in fact be one such RFC that every code assignment of this type points to. Now the Internet is actually safer, and there is an incentive for authors of protocols intended for wider use: they will have to run the entire protocol through the IETF to get off the "black-list". In fact, I am suggesting (assuming no real code point shortage) that it is better to have a visible list of "assigned, but should not be generally used" codes than to assume that a non-assignment decision will keep the application off the Internet.
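The operator-side benefit is that a visible list turns filtering into a simple lookup on a known option number, instead of heuristics against a stealth variant. A sketch, where the registry contents (option number 25, the annotation text) are entirely made up for illustration:

```python
# Hypothetical operator copy of a visible "assigned, but should not be
# generally used" list, keyed by IPv4 option number.
DO_NOT_CARRY = {25: "non-IETF protocol; filter unless implications understood"}

def filter_decision(option_numbers):
    """Return ('drop', reason) if any option in the packet is on the
    do-not-carry list, else ('forward', None)."""
    for n in option_numbers:
        if n in DO_NOT_CARRY:
            return ("drop", DO_NOT_CARRY[n])
    return ("forward", None)

print(filter_decision([1, 25]))   # carries a listed option
print(filter_decision([1, 7]))    # nothing listed
```

With no assignment at all, the same operator has no number to match on and the re-engineered protocol sails through.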


On Jul 13, 2005, at 10:57, Brian E Carpenter wrote:

I'm on the side of fairly rigorous review in these constrained spaces.
With the experience of the Larry Roberts request, I actually think
RFC 2780 is too lax - it would be better if IETF Review (in rfc2434bis
terminology) was required for option numbers.

Contrary to what I understand the present draft to mean, I think
that for some very critical namespaces, such as IP header fields,
that may have fundamental impact on packet flows, a technical
review of the proposed usage of the parameter is *always*
required before an assignment, regardless of scarcity.

Clarity of definition is *not* enough to justify a registration;
we also need to agree as a community that the proposed usage will
not be a cause of collateral damage to the Internet. There's every
reason that the same standard should apply to specifications
developed outside the IETF exactly as to IETF documents.

Hans Kruse, Associate Professor
J. Warren McClure School of Communication Systems Management
Adjunct Associate Professor of Electrical Engineering and Computer Science
292 Lindley Hall, Ohio University, Athens, OH, 45701
740-593-4891 voice, 740-593-4889 fax

_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf
