Hi Carsten,
At 11:46 09-08-2011, Carsten Bormann wrote:
For another perspective on this, see section 2.7 "The fallacy of
perfection" in "Garrulity and Fluff".
(http://www.iab.org/wp-content/IAB-uploads/2011/04/Bormann.pdf)
That's an interesting document. From Section 2.1:
"The worst source of complexity, however, is the need to appease
stakeholders."
As the draft points out, the cost of achieving consensus is accepting
changes even when they only serve to make the protocol more complex.
Some of these changes are never implemented because they are optional,
or else they turn out to be useful for only one implementation.
Ignoring MUSTs (Section 2.3) invites long debates. Instead of aligning
the RFC with the implementations in the wild, implementers are
pressured to adhere to the RFC.
It is quite a feat to make a protocol future-proof (Section
2.5). Unfortunately, the process pushes specifications in that
direction due to the misguided belief that a perfect protocol is an
attainable goal. The higher the initial quality of a specification,
the less flexibility remains: the committee becomes more reluctant to
accept changes at a later stage.
Section 2.7 discusses the tendency to ignore aspects of reality that
are unpleasant. On an unrelated note, I would like to congratulate
NAT operators for the universal deployment of NAT. :-) The reality is
that, whatever the IETF thinks of the principles of Internet
architecture, operators will violate those principles when they
conflict with their core value, which is making money. In simple
terms, a protocol designed to do almost everything won't gain
traction if the input from operators is not taken into account.
Quoting Section 2.7:
"In the IETF, the desire for high quality often leads to a struggle
at the end to convince the IESG to accept the result. See "appeasing
stakeholders", "big design up front", "check-mark requirement" above
for some of the results. It may be better for a protocol to leave a
flank wide open, and look for the actual requirements to be fulfilled
in its actual use, than to design in a half-hearted solution appeasing
the blocking IESG member that then quickly becomes a piece of fluff
when it is replaced (or, worse, over-painted) by the real thing. (The
stick of keeping a protocol "experimental" instead of "standards track"
is then often used to push those "solutions" through.)"
As much as I agree with the above, it is inconceivable that
draft-bormann-core-6lowpan-fluff-minus would be considered for
publication in the IETF Stream, as such an uncomfortable truth might
be deemed unacceptable.
Five years ago, a BCP was published to describe the best current
practice for a widely deployed Experimental protocol. The working
group "came up with modifications to the protocol that the WG thought
made it better but that implementors didn't see any reasons to deploy".
This thread was initially about DKIM signatures now being applied to
IETF email. Some people from the IETF sausage factory are aware that
DKIM is broken; i.e., DKIM signatures can fail to verify when a
message goes through a mailing list, because the list typically
rewrites the Subject header or appends a footer to the body,
invalidating the hashes the signature covers. Some people might call
that a flaw, others might say that it is by design. The point is that
it is not possible to address all cases. As Nathaniel Borenstein put
it, can we accept the inevitability of a flawed process that lets a
few bugs get through?
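
To make that failure mode concrete, here is a minimal Python sketch
(an illustration under simplified assumptions, not a real DKIM
implementation; the message text and footer are made up) of how a
footer appended by a list changes the body hash that the bh= tag of a
DKIM-Signature header commits to:

  import hashlib

  def simple_body_hash(body):
      # Roughly DKIM's "simple" body canonicalization (RFC 6376):
      # normalize line endings to CRLF and reduce any trailing empty
      # lines to a single CRLF before hashing.
      body = body.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
      while body.endswith(b"\r\n\r\n"):
          body = body[:-2]
      return hashlib.sha256(body).hexdigest()

  original = b"Hello,\r\nThe bh= tag commits to this body.\r\n"
  footer = (b"_______________________________________________\r\n"
            b"Ietf mailing list\r\n")

  print(simple_body_hash(original))           # digest the signer put in bh=
  print(simple_body_hash(original + footer))  # digest the verifier computes
  # The digests differ, so verification fails even though the
  # author's own text is untouched.

A list that rewrites the Subject header breaks the signed header hash
in exactly the same way; the verifier has no way to tell a benign
list modification from tampering.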
These multi-year "big design up front" efforts favor document quality
at the expense of timeliness. The longer the design effort takes, the
smaller the incremental benefit.
Regards,
-sm
_______________________________________________
Ietf mailing list
Ietf@xxxxxxxx
https://www.ietf.org/mailman/listinfo/ietf