Re: [arch-d] deprecating Postel's principle- considered harmful

Hi.

Having read through this long thread, I want to add one
perspective, dating from very nearly the time when the
robustness principle was first enunciated, that complements
several of the comments that have been made, especially by Rich,
Joe, and Jari.

When one is designing and writing standards in almost any field
that involves significant complexity, there is an almost
inevitable choice between: 

-- trying to specify every possible detail so that an
implementer knows what to do in every case and for every design
choice, including what is supposed to be done about each variety
of deviation from the standard by others.

-- specifying the normal cases and general philosophy of the
design or protocol and trusting that, with a bit of guidance,
implementers who are reasonably intelligent and acting in good
faith will sort things out correctly.

We know a few things about the first option.  It causes the
standards development process to take much longer than the
second.  Because standards designers are not infallible, it is
prone to errors, errors that could make the standard completely
unworkable.  For at least some protocols (or devices), it may
not even be possible unless complete and deployed
implementations are available to be examined and analyzed,
preferably in combination, while the standard is being built.

The IETF, its predecessors, and the Internet community more
generally, have tended to pick the second option.  As someone
else pointed out, the three-stage standards process was intended
as a mechanism to be clear about what we did and didn't know and
what we were prepared to pin down and insist on, with Proposed
being close to "preliminary specification for testing and
evaluation" and an assumption that no enterprise in its right
mind would deploy an implementation based on a Proposed Standard
into large-scale production.  In the context of that choice, the
robustness principle is a guideline about how to think about and
deal with the unanticipated cases and the cases that were not
considered important enough to specify in detail, enough detail
to cover "what happens if someone else sends bad stuff".  

It is not robust (sic) against grossly sloppy behavior,
stupidity, or fools, much less damn fools.  Any implementer who
effectively says "I know I sent you garbage, but the robustness
principle obligates you to accept it and cope" has missed the
point and deserves, at least, community derision.  Those aren't
problems with the principle itself, much less grounds for
deprecating it.  They might be arguments for specifying a bit
more (and, again, that is where the three-stage, or at least the
two-stage, standards process was supposed to help).

By contrast, if one does want to deprecate the principle,
realize what that is likely to do: it increases the requirement
to specify _everything_ exactly and, given that we now almost
never use even the two-stage version of the standards process,
the requirement to get everything right the first time.  People
are already complaining about how long it takes the contemporary
IETF to develop and finish a standard (in some cases taking work
elsewhere as a result); consider the impact of an increase in
that time.

Certainly we've seen enough abuses in the name of the robustness
principle that some clarification, especially about
applicability and context, is in order.  But those who want to
throw it away should start working on the IETF's epitaph.

best,
     john





