Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

--On Thursday, June 15, 2017 10:44 +1200 Brian E Carpenter
<brian.e.carpenter@xxxxxxxxx> wrote:

> On 15/06/2017 08:20, Joel M. Halpern wrote:
> ...
>> I would be very unhappy to see us take the lesson from cases
>> where we  were sloppy to be that we should tell everyone to
>> have their  implementations break at the slightest error.
> 
> Indeed. We need implementations to be as robust as possible.
> That means careful thought, both in the specification and in
> every implementation, about how to handle malformed incoming
> messages. There's no single correct answer, as I am certain
> Jon would have agreed. Some types of malformation should
> simply be ignored, because the rest of the message is valid.
> Others should cause the message to be discarded, or should
> cause an error response to be sent back, or should cause the
> error to be logged or reported to the user. There is no single
> correct solution.
> 
> Clearly the Postel principle was intended as general guidance.
> 
> Looking at the core of the draft:
> 
>       Protocol designs and implementations should fail noisily
>       in response to bad or undefined inputs.
> 
> that seems a very reasonable principle for *prototype* and
> *experimental* implementations, and a lousy one for production
> code, where the response to malformed messages should be much
> more nuanced; and the users will prefer the Postel principle
> as a fallback.

+1 and exactly right.
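
To make that concrete, here is a rough sketch, in Python, of the
kind of differentiated handling Brian describes.  The field names
and per-field policies are invented purely for illustration; they
are not taken from the draft or from any actual protocol.

    # Hypothetical sketch: differentiated handling of malformed
    # incoming messages.  Field names and policies are invented
    # for illustration only.
    import logging

    log = logging.getLogger("example-proto")

    KNOWN_FIELDS = {"type", "seq", "payload"}

    def handle_message(msg):
        # Unknown fields: ignore them; the rest of the message is valid.
        known = {k: v for k, v in msg.items() if k in KNOWN_FIELDS}

        # Missing mandatory fields: discard and send an error response.
        if "type" not in known or "seq" not in known:
            log.warning("discarding message with missing fields: %r", msg)
            return {"type": "error", "reason": "missing mandatory field"}

        # Malformed but non-essential field: log it, repair locally,
        # and keep going.
        if not isinstance(known.get("payload", b""), (bytes, bytearray)):
            log.info("ignoring malformed payload in seq %r", known["seq"])
            known["payload"] = b""

        # Otherwise, process normally.
        return {"type": "ack", "seq": known["seq"]}

The point is only that "ignore", "discard with an error response",
and "log and repair" are all legitimate answers, chosen field by
field and case by case, not one blanket rule.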

It is also a good principle for independently-developed test
suites.  Indeed, I agree with 

--On Wednesday, June 14, 2017 22:27 +0000 heasley
<heas@xxxxxxxxxxxxx> wrote:

>> I would be very unhappy to see us take the lesson from cases
>> where we  were sloppy to be that we should tell everyone to
>> have their  implementations break at the slightest error.
> 
> That is the suggestion of the draft; i suggest only that a
> test suite should follow this - be devilishly rude - about the
> slightest error.

But I don't see that as a contradiction: test suites that are
developed by third parties reading standards, deciding what
should be tested, and being brutally narrow about requirements
may be very useful in developing good implementations.  The
careful reading of standards that such development encourages
can also, if used as feedback, improve the standards.  The
problem arises when test suites (and/or certifications) are
produced or endorsed by the standards developer.  If they are
endorsed in that way, they become alternate statements of the
standards themselves, and Joe's observation:

> Not necessarily, but it does negate the need for a
> specification.
> 
> "A designer with one set of requirements always knows what to
> follow; a designer with two is never sure" (a variant of "a
> person with one watch always knows what time it is...")
> 
> If you have a test suite, that singularly defines the protocol
> to the exclusion of the spec, otherwise you don't have a
> complete suite.

applies.  If they are independently developed, they are just
like another implementation: two implementations either
interoperate or they do not; an implementation may conform to
the test suite or not, but the standard is the standard and the
authority.  It is possible for the test suite simply to be in
error, and for things to conform to the test suite but not to
the standard.  It is having two sets of requirements that is
the problem.

The I-D notwithstanding, very little of that has anything to do
with the robustness principle, which, as Brian suggests, is
intended for production implementations and much less so for
experiments, demonstrations, prototypes, reference
implementations, etc.   Certainly the robustness principle has
been misused by sloppy or lazy implementers to claim that they
can produce and send any sort of garbage they like and that
recipients are responsible for figuring out what was intended,
but that was never the intent and (almost) everyone knows that,
including most of the sloppy/lazy (or arrogant) implementers,
who would probably behave that way whether or not Postel had
ever stated that principle [1].  I suggest that one of the
reasons the Internet has been successful is precisely because
of sensible application of the robustness principle.   Not only
do things mostly work, or at least produce sensible error
messages or warnings rather than blowing up, in the presence of
small deviations or errors, but (in recent years, at least in
theory) it avoids our having to spend extra years in standards
development while we analyze every possible error and edge case
and specify what should happen.  Instead, when appropriate, we
get to say "this is the conforming behavior; if you don't
conform, the standard has nothing to say to you, but you should
not depend on its working".  The robustness principle is
important guidance for those edge cases.

That assumes a higher level of thinking and responsibility on
the part of implementers than may be justified in the current
world, but I suggest that the fix is not to abandon the
robustness principle.  Personally, I'd like to see more
litigation or other sorts of negative reinforcement against
sloppy implementations and implementers that cause damage.
YMMV, but there is lots of evidence that standards that try to
cover and specify every case are not the solution (see below).

Finally, because I don't want to write a lot of separate,
slightly-connected, notes...

--On Tuesday, June 13, 2017 14:28 +0000 heasley
<heas@xxxxxxxxxxxxx> wrote:

>> Actually, a number of standards bodies have found, to their
>> chagrin, that test suites that are developed and/or certified
>> by the standards body are a terrible idea.  The problem is
>> that they become the real standard, substituting "passes the
>> test suite" for "conformance to the standard" or the IETF's
>> long
> 
> reference?

Sorry, but I don't have time to do the research to dig that
material out and some of what I found quickly has "members only"
availability.   I speak from a "been there, done that"
perspective, with a background that includes standards
development and oversight body leadership roles in
ANSI-accredited SDOs, ANSI, and ISO.  Most of my first-hand
experience on that side of
things involved programming languages (including observing a
certain one named after a famous early woman programmer with a
three-letter first name that came, in practice, to be defined
entirely by the test/conformance suite) and, closer to IETF's
interests, some OSI-related protocols in which the effort to
specify every case led to specifications and profiles that either
were never finished or that became so cumbersome as to be
unimplementable.  I won't claim that is what killed OSI and let
the Internet succeed instead (I think there is a rather long
list of contributors), but it was one element.

best,
    john

[1] I note that we have had some worked examples of software
vendors who have taken the position that they are so important,
or that their ideas have achieved sufficient perfection, that
their systems don't need to conform to standards and that
everyone else, including the standards bodies, just needs to
conform to them and whatever they produce.  That behavior can't
be blamed on Postel either, nor can the consequences if the IETF
decides to go along and adjust the standards.

> 
>    Brian
> 






