Test suites are often quite useful.
But I cannot imagine holding up an I-D from publication while we work
out a test suite.
Demonstrably, there are cases where we have had trouble because people
took divergent paths based on liberality in interpretation.
Equally clearly, the liberal acceptance approach has served us well in
many cases.
Unclear specifications (unclear RFCs) are a problem. Even Jon's
original formulation was not, from what I saw, intended to permit sloppy
writing. Or sloppy implementation.
There are indeed contexts where an application calling attention to a
problem is very useful. Silently ignoring things that indicate trouble
is usually a mistake (although not always).
I would be very unhappy to see us conclude, from cases where we were
sloppy, that we should tell everyone to have their implementations
break at the slightest error.
Yours,
Joel
On 6/14/17 3:56 PM, heasley wrote:
Tue, Jun 13, 2017 at 03:03:07PM -0700, Joe Touch:
Hi, all,
...
Title : The Harmful Consequences of Postel's Maxim
https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
I completely agree with John Klensin that a test suite defines the
protocol standard (warts and all).
This may be so, even if a pair of implementations were required to
accompany it, but is that wholly negative? The test suite is not stuck in
time; it can evolve to cover things its authors had not anticipated, to
fix its own bugs, or to track the evolution of the protocol itself. It is
also more likely to catch bugs and protocol design flaws than
interoperability testing alone.
At the very least, an open test suite allows everyone to test against a
known baseline before they test interoperation with other implementations.
For example, a rigid test suite that fails when a reserved field does not
hold its prescribed value would be an asset for future development that
allocates that field.
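As a minimal sketch of what such a check might look like (the field
layout, names, and the choice of Python here are my own illustration,
not anything from the draft or any real protocol):

    import struct

    def check_reserved(packet: bytes) -> None:
        # Hypothetical layout: 1-byte version, 1-byte reserved, 2-byte length.
        version, reserved, length = struct.unpack("!BBH", packet[:4])
        # A strict suite rejects any message whose reserved field is nonzero,
        # so a future allocation of that field cannot be silently trampled.
        if reserved != 0:
            raise AssertionError(f"reserved field must be 0, got {reserved:#x}")

    check_reserved(b"\x01\x00\x00\x10")    # passes
    # check_reserved(b"\x01\x7f\x00\x10")  # would fail, as intended

An implementation that passes this today is exactly the one that will
interoperate cleanly when the field is later given a meaning.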
Certainly the JSON problems mentioned would have benefited from a test
suite combined with interoperability testing. Clearly, peer review and
interoperability testing alone are insufficient.
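To make the JSON point concrete (assuming duplicate object names are
among the problems meant; RFC 7159 only says names "SHOULD" be unique,
and parsers diverge on what a duplicate means):

    import json

    doc = '{"amount": 1, "amount": 100}'
    # Python's json module silently keeps the last duplicate key...
    print(json.loads(doc))  # {'amount': 100}
    # ...while other parsers keep the first value, or reject the document
    # outright. A shared test suite would force a single answer here.

That kind of divergence is exactly what slips past peer review and even
pairwise interop testing, but not past a common suite.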