Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

On 6/13/2017 7:28 AM, heasley wrote:
> Tue, Jun 13, 2017 at 05:10:41AM -0400, John C Klensin:
>> --On Tuesday, June 13, 2017 03:38 +0000 heasley
>> <heas@xxxxxxxxxxxxx> wrote:
>>
>>> Mon, Jun 12, 2017 at 06:29:30AM -0700,
>>> internet-drafts@xxxxxxxx:
>>>>         Title           : The Harmful Consequences of
>>>>         Postel's Maxim
>>>> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
>>> Perhaps instead of requiring two implementations for a
>>> protocol draft to proceed to rfc, it should first or also have
>>> a test suite that
>>>
>>>         ... fails noisily in response to bad or undefined
>>> inputs.
>>>
>>> Having a community-developed test suite for any protocol would
>>> be a great asset.

I am not a great fan of the test suite approach. I saw it used in OSI
protocols back in the day, and there were multiple issues -- including,
for example, implementations that special-cased the test suite.
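For concreteness, heasley's criterion that a suite "fails noisily in response to bad or undefined inputs" might look like the toy sketch below. The parser and test names are hypothetical, invented for illustration; no real protocol or suite is being quoted.

```python
# Toy length-prefixed field parser plus a conformance check that insists
# on a *noisy* failure for malformed input (hypothetical example).

def parse_length_prefixed(data: bytes) -> bytes:
    """Parse a one-byte length prefix followed by exactly that many bytes."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:]
    if len(payload) != length:
        # A tolerant implementation might silently truncate or pad here;
        # the test suite below treats that as a conformance failure.
        raise ValueError(f"declared length {length}, got {len(payload)} bytes")
    return payload

def test_rejects_overlong_declaration() -> bool:
    """Pass only if the parser raises on input claiming 5 bytes but carrying 2."""
    try:
        parse_length_prefixed(b"\x05ab")
    except ValueError:
        return True
    return False

assert parse_length_prefixed(b"\x02ab") == b"ab"
assert test_rejects_overlong_declaration()
```

The special-casing risk mentioned above applies directly: an implementation can detect the suite's fixed test vectors and raise only for those, which is why generated or randomized inputs tend to be harder to game.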

To come back to Martin's draft, I think two points are missing. One
about change, and one about grease. First, the part about change. The
Postel Principle was quite adequate in the 80's and 90's, when the
priority was to interconnect multiple systems and build the Internet.
Writing protocols can be hard, especially on machines with limited
resources. Many systems were gateways, interfacing for example Internet
mail with preexisting mainframe systems, and the protocol
implementations could not hide the quirkiness of the system that they
were interfacing. The principle was a trade-off. It made development and
interoperability easier, by tolerating some amount of non-conformance.
As the draft points out, it also tends to make evolution somewhat harder
-- although that probably cuts both ways, as overly rigid
implementations would also be ossified. Martin's draft advocates a
different trade-off. It would be nice to understand under which
circumstances the different trade-offs make sense.

Then there is grease, or greasing, which is a somewhat recent
development. The idea is to have some implementations forcefully
exercise the extension points in the protocol, which will trigger a
failure if the peer did not properly implement the negotiation of these
extensions, "grease the joints" in a way. That's kind of cool, but it
can only be implemented by important players. If an implementation has
0.5% of the market, it can try greasing all it wants, but the big
players might just as well ignore it, and the virtuous greasers will
find themselves cut off. On the other hand, if a big player does it, the
new implementations had better conform. Which means that greasing is
hard to distinguish from old-fashioned "conformity with the big
implementations", which might have some unwanted consequences. Should it
be discussed?
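A minimal sketch of the greasing idea may help. The extension codepoints below are modeled loosely on the reserved TLS GREASE values (0x0A0A, 0x1A1A, ...), but the function names and the set of "known" extensions are invented for illustration:

```python
# Sketch of greasing an extension list: the client mixes a reserved,
# never-assigned codepoint among its real extensions. A conforming peer
# ignores unknown codepoints; a brittle peer fails as soon as greasing
# is deployed. Values loosely follow TLS GREASE; names are hypothetical.
import random

GREASE_VALUES = [0x0A0A, 0x1A1A, 0x2A2A, 0x3A3A]  # reserved, never real

def client_extensions(real_extensions):
    """Insert one random reserved value among the real extension codepoints."""
    exts = list(real_extensions)
    exts.insert(random.randrange(len(exts) + 1), random.choice(GREASE_VALUES))
    return exts

KNOWN_EXTENSIONS = {0x0000, 0x000A, 0x002B}  # codepoints this server supports

def tolerant_server(exts):
    # Correct negotiation: act on known codepoints, ignore the rest.
    return [e for e in exts if e in KNOWN_EXTENSIONS]

def brittle_server(exts):
    # The bug greasing is designed to flush out: reject anything unknown.
    for e in exts:
        if e not in KNOWN_EXTENSIONS:
            raise ValueError(f"unknown extension 0x{e:04x}")
    return exts

offered = client_extensions([0x0000, 0x002B])
assert set(tolerant_server(offered)) == {0x0000, 0x002B}
try:
    brittle_server(offered)
except ValueError:
    pass  # the brittle peer breaks against any greasing client
```

The market-share point above falls out of the sketch: the brittle server only gets fixed if enough traffic comes from greasing clients that rejecting them is costly.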

-- Christian Huitema





