Re: [arch-d] deprecating Postel's principle - considered harmful

The message below (including the bits from Jari, John, and Brian) is excellent and thoughtful and I think really advances the conversation. I don’t think we should throw out the current standards process but you have put your finger on something.

--aaron

On 8 May 2019, at 23:25, Martin Thomson wrote:

On Wed, May 8, 2019, at 21:38, Jari Arkko wrote:

I wish there was some other option between the silent obedience and the
refusal to talk. But I don’t know what it could be.

I'm replying to Jari here because this pithy bit, taken out of context and read after John's well-considered response, grabbed me. I've known this exact feeling often, and it's deeply frustrating.

On Wed, May 8, 2019, at 23:10, John C Klensin wrote:

[...] there is an almost inevitable choice between:

-- trying to specify every possible detail so that an
implementer knows what to do in every case and for every design
choice, including what is supposed to be done about each variety
of deviation from the standard by others.

-- specifying the normal cases and general philosophy of the
design or protocol and trust that, with a bit of guidance,
implementers who are reasonably intelligent and acting in good
faith will sort things out correctly.

The choice John presents here is even more fundamental. Like many others who were brought up on the IETF flavour of specification, I greatly prefer it. I find WhatWG and W3C specifications incomprehensible for all the reasons Henry articulated. They are very much in the first category. Unquestionably thorough in detail, but obstinately inhospitable to the process of extracting higher meaning. I would be the last person to advocate for following that path.

(I'm going to catch heat for saying that, so let me qualify it a little. The web platform doesn't operate in quite the same way as the IETF and their choice of specification mode, abhorrent as I find it, does work for that community. They had to deal with something of a post-robustness-principle technical debt the likes of which many people would have walked away from. What you see is the result of dedicated, meticulous work that has restored functioning interoperability to that world. The outcome is uneven, as might be expected, but it functions quite effectively. Now more so due to the use of a shared test infrastructure that we should be envious of.)

In my opinion, the mistake here is presenting this as a binary choice. The goal of teaching implementers the protocol remains - for me - the most important function of a specification. Existing implementations don't need a spec, except to argue over when there are disagreements. A spec is most helpful in the creation of new implementations. The state of web specifications might be an indication that the Web has given up on that (there it is again; Web people, you have my email address :), but I believe that to be the most important function of IETF specifications.

The trap that leads to the binary choice is - again - in the assumption that specifications are immutable and that others are unwilling to amend their errant ways. But we all come to the IETF with the idea in mind that we can work together on improving things, and surely the proper functioning of protocols that are important to us (and our businesses) is high on our list of priorities. If I thought that RFC 2246 were set in stone, I'd have stayed home. If I thought that other implementers were not prepared to consider changes to their implementations in response to a consensus decision on a protocol, I could have saved a lot of time sitting in aeroplanes and airports.

Possibly the best thing that has come out of IETF involvement for me has been the communities I've joined as a result. When presented with an interoperability challenge, my first reaction now is never to look for a workaround. Once I determine that it isn't a local problem (i.e., my fault, which accounts for a lot of the problems we see), I try contacting someone responsible for the problem. And that's worked more often than not. People are responsive to reports of problems with their systems. If we conclude that we need to change specs, we work out how to do that. As a result, I haven't built a workaround for a while now; we've even started to remove them from code, mostly as a result of better collaboration and continued protocol maintenance. (To the earlier discussion about deployment timelines, I appreciate how much of a privilege this is.)

Getting back to the binary choice, this is where the immutability of the RFC series and our processes work against us. The view that the specification has to be "done" is a trap. John points at the idea of progression from Proposed Standard to Internet Standard as being the intended way to capture the progression from "probably OK" to "more concretely OK" and "almost entirely good". While that might have been the intent, I don't think that this has worked out all that well in practice, or that such a directly linear progression is always appropriate. The maturity of many specs I've worked on is not always uniform in a way that suits that model.

Our highly discrete process hasn't been good at capturing and responding to the evolving needs of a protocol. On the other hand, one of the good things about the gradual raising of the quality expectations for PS RFCs[1] is that protocols have spent more time in draft form, where changes are far more feasible. But even for TLS 1.3, where the process was unnaturally extended to deal with the side effects it had on some forms of intermediation, the publication of an RFC is not a stopping point.

If we can, over time, capture all those little pitfalls and fence them in with MUST and SHOULD and whatnot in a specification, we aren't forced to make a choice between a timely spec and a thorough one. Just start with timely and keep moving in the direction of better.

[1] What it means to be suitable for the Internet has moved in the same direction as our expectations for PS quality, which is entirely appropriate in my opinion. That doesn't really change the fundamental analysis of when to ship, just the thresholds.

On Thu, May 9, 2019, at 09:05, Brian E Carpenter wrote:

But yes, the word "tolerate" was carefully chosen. It doesn't say
"ignore buffer overflow errors" or "use heuristics to guess what the
sender meant". It means "don't throw a fatal exception when something
unexpected arrives." And it suggests silent discard unless an error
message "is required by the specification". That's entirely consistent
with the RFC1122 text that Roland quoted.

RFC 1122 is my favourite articulation of the "robustness principle", because it isn't the robustness principle so much as it is just straight-up common sense. But the meme takes a different form. The meme has the temerity to demand that I design protocols to it, to the point that a response was necessary.
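
(For the sake of being concrete, here is a minimal sketch - not from any of the quoted messages, and using a made-up TLV record format - of the RFC 1122 reading of "tolerate": unrecognised record types are silently discarded, broken framing is a hard error, and nothing is guessed on the sender's behalf.)

    #[derive(Debug)]
    enum RecordError {
        Truncated, // framing is broken: fail, don't guess what was meant
    }

    // Hypothetical wire format: [type: u8][len: u8][value: len bytes]...
    fn process(stream: &[u8]) -> Result<Vec<(u8, Vec<u8>)>, RecordError> {
        let mut out = Vec::new();
        let mut i = 0;
        while i < stream.len() {
            if i + 2 > stream.len() {
                return Err(RecordError::Truncated);
            }
            let (typ, len) = (stream[i], stream[i + 1] as usize);
            i += 2;
            if i + len > stream.len() {
                return Err(RecordError::Truncated);
            }
            let value = stream[i..i + len].to_vec();
            i += len;
            match typ {
                // Known types are handled...
                1 | 2 => out.push((typ, value)),
                // ...unknown types are silently discarded, not fatal.
                _ => continue,
            }
        }
        Ok(out)
    }

    fn main() {
        // One known record and one unknown record: the unknown one is dropped.
        assert_eq!(process(&[1, 1, 0xff, 9, 1, 0xaa]).unwrap().len(), 1);
        // A truncated record is an error, not a "best guess".
        assert!(process(&[1, 5, 0xff]).is_err());
    }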

