Re: deprecating Postel's principle - considered harmful

On Fri, May 10, 2019 at 10:53 AM Doug Royer <douglasroyer@xxxxxxxxx> wrote:
On 5/10/19 7:05 AM, Eric Rescorla wrote:
>
> On Thu, May 9, 2019 at 9:17 PM Doug Royer <douglasroyer@xxxxxxxxx
> <mailto:douglasroyer@xxxxxxxxx>> wrote:
>
>     Sometimes I find mild bugs in the other endpoint's implementation. So I
>     tweak my code to accept their bug when I recognize their
>     implementation.
>     Networking code is full of 'bug compatibility switches'. Hopefully
>     fewer
>     over time.
>
>
> I think this graf does a good job of bringing out the key point which I
> take Martin to be pushing on. As you say, it would be nice to have fewer
> "bug compatibility switches".
>
> One way to achieve that is for the initial implementations to be very
> strict about rejecting violations of the specification. Then, when new
> implementations are introduced into the ecosystem, if they do not
> conform to those aspects of the specification, they are forced to
> implement the protocol correctly. .....

Great idea, until the other end is the most prolific implementation and
their bug-fix cycle is maybe a year away. So, sometimes you have to
accept some noise to get to play.

Yes, I made precisely this point in my next sentence.

"By contrast, if the early implementations are very loose in their enforcement of protocol violations, then it is likely that some set of implementations will also violate the specification in the data they transmit, with the result that future implementations will have to implement "bug compatibility switches" in order to be able to function correctly."

There seem to be two main equilibria:

1. There's a critical mass of very strict implementations; in this case it's very hard for a non-conformant implementation to enter the ecosystem.
2. There's a critical mass of non-conformant implementations; in this case it's very hard for a strict implementation to enter the ecosystem.

Once you're in one equilibrium or the other, it's very hard to get out.
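To make the "bug compatibility switch" idea concrete, here is a minimal sketch (the field, the peer bug, and all names are hypothetical, not from any real protocol): a parser that is strict by default but relaxes one specific check when it knows it is talking to a buggy peer.

```python
def parse_length_field(raw: bytes, peer_is_buggy: bool = False) -> int:
    """Parse a hypothetical 2-byte big-endian length field.

    A (hypothetical) widely deployed peer encodes the field
    little-endian; the peer_is_buggy switch accepts that wire
    format instead of rejecting the message outright.
    """
    if len(raw) != 2:
        raise ValueError("length field must be exactly 2 bytes")
    if peer_is_buggy:
        # Bug-compatibility path: tolerate the peer's little-endian bug.
        return int.from_bytes(raw, "little")
    # Strict path: the (hypothetical) spec says big-endian.
    return int.from_bytes(raw, "big")
```

Once a flag like this ships, new implementations that want to interoperate with both populations end up carrying the same switch, which is how the second equilibrium entrenches itself.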

 
And whose interpretation of 'strict'? That is where bugs show up. They
worked very hard to write those specs, and others worked very hard to
understand them and implement them. Then, oops, a misunderstanding.

You're certainly right that specs contain ambiguities (and as MT suggests, having the thing everybody refers to be something that can't be updated to remove those ambiguities doesn't help). However, my experience with the TLS 1.3 process is that most of the nonconformance we had wasn't actually due to ambiguities but to implementation defects where the protocol specification was pretty clear. Also, at least in QUIC/TLS, etc., the early adopters coordinate pretty well, so they mostly agree on how to interpret the protocol.

-Ekr
