To a different degree, the security community has struggled with this in the "responsible disclosure" discussion. The incentives are different, but there seem to be three best practices that might generalize: (1) a well-documented mechanism to reach the right part of the organization, beyond the general support mechanism; (2) a reasonable time period to address the problem; (3) public disclosure after that time period.
A related problem is that it is often difficult for developers to get advice and help. If they are lucky, they find the IETF mailing list, but it may no longer be active or such questions may be seen as out-of-scope. We had the sip-implementors@ list for a while, and it generally seemed helpful. But this requires somewhat-experienced people willing to help relative newbies and does not scale well. Interop events (plugfests and the like) are also helpful, particularly if these include torture tests.
Henning
On Wed, May 8, 2019 at 7:38 AM Jari Arkko <jari.arkko@xxxxxxxxx> wrote:
I find myself agreeing with many of the posts, but maybe Warren put it most nicely and compactly: "The principle should be applied judiciously (sometimes it isn't), not discarded."
And having seen some of the too-big-to-demand-compliant-interoperability situations that Paul and others mention, I’ve also felt the pain :-)
I find it interesting to compare the situation to analogous situations in other kinds of systems. For instance, during development I always like to program in a fail-fast style, where my software is as brittle as possible so that errors surface early. Yet for delivered software one often switches to maximum survivability, where components can at least attempt reasonable recovery from situations that shouldn’t have occurred. More modern software development practices combine development with monitoring, feedback, instrumentation, and the tracking of new versions in small populations before enlarging their usage. That is a best-of-both-worlds setup: you can make things survivable while still receiving real-time feedback on what’s actually happening in the field, and can take corrective action.
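The brittle-in-development, survivable-in-delivery pattern can be sketched in a few lines of Python. This is only an illustration, not anyone's actual implementation; the `STRICT_MODE` flag and `parse_header` function are hypothetical names for the sake of the example.

```python
import os

# Hypothetical switch: brittle during development, tolerant once delivered.
STRICT = os.environ.get("STRICT_MODE") == "1"

def parse_header(line):
    """Parse a 'Name: value' protocol header line into (name, value)."""
    name, sep, value = line.partition(":")
    if sep == "":
        if STRICT:
            # Development: fail loudly so malformed input is caught early.
            raise ValueError(f"malformed header line: {line!r}")
        # Delivered software: attempt reasonable recovery, keep the session alive.
        return line.strip(), ""
    return name.strip(), value.strip()
```

The same code base serves both modes; combined with logging of the recovery branch, this is exactly the kind of field instrumentation described above.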
This is obviously usable in our protocol world too, but there’s a problem. You can quite easily get a lot of feedback on your own thing working well, and you can also get feedback about whom you are having problems with. But in many cases it isn’t easy to send that feedback to where it actually belongs: the other implementation’s maintainers. Disconnecting or 4xxing is the crudest form of signal. Yes, it is sometimes effective. But it is also an action between two parties (me as the implementor of my thing and you as the implementor of the peer) that hurts a third party: the user.
I wish there were some other option between silent obedience and refusing to talk. But I don’t know what it could be.
Jari
_______________________________________________
Architecture-discuss mailing list
Architecture-discuss@xxxxxxxx
https://www.ietf.org/mailman/listinfo/architecture-discuss