Jari, Henning,
Although long, this thread has been educational for me. Even better,
it has recently moved towards solution mode.
Let's explore the idea that incentives are needed to encourage
senders to tighten up, rather than rely permanently on the
receiver's liberality. Henning's responsible disclosure example is a
good one, but it relies on a lot of human intervention at both ends.
The EDNS flag day example was also effort-intensive.
Perhaps we need something between Error and Warning that says
"Warning: Error imminent". Then, if the developer of the sender
doesn't stop relying on the liberality of the receiver by a
stated deadline, the receiver will stop being liberal, and
communication will subsequently fail (or incrementally degrade, or
publicly disclose the developer's laziness, etc.).
The receiver might need to hold per-sender state, but a single
per-receiver deadline would also be possible (effectively automating
a per-receiver flag day).
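To make that concrete, here is a minimal sketch of such a receiver
with a single automated flag day. Everything in it is illustrative:
the message shape, the checker, the response codes and the
STRICT_AFTER date are my assumptions, not part of any real protocol.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Before this instant the receiver stays liberal but warns the
    # sender; after it, non-compliant messages are refused. A single
    # receiver-wide deadline avoids holding per-sender state.
    STRICT_AFTER = datetime(2020, 1, 1, tzinfo=timezone.utc)

    @dataclass
    class Response:
        status: str                      # "ok" or "error"
        warnings: list = field(default_factory=list)

    def check_strict(message: dict) -> list:
        """Return the deviations a liberal parser would normally forgive."""
        violations = []
        if "version" not in message:
            violations.append("missing version field")
        violations += [f"field name not lower-case: {k}"
                       for k in message if k != k.lower()]
        return violations

    def handle(message: dict) -> Response:
        violations = check_strict(message)
        if not violations:
            return Response("ok")
        if datetime.now(timezone.utc) < STRICT_AFTER:
            # Still liberal, but put the sender on notice, with the deadline.
            return Response("ok", warnings=[
                f"{v}; will be rejected after {STRICT_AFTER:%Y-%m-%d}"
                for v in violations])
        # Deadline passed: stop being liberal.
        return Response("error", warnings=violations)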
These warning responses would have to be defined as part of the
protocol (not necessarily when it was first defined). But how to
collect the warnings from deployed implementations would be up to
developers and operators - and they would have the incentive to work
this out. The rationale is to minimize the work for good
implementations, and to bounce as much of it as possible onto
lazy developers.
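On the collection side, the sender's half could be as simple as
counting the warnings it receives and pushing the summary at its own
developers. This builds on the sketch above; the reporting hook is a
placeholder for whatever a deployment actually wires up (tickets,
metrics, a dashboard).

    from collections import Counter

    warning_counts = Counter()

    def on_response(response: Response) -> None:
        """Tally every deprecation warning a receiver sends us."""
        for w in response.warnings:
            warning_counts[w] += 1

    def report_to_developers() -> None:
        # Placeholder: print here; a real deployment would file bugs
        # or feed a metrics pipeline, periodically or at shutdown.
        if warning_counts:
            print("warnings from peers:", dict(warning_counts))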
At first I thought that implementers would be unlikely to write
the necessary warning code, because they don't care as much about
protocol purity as protocol designers do. However, if their own code
is getting increasingly complex, because it has to handle all
the unexpected cases, they will have an incentive to encourage the
remote ends to tighten up their act.
Bob
On 08/05/2019 14:58, Henning Schulzrinne wrote:
To a different degree, the security community has
struggled with this in the "responsible disclosure" discussion.
The incentives are different, but there seem to be three best
practices that might generalize: (1) a well-documented mechanism
to reach the right part of the organization, beyond the general
support mechanism; (2) a reasonable time period to address the
problem; (3) public disclosure after that time period.
A related problem is that it is often difficult for
developers to get advice and help. If they are lucky, they
find the IETF mailing list, but it may no longer be active or
such questions may be seen as out-of-scope. We had the
sip-implementors@ list for a while, and it generally seemed
helpful. But this requires somewhat-experienced people willing
to help relative newbies and does not scale well. Interop
events (plugfests and the like) are also helpful, particularly
if these include torture tests.
Jari wrote:
I find myself agreeing with many of the posts, but maybe Warren
put it most nicely and compactly: "The principle should be
applied judiciously (sometimes it isn't), not discarded."
And having seen some of the
too-big-to-demand-compliant-interoperability situations that
Paul and others mention, I’ve also felt the pain :-)
I find it interesting to compare the situation to analogous
situations in other kinds of systems. For instance, during
development I always like to program in a defensive style, where my
software is as brittle as possible, to catch errors early. Yet for
delivered software one often switches to maximum survivability, so
that the software components can at least attempt reasonable
recovery from many situations that shouldn’t have occurred.
More modern software development practices combine development
with monitoring, feedback, instrumentation, and tracking of new
versions in small populations before enlarging their usage. That
is a best-of-both-worlds setup: you can make things survivable,
but you’ll be receiving real-time feedback on what’s actually
happening in the field, and can take corrective action.
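As a sketch of that setup, the same parser can be brittle in
development and survivable-but-instrumented in production. The
environment switch, the metric name and the header rule here are
illustrative assumptions, not any particular implementation.

    import logging
    import os
    from collections import Counter

    log = logging.getLogger("interop")
    # Brittle in development, survivable in production (illustrative switch).
    STRICT_MODE = os.environ.get("DEPLOY_ENV", "prod") == "dev"
    tolerated = Counter()   # deviation kind -> how often we papered over it

    def parse_header(line: str) -> tuple:
        """Parse 'Name: value', tolerating sloppy whitespace in production."""
        name, sep, value = line.partition(":")
        if not sep:
            raise ValueError(f"not a header line: {line!r}")  # never recoverable
        if value and not value.startswith(" "):
            if STRICT_MODE:
                # Development: fail fast so the bug is caught before shipping.
                raise ValueError(f"missing space after colon: {line!r}")
            # Production: recover, but feed the field-feedback loop.
            tolerated["missing-space-after-colon"] += 1
            log.warning("tolerated malformed header %r", line)
        return name.strip().lower(), value.strip()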
This is obviously usable in our protocol world too, but
there’s a problem. You can quite easily get a lot of feedback
on your own thing working well. You can also get feedback
about who you are having problems with. But in many cases it
isn’t easy to send that feedback to where it might actually
belong: the other implementation’s maintainers. Disconnecting
or 4xxing is the crudest form of signal. Yes, sometimes
effective. But it is also an action between two parties (me as
the implementor of my thing and you as the implementor of the
peer) that hurts a third party: the user.
I wish there were some other option between silent
obedience and the refusal to talk. But I don’t know what it
could be.
Jari
_______________________________________________
Architecture-discuss mailing list
Architecture-discuss@xxxxxxxx
https://www.ietf.org/mailman/listinfo/architecture-discuss
--
________________________________________________________________
Bob Briscoe http://bobbriscoe.net/