Re: Things that used to be clear (was Re: Evolving Documents (nee "Living Documents") side meeting at IETF105.)

I largely agree with Nico's take here.

Ignoring labelling for a moment, in a number of WGs (HTTP, TLS, and
QUIC) we have found it necessary to have full implementations and
large-scale deployments quite early in the design process, long before
anyone thinks that the document is done.

Common practice has become something like this:

1. Once the draft has become complete enough that it's possible to do
   so, start doing test implementations based on labelled versions
   of the draft and try to interop those.

2. Once the draft has become complete enough that it's believed to be
   safe, do coordinated field trials based on identified versions of
   the draft. This may include, for instance, deploying to a large
   fraction of the user base (e.g., all Firefox Beta users or a
   fraction of the Firefox release population) and/or to a big server
   farm (e.g., Cloudflare's users). We use version identifiers that
   are associated with the draft number to avoid interop problems.

3. Once the document is published at PS, switch over to the final
   version (which is usually nearly identical to the last draft version).
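The version identifiers mentioned in step 2 can be made concrete (this sketch is an illustration, not part of the original mail; the function names are hypothetical, but the constants follow the conventions documented in the QUIC and TLS 1.3 Internet-Drafts, where QUIC drafts advertised version 0xff000000 + draft number and TLS 1.3 draft implementations used ProtocolVersion 0x7f00 | draft number):

```python
# Sketch of draft-linked version identifiers as used in QUIC and TLS 1.3
# interop testing. Helper names here are illustrative, not from any spec.

def quic_draft_version(draft: int) -> int:
    """Version identifier advertised by an implementation of
    draft-ietf-quic-transport-NN: 0xff000000 plus the draft number."""
    return 0xFF000000 + draft

def tls13_draft_version(draft: int) -> int:
    """ProtocolVersion used by interop stacks implementing
    draft-ietf-tls-tls13-NN: 0x7f00 OR'd with the draft number."""
    return 0x7F00 | draft

def compatible(ours: int, theirs: int) -> bool:
    """Endpoints only interoperate when they speak the same draft,
    which is exactly the point: mismatched drafts fail cleanly
    instead of half-working."""
    return ours == theirs

# draft-ietf-quic-transport-29 advertised version 0xff00001d
assert quic_draft_version(29) == 0xFF00001D
# draft-ietf-tls-tls13-28 used ProtocolVersion 0x7f1c
assert tls13_draft_version(28) == 0x7F1C
```

Because the identifier changes with every draft, a client built against draft N simply never negotiates with a server still on draft N-1, which is what keeps large-scale field trials from turning into interop debugging sessions.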

This does not really fit into the PS/DS model: We absolutely need
to deploy early versions to find potential issues (this was critical
to TLS 1.3) but we all know that those documents don't meet the PS
bar as they may have known defects or at least open issues. There's
also no real use for Full Standard. The market doesn't care about it
and the people who are doing the work either are (1) fixing/extending
the protocol (using the process above) or (2) have moved on to some
other protocol.

It's not clear to me how well this Evolving Documents proposal would
fit into such a model [0], but I thought a field report on what is becoming
common practice might be useful.

-Ekr

[0] The real need I find is to be able to make minor fixes to the
documents (mostly editorial errata or clarification of points on which
there was consensus) without re-spinning the RFC, which people don't
have the energy for.


On Wed, Jul 3, 2019 at 10:24 PM Nico Williams <nico@xxxxxxxxxxxxxxxx> wrote:
On Wed, Jul 03, 2019 at 09:52:03PM -0400, Keith Moore wrote:
> On 7/3/19 9:30 PM, Andrew Sullivan wrote:
>
> > > difficulties.    It used to be clear that you didn't deploy implementations
> > > based on Proposed Standard, but people did anyway.
> > When was that "clear"?
>
> Probably I was thinking of RFC2026 section 4.1.1, last paragraph:
>
>    Implementors should treat Proposed Standards as immature
>    specifications.  It is desirable to implement them in order to gain
>    experience and to validate, test, and clarify the specification.
>    However, since the content of Proposed Standards may be changed if
>    problems are found or better solutions are identified, deploying
>    implementations of such standards into a disruption-sensitive
>    environment is not recommended.
>
> But of course that's not stating it as strongly as I remembered, and the
> problem of deploying implementations based on Proposed Standard existed even
> before that.   I remember a flap about telnet implementations circa 1992 in
> which implementations of a certain option didn't interoperate - one vendor
> followed the PS text and all of the others implemented it in the opposite
> way, and I heard a lot of people saying "they shouldn't have deployed at
> Proposed".

In the security area just about all major Internet protocols are at
Proposed Standard.  PKIX?  Proposed Standard.  Kerberos?  Ditto.  TLS?
Yup.  SSHv2?  Indeed.  IKEv2?  No, IKEv2 and CMS are among the
exceptions, though what good IKEv2 might do anyone w/o ESP, or CMS w/o
PKIX, I don't know.

Whatever the intention originally might have been, it has certainly
long since ceased to be the case that one should not deploy protocols
that are at Proposed Standard.

And it's very difficult to stop vendors from shipping pre-RFC protocols.
We don't have a protocol police, and we move too slowly.  If we don't
adapt, other SDOs will do more of our work.  A big selling point of the
IETF is its review processes -- the adults in the room to keep authors
from doing dreadful things.  But we need to speed up the cycle somewhat,
and one way to do it might be to have a way to indicate expected
stability in I-Ds, probably only for WG work items, and at some
cost (e.g., early directorate reviews?).  I don't quite know -- maybe
after reflection we might conclude we shouldn't do this, but we should
certainly discuss it, and be able to discuss it.

Nico
--

