On 11/7/19 8:03 PM, Michael Richardson wrote:
> Keith Moore <moore@xxxxxxxxxxxxxxxxxxxx> wrote:
>>> Would you consider if these new documents are *RFC*s, or would you
>>> consider if we could make a new document series for these documents?
>>> I would suggest that it become the *Proposed Standard* series. That
>>> is, we'd change our first step to not be an *RFC*.

>> At first glance I find this idea appealing. I'd like to see it
>> explored. A lot depends on other factors - e.g. when, relative to
>> interop testing, do we look for potential harm to other interests?

> When do we do that now? At IESG review, which is what causes such
> significant delays.
I'm frankly not sure that cross-area review causes significant delays, or even that IESG review does. I could argue instead that what causes those delays is working groups producing documents that are not suitable for their intended status. When WGs produce broken documents that are hard to fix, this can be very time-consuming for IESG, because in practice IESG is expected to find some path to approving such documents. Sometimes there's just no good fix, and even poor fixes can be quite time-consuming to come up with.
In my experience, well-written, thorough specifications are generally easy to review; documents that are overly long or poorly written (so that it's hard to tell what is really intended, or whether the protocol will work reliably), or that seem potentially harmful, are much harder and more time-consuming to review. So people shouldn't automatically assume that "delays in IESG" are IESG's fault.
But I will certainly agree that by the time a document gets to Last Call and beyond, it's usually too late to fix fundamental design flaws - and I would count failure to research and consider interests that might be harmed as a design flaw. And yet, I don't think narrowly focused WGs can be entrusted to consider a broad spectrum of interests - that really has to be an IETF-wide review.
> That's just too late in the game for an *RFC*, which we can never
> really take back. The property that you like about internet-drafts is
> that they are in principle ephemeral, or more to the point, version NN
> replaces NN-1, and we can revise them. That's the property you are
> looking for, I think.
Close, but maybe not quite. Fundamentally, a specification that's being interoperability tested _should_ be an ephemeral version. The whole point of doing the testing is the expectation that the specification will likely need to be changed based on implementation and testing experience. And the whole approval and RFC process takes too long to wait for it to finish before testing begins - doing it that way would basically cause a pipeline stall.

Testing at PS might have made sense long ago, when there were fewer interests at stake, fewer procedures to follow, and the publication process was lightweight - but even then, I don't think the testing that was anticipated between PS and DS happened very often. Also, external conditions have changed a lot since RFC 2026 was written. Both testing over the public Internet and setting up a private testing VPN are more practical than they used to be. Group collaboration tools can be used effectively to coordinate tests today, whereas face-to-face meetings were once considered a practical necessity. Computers are faster and have more memory, languages are better, and cloud resources can be used to simulate load. And the general expectation is that product iteration cycles are much tighter than they once were. It makes sense to adapt IETF processes to suit these new realities.
Implementation concurrent with specification development, and testing _before_ IESG review, make a LOT of sense, IMO. I think they would shake out bugs that are difficult for IESG to see, and favorable testing results would give IESG more reason to be confident that the protocol actually works.
It looks to me as if, with a judicious reordering of constraints, we should be able to make time-to-completion more predictable AND produce higher-quality documents.
>> (though maybe we don't need a new document series - maybe we just
>> need a way of designating certain Internet-Drafts as being suitable
>> for interop testing and/or limited deployment)

> That was proposed a few months ago. See:
> https://mailarchive.ietf.org/arch/msg/ietf/zy2l7gWR8yGRIIt5mYY5WSorqGY
The devil is in the details. I read that proposal as subtly, but importantly, different. Marking a draft as "stable" is to me a very different thing from marking it as "here's a version we're going to subject to interoperability tests". To my mind it's perfectly reasonable to test portions of a protocol that there's general agreement on, even while knowing that some new features will need to be added before the protocol is ready for Last Call.
> I prefer that we create a new series. Maybe it shouldn't be hosted on
> *RFC-EDITOR.ORG*, but I'd want the identical infrastructure used.
I'm not sure we actually disagree on this aspect, but just in case we do: I don't see why a process as heavyweight as RFC publication is needed for this. It seems to me that, aside from the minor changes of being able to mark internet-drafts with a few specific attributes, and having the tools search for drafts by those attributes (maybe via a few automatically generated index pages), the existing I-D publication process is about right. But I'd be interested in hearing specific reasons why this isn't the case.
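To make that concrete, here's a rough sketch in Python of the kind of marking and searching I have in mind. The attribute names and the draft records are entirely made up for illustration - this says nothing about how the datatracker actually models drafts:

    # Rough sketch only: the attribute names and draft records below
    # are invented for illustration, not taken from any real tool.
    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        name: str
        rev: int
        # Per-version attributes, e.g. "interop-testing" or
        # "limited-deployment"; a plain set of tags would suffice.
        attrs: set = field(default_factory=set)

    def drafts_with(drafts, attr):
        """All drafts carrying a given attribute - the kind of query
        a few automatically generated index pages could be built on."""
        return [d for d in drafts if attr in d.attrs]

    drafts = [
        Draft("draft-example-protocol", 3, {"interop-testing"}),
        Draft("draft-example-extension", 1, {"limited-deployment"}),
        Draft("draft-example-other", 5),
    ]

    for d in drafts_with(drafts, "interop-testing"):
        print(f"{d.name}-{d.rev:02d}")   # -> draft-example-protocol-03

The point is just that the marking is per-version and queryable; nothing about it requires RFC-publication machinery.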
Keith