Re: call for ideas: tail-heavy IETF process

Just a few points...

Michael Richardson <mcr+ietf@xxxxxxxxxxxx> writes:

> I'll repeat what has been said repeatedly in the newtrk and related
> discussions.  The step from ID to "RFC" is too large because we are
> essentially always aiming for "STD" rather than "PS".

> If we are unwilling to bring "RFC" back to a place where it does not
> equal STD, then we need to create a new category of document which
> amounts to "fully baked ID".  Things like IANA allocations could occur
> at that point.

> In the days of dot-boom it was not unusual to see widespread
> implementation very early in the ID process and even interop and
> experimental deployment.   While this still occurs for quite a number of
> things (and sometimes it's a problem that the ID can not be changed as a
> result), there is an equal amount of "wait for the RFC to come out".

I suspect there is a bit of rose-colored reminiscing about history here.

The world has changed, significantly.

For example, there has been massive consolidation in industry. There
simply are fewer entities doing implementations today. 15 years ago,
it was common to see a half dozen or more implementations from
different (often smaller) entities, even before a document got out of
a WG. Nowadays, the cost of doing an implementation (for a company) is
in some sense much higher, and business realities make companies
*very* selective in what they implement and put into a product.

I suspect the notion that industry would implement documents more
quickly (or more often) if the IETF published them faster is wishful
thinking.

Michael Richardson <mcr+ietf@xxxxxxxxxxxx> writes:
> It's what Carsten said.

> 1) this idea is baked enough for cross-area review to make sense.
> 2) the protocol is not going to change significantly, one could
>    implement.
> 3) any future changes need thus to take into account impact on
>    existing implementations... BUT that doesn't mean that we can't
>    change things.

Like it or not, 2 and 3 are contradictory. From an interoperability
perspective, the difference between "not change significantly" and
"change a lot" is irrelevant once you have deployments. Change (in the
behavior or wire format) is change and breaks interoperability, no
matter how big or small.

Hannes Tschofenig <hannes.tschofenig@xxxxxxx> writes:

> b) There is no interest in researching where the delays really happen.

I don't think that is true. Jari has pointed to his work. I think
there is actually quite a lot of understanding of where the delays
are. But fixing them is really, really hard. Blaming the "tail end" or
"the IESG" for the problems has always been the easy target. The
reality (IMO) is that the place where improvements are needed is
within the WG, and in having authors be more responsive. But history
suggests there are no easy fixes here.

Randy Bush <randy@xxxxxxx> writes:

> > A basic question, then, is whether we think these absolute numbers and
> > these proportions of time are reasonable and appropriate for the IETF
> > to be/remain effective?

> seems pretty reasonable to me.  from personal experience, the iesg and
> rfced add useful value.

+1

Like everyone, I wish things moved more quickly, but every attempt
I've ever seen that tries to speed things up ends up reducing the
quality or having some other undesirable side effect.

The bottom line is that getting a good spec requires iteration, and
reviews from a broad set of parties. That is best done
*sequentially*. And given the limited cycles the (volunteer) community
as a whole has, you can't easily change these dynamics. We've seen
many attempts to reduce the overall time by trying to overlap reviews
and steps, but that has the side effect of losing *sequential*
*iteration* (where each new review reviews the previous set of
additions/changes). IMO, overlapping steps is dangerous and leads to
poor quality.

> being in a many-year process of getting a technology through the
> sausage machine, it's the wg that feels to me to be the most
> inefficient part of the process.  i attribute this to email not being
> the best medium, and meetings being too short and too far between.  but
> that is purely subjective.

If you want to speed up the process, focus on how to *increase* the
amount of iteration you get (i.e., revised drafts) while at the same
time *reducing* the time between revised drafts.

If you look at the delays documents encounter (both in WG and in IESG
review), the killer is long times between document revisions. Focus on
understanding the *why* behind that and what steps could be taken to
make improvements.

And finally, a big reason the IESG review is where things happen is
that it *really* is the last time one has to verify that a document is
ready to go. With the limited cycles we all have, there will always be
a tendency to not deal with a document until the point of "this is
your last chance". Nothing like a *real* deadline to motivate people
to finally deal with a document, before it's too late.

Thomas
