Re: Alternate entry document model (was: Re: IETF processes (was Re:

Andrew Sullivan wrote:
> 
> One way -- the one you seem to be (quite reasonably) worried about --
> is that you get a bunch of non-interoperating, half-baked things that
> are later rendered broken by updates to the specification.

No, I'm more worried about ending up with half a dozen specs
for any _new_ technology, each considered a valid "standard"
for _ALL_ vendors to implement and ship, rather than one single document.

If there is only one "proposed standard", then it is much easier
for me as an implementor, especially if I am _not_ an early adopter,
to _not_ implement certain stuff shipped by some early adopters,
and it is up to the early adopters to fix broken implementations
that they've aggressively shipped without sufficiently planning ahead.
Standardization only makes sense if there is convergence on ONE
standard, rather than half a dozen variants of every spec.


>
> Note that this method is actually the one we claim potentially to have;
> the two-maturity-levels draft does not change that.  The idea is that you
> are supposed to try things out as early as reasonable, and if there
> are significant changes, they'll turn up when an effort is made to
> move things along the maturity track.

There are early adopters of new technologies.
There are early adopters who ship defective implementations of early specs.
There are early adopters who recklessly ship implementations of early
specs with no sensible planning ahead for potential changes in the
final spec and how to deal with them _gracefully_ (which in some cases
means distributing updates to the entire installed base).

Personally, I do not have a problem with early adopters in general,
but only with those that do not plan ahead -- i.e. do not (a) build an
implementation that is highly likely to be fully and flawlessly
interoperable with the final spec and (b) plan for updating *ALL*
shipped products when the spec is finalized (which is more about the
date of IESG approval than the date when the RFC is issued).


> 
> Some people have argued in this thread (and the other related ones)
> that there is a problem from the IETF attempting to prevent the
> problem you're talking about.


The IETF cannot stop early adopters from doing stupid things,
but the IETF can refuse to create a mess instead of a single
proposed standard for later implementors of the protocol.
Documenting characteristics of early-adopter implementations
in an appendix (as well as non-negligible breakage) would
also be helpful to later implementors.


For every aggressive early-adopter implementor within the IETF
there are 10 more thoughtful and conservative implementors outside
of the IETF, and it is really for them that the IETF should be
producing the standards (as well as for those who want to create
a successor protocol version and a transition/migration plan
to the new version).


>
> That attempt, which is not documented anywhere, amounts to a high
> bar of difficulty in getting an RFC published at all.

Getting a good and consistent document published is likely easy.
Obtaining a good and consistent document can be quite difficult
when you start with a very thin outline, make this thin outline
a "highlander" proposal by adopting it as a WG document early on
while it is still in pitiful shape, and subject every change to
"WG consensus" change control.
If there are not enough engineers and not enough implementation
experience involved in the WG discussion, document progression
may become quite slow.


An approach that might work much better would be for a vendor
that is very interested in a particular technology to start documenting
its prototyping work early on as an I-D, announce that I-D in
the relevant working group, and continue improving the I-D
and the prototype while discussing ideas.

I really think the biggest problem in today's IETF is that
vendors are walling off their engineering from the IETF
WG discussion too much; that is the root cause of the problem.

An IETF working group is not a crowd of highly skilled engineers
just sitting there twiddling their thumbs and waiting for
opportunities to start implementing new proposals and have fruitful
discussions about running code.  If vendors do not involve their
engineers in the IETF standardization work, then WG discussions
are much more likely to become heated, contentious and
unproductive at a purely theoretical level.


>
> I'm not actually sure I have the empirical data to support a claim
> that it really is harder to get that initial-version RFC published;
> but people seem to think that it is, anyway.

Converging on the initial proposal for a new technology is harder
than revising an existing technology.  With an existing technology
there is also experience with the installed base, even for those who
otherwise participate in the discussion without implementation experience.


> 
> The argument in favour of publish-early, revise-often approaches is
> that iterations will, or ought to, improve things.

Only if the early implementations are completely ditched as often as
the document is revised.


Look at the Linux kernel versions of the "Enterprise" Linux distros.
If your target market is business customers and the useful lifetime
of your software is 10+ years, maturity is much more important
than bleeding-edge bells and whistles.


> Imagine: in some
> other possible world, they're up to IPv10 now, but it took those
> intervening versions to discover that you really really needed some
> clean interoperation layer with the "legacy" IPv4 networks.

My impression is that we are around IPv10 -- not in the official protocol
version numbering, but in the conceptual revisions of how to make it work
and be usable on the public Internet.


-Martin

