On 7/10/19 2:32 PM, Phillip Hallam-Baker wrote:
> I agree with much of this discussion but I have come to a very different perspective:
>
> * Internet Standard status means nothing more than the fact that the legacy deployment is sufficiently large that further development of the specification is no longer feasible.
>
> * Internet Standard status is not necessarily a desirable condition.
>
> * Internet Standard status will be reached regardless of whether the documents are good or not.
Without arguing about the above, I would be okay with a single level plus periodic review (say, once every 2 years) resulting in an applicability note on the RFC Editor's page for that RFC. The applicability note would describe interoperability issues and deployment level, identify known significant bugs and technical omissions, and make deployment recommendations: whether the protocol is suitable for widespread deployment, suitable only in specific corner cases, suitable for legacy use only, or recommended for discontinuation or transition to a different protocol.

If we did things that way, then maybe also task the WG that produced the RFC with doing interop tests (or arranging for them) within 2 years of publication. We desperately need the feedback loop.
> One of the sad parts of the development of the Web is that HTTP became a de facto standard in early 1993. By the time we had a team able to start getting the protocols into shape, legacy deployment was already constraining development. If people recall, Simon Spero proposed an HTTP-NG that is remarkably close to QUIC, given that more than twenty years separate them. We couldn't deploy it because there was too much legacy.
He wasn't the only one who saw the need. What I've come to realize is that conditions change over time. Sometimes those conditions facilitate improving or transitioning away from legacy protocols; sometimes they don't. The trick is to recognize when conditions facilitate useful change and to take advantage of that.
> We need to stop being so dogmatic about IETF process. It is not like it is working for us now or has worked well in the past. I propose a different approach:
> 1) Keep the two standards process levels but drop the notion that Internet Standard means the work is finished and will never change. Specifications that are being used by real people will always need some level of maintenance. Maybe not a great deal, but some. Add an additional status, LEGACY, to describe the specs that are in widespread use and have been superseded but are not going to go away for decades, if ever. IPv4 would be a good candidate for LEGACY, as it will always be with us just as the 6502 microprocessor will always be with us in standard cell, even if that is after 0.0.0.0/0 has been declared "reserved for private use".
> 2) Recognize the fact that HTTP/1.1, TLS/1.2, PKIX, etc. are now Internet Standards. They may not be perfect, and the implementations may not be fully interoperable, but anyone who wants to make a Web browser (for example) is going to have to support those specs for a considerable time to come. Declaring TLS/1.2 an Internet Standard does not mean ending development of TLS/1.3; it merely recognizes the situation.
I would not support that, but I would support writing an applicability statement about each of them saying essentially what you've said above: they have these known problems, but implementing them in browsers is, for the time being, a practical necessity.
> 3) Establish a maintenance WG in each area as the place for continuing incremental development of standards. The Security area has LAMPS. I think that instead of the 'do the work, then shut down the WG' model applying to every WG in the IETF, the maintenance groups should act as standing committees for very limited updates.
That strikes me as an invitation to disaster unless the groups can somehow be made to avoid constantly taking on new work of dubious value, and to avoid mission creep. We are producing too many marginal RFCs as it is, and the IESG is already taxed with reviewing the drafts.
Keith