Re: Cross-area review (was Meeting rotation)

Ralph,

On 12/21/2015 10:36 AM, Ralph Droms wrote:
> On Dec 20, 2015, at 12:56 PM, Dave Crocker
> <dhc@xxxxxxxxxxxx> wrote:
>> Well...  That depends on how reviews are done and how the
>> kismet-contacts affect that.
>>
>> Massive numbers and types of problems get missed by the reviews
>> that are currently done.
>
> Can you say more here?  How do you know these problems are being
> missed?  When you write "missed by the reviews that are currently
> done", do you mean missed altogether or missed by, say, the WG review
> and picked up in IESG review?

     1. We have errata.

We wouldn't have them if published RFCs did not /regularly/ have problems. So for all the reviewing we do, we still choose to have an institutional mechanism for recording errors in published specs.

And note that the current, rather odd policy for accepting errata is that they can only mean "the RFC didn't properly state what was intended." That is, if the WG intended the wrong thing and people now understand what the right thing is, that is not allowed to be recorded as an erratum. So the only errata are straight-up documentation errors.

     2. I've directly seen published RFCs that had gaping errors.

(And I don't have special specification-reading skills, nor are the specs I read all that special.)

By way of example, the original DKIM RFC was massively reviewed and implemented, and yet the explicit specification of the algorithms was wholly deficient, while the accompanying prose was pretty good. Nobody had ever done a careful correlation audit.

     3. We have no discipline in our reviews and no documentation giving careful guidance.

Reviews are a random walk of well-intentioned people who have varying skills, varying attention spans, etc. The results are, therefore, random.


In other words, it is simply a certainty that serious problems will slip through. Every review is good to do, but no specific review is the savior of us all. (The ultimate fallacy of a savior model is requiring the expenditure of scarce, strategic Area Director resources on late-stage reviews... Expensive resources, frustrating delays, minimal benefits. Who wouldn't want all that?)

The main benefit of a cross-area review is that it is a cold reading by someone with no history with the effort. Fresh eyes.


> Given the number of problems I see in some documents during Int-Dir
> or Gen-ART, I'm sure there are more problems that I'm not picking up.

Guaranteed.


> It's a reasonable inference that there are problems not caught by any
> of our reviews.  I'm curious about specifics...

Before going from Proposed to Full Standard, we expect some breadth of community deployment. That includes the expectation that the fresh eyes of fresh implementers have tried to take the spec and build something interoperable. We don't actually require this, but at least it's implicit as an evaluation criterion of maturity.

We could move that requirement to Proposed, but that would merely make Proposed essentially the same as Full. And the barrier to Proposed is already too high.

Instead we should understand that we cannot and should not try to demand or expect documents that are perfect. We should demand 'good enough' and let the outside world evaluate and feed the results back to us.

The way they already do.

The problem is our mythology about what happens before that, and the misguided expectations and barriers created as a result.

d/


--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
