On 12/23/2015 5:46 PM, Ted Lemon wrote:
> Dave Crocker wrote:
>> AD review appears to miss far more than it catches. That seems
>> to be difficult for some folk to accept, and so they point to the
>> individual exceptions rather than the longer pattern of misses.
>> (By way of some very strategic examples, consider IPv6 and DNSSec
>> and email security and key management and...)
>
> Do you have data on this?
Anecdotal. Mine. Over enough years to represent a pattern. (I'm not
alone in this, but I'm reporting my own experience.) In 25 years, not
one single RFC I've worked on had a serious problem caught by an AD,
though many were eventually discovered to have serious problems. Some
were delayed by large numbers of non-substantive or flat-out-wrong AD
Discusses, however. So we got significant costs and significant damage,
for insignificant benefit.

Yes, legitimate issues do get caught. Again, my point is that not
enough are caught (and, by the way, not nearly soon enough).
> This doesn't match my experience. I've found AD review to be very
> helpful.
I find /all/ reviews "helpful". But that marginal benefit does not
justify the strategic costs of consuming scarce resources, adding
calendar delay, frustrating participants, and so on, by imposing these
reviews at the end of a long development cycle.
> It's certainly inconvenient,
"Inconvenient" is such a mild word. The aggregate effect of these kinds
of hassles is that potential participants decide to take their
specifications elsewhere.
> but I've seen AD reviews catch lots of things that were worth
> catching.
Again, that's not the issue. Every review by every reviewer tends to be
performed honestly and well and to provide useful feedback.
Unfortunately, that does not justify the particulars of AD reviews.

Please go back and consider the /balance/ of costs and benefits I cited.
The benefit of periodically finding something significant needs to be
weighed against the costs of resource consumption, significant issues
missed, delays, and participant frustration.
> I would not go so far as to claim that my anecdote is more valid than
> yours, but I think that if we are going to make claims like this, we
> should probably back them up with data. The current AD review
> process didn't just happen.
Twenty-five years ago, the IETF was a very, very different place. We've
kept the formal structures put in place then, without adapting them to
the changes. Simply put, what we put in place doesn't scale. We're
supposed to know something about scaling issues, but we apply none of
that knowledge to the administration of the IETF...
On 12/23/2015 5:54 PM, Eric Burger wrote:
> I think the analogy here is that AD review, cross-area review, and
> sacrifices to Amaterasu will not find ALL of the bugs in a
> specification. The best we can hope for is to do best practices to
> keep the bugs to an acceptable, low level.
The last sentence is quite reasonable, as long as it includes the
balance clause "at an acceptable cost", and as long as we then fairly
consider those costs.
d/