Brian,
On 7/27/2011 8:50 PM, Brian E Carpenter wrote:
> Expected is one thing; but even the IESG's own rules do not *require* this.
> http://www.ietf.org/iesg/voting-procedures.html
First, the written rules do not matter much if actual behavior runs contrary
to them. Second, expectations constitute a real burden on people, and the
discussion was about that actual burden. The claim that ADs are expected to
read all submissions has been made consistently for some years, by many
different folk, including ADs.
> I certainly never liked balloting NO OBJECTION unless I had at least
> skimmed the draft and read at least one positive review. I'm sure I
> never balloted YES without a careful reading.
On matters of human behavior, one of the continuing challenges in the IETF is
the pervasive tendency for a sole speaker to cite their own behavior as if it
were representative of a broader population. (The formal term is "sampling
error", I believe.)
There is a basic difference between a single exemplar and the aggregated
behavior of a population. The former is useful as an existence proof, such as
for demonstrating an exception. It says nothing about the behavior of the
rest of the group.
In other words Brian, you are a great guy. You were diligent, reasonable and
balanced when you held these positions and asserted your authority. No one
should consider criticizing any aspect of how you did the job.
Forgive me, but none of that matters here. Your own behavior, back then, is
irrelevant to this discussion, since no one claimed that all ADs have always
behaved a certain way.
What is relevant is the /current/ set of demands on ADs.
>> Is all of this effort justified if exactly one minor error is found each
>> year? Two? One major? What is the /actual/ improvement that this
>> process produces? What thresholds should apply?
> It depends on what you mean by minor or major, but my experience as an
> author is that I get valuable comments from the IESG that definitely
> improve the document, even after all the comments one gets from
> WGLC and IETF LC and the various review teams.
Brian, you missed the point motivating my questions: I'm suggesting that
retaining massive demands on ADs, and imposing significant procedural burdens
on working groups and authors, warrants analysis.
Do the analysis, please. Make it balanced. Look at the pros and cons. Answer
your own question, while formulating a serious case for a situation that
creates chronic problems. The complaints about the current arrangement are
neither new nor isolated. Unfortunately, the defense of the arrangement is
typically in the style you are offering here.
Besides being skewed -- oh, we caught one error two years ago, so it's worth
burdening everyone all the time -- this perspective ignores two key points,
observed by many folk:
1. We have many sources of reviews; the reviewing capabilities of an AD are
likely to be good, but not better than those of many others.
2. In spite of all the wonderful reviewing done by ADs, errors still get
through; so the fact that some ADs sometimes catch an important error ignores
all the errors they miss.
Again, Brian:
Offer a cost/benefit formula, factor in the known, considerable current
costs, and demonstrate that we are deriving the requisite benefit. Try to
explore real-world behaviors rather than theoretical hopes.
> Why? My guess is that it's because the buck stops with the IESG - and
Except that it doesn't stop with the IESG. (Actually, I don't really know what
it means to claim it stops with the IESG, and I suspect there are valid claims
that it stops in a variety of other places.) Many conditions and actions, over
a long period of time, are required to make a specification succeed.
It might be interesting for you to document examples of repercussions on the
IESG for having approved a terrible specification.
At the least, please explain exactly what your assertion means, and then
explain what the repercussions are on the IESG when it makes the egregious
mistake of letting a serious error get published.
This sort of discussion calls for restraint in using comforting, Trumanesque
catch-phrases.
There is, indeed, a current model that places the personal assessments of ADs
on a pedestal of power. It invites mere mortals to work under the myth that
they, alone, carry the burden of keeping the Internet safe from bogus
technologies. It's a silly myth, but it's the one currently in force.
A more tractable model is that ADs need a manageable workload and a modest
set of responsibilities, balancing what is demanded of them against the
practical benefit it provides and the costs to them (and us) that it incurs.
Further, the model needs to be comfortable with having everyone make lots
of errors, in spite of their diligence.
> that automatically makes people more conscientious. Everybody else can
> <shrug>; the final approving body can't.
> In other words, if you "fix" this "bug" by moving the final responsibility
> elsewhere, you will just move the problem to the same place.
Perhaps you missed the trigger to this thread, which had to do with processing
bottlenecks and inappropriate Discusses?
>> That is, the presumption in your comment -- and your comment represents
>> a broad constituency in the IETF -- is that reviewing has only positive
>> effects.
> Of course, there can be cases where that is not so - in fact, that's
> the main reason that the IESG defined the DISCUSS criteria a few years
> ago.
Have you seen a pattern of a Discuss citing the criterion that justifies it?
I haven't. It might be interesting to attempt an audit of Discusses against
the criteria...
>> It well might be true that omitting the AD reviews would increase the
>> number. By how much? To what effect?
> Hard to tell. But it would amount to giving the IESG secretary a large
> rubber stamp *unless* the final responsibility was explicitly moved
> elsewhere.
Herein lies the real problem: As with many process and structure discussions
in the IETF, folk often see only a simplistic, binary choice between whatever
they prefer and something akin to chaos.
The world is more nuanced than that, and the choices are richer.
Here is a small counter-example to your sole alternatives of status quo or
rubber stamp:
Imagine a process that requires a range of reviews and requires ADs to
take note of those reviews and of the community's support versus objection.
Imagine that the job of the ADs is to assess these, to block proposals
that have had major, unresolved problems uncovered or that lack support, and
to approve ones that have support and lack known, major deficiencies, as
documented by the reviews.
That ain't a rubber stamp, Brian.
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net