On 1/28/2013 3:12 AM, Stephen Farrell wrote:
On 01/28/2013 04:27 AM, Joe Touch wrote:
...
If this is an experiment, then you presumably have answers to the following
questions:
1- what is your hypothesis?
2- what do you intend to measure?
3- what is your 'control' against which to compare the results?
4- what is your objective metric for success/failure?
Well, it's not a scientific experiment, nor does it need to
be. Quoting 3933:
- " A statement of the problem expected to be resolved is
desirable but not required (the intent is to keep the firm
requirements for such an experiment as lightweight as possible).
Similarly, specific experimental or evaluative criteria, although
highly desirable, are not required -- for some of the process
changes we anticipate, having the IESG reach a conclusion at the
end of the sunset period that the community generally believes
things to be better (or worse) will be both adequate and
sufficient. "
My take is that even though 3933 says "desirable" a couple of
times there, it's probably best to not go there, at least in this
case, but for now probably more generally. The reason I think
that is that I reckon the IETF has got itself into a log-jam
that almost prevents us from modifying our processes in any way
and every additional word provides additional barriers to doing
anything, since there'll always be someone for whom those added
words raise what seems to them to be a red flag.
Lightweight != vacuous.
Perhaps the lack of discussion of these issues is part of the reason
so many previous proposals have not been tried.
That isn't a reason to ignore those issues here. But a bigger concern is
your reasoning as presented in this response:
I've heard only one hypothesis - that this reduces time to publication.
I disagree that this is a useful hypothesis to test for the following
reasons:
- time to publication isn't a goal of the IETF
IMO, any doc that isn't useful in 5 years ought
not to be published here; we don't need to
document every sneeze
IMO reduced time to publication is definitely *a* goal.
I've heard lots of IETF participants complain about how
long things take, lots of times. Perhaps you're in the
rough on that. (I also note that, as the draft says, this
experiment doesn't actually aim for any major reduction
in time to publication.)
There are plenty of places where existing process is in a logjam. My
experience over the past 15 years is that most of the delay happens during
the following:
- changes in the IETF process where ideas like this throw
a wrench into the process, and create confusion
I had a few informational docs wait over a year
in the queue because a process change was put
into effect before the necessary boilerplate
was resolved.
--> this has a *simple* solution: grandfather everything already
submitted to a given queue against changes in that queue's
process.
- IESG review, during which issues that were already
addressed during WG review are often re-raised and
re-hashed, even though they are not relevant to the
reviewing AD's area of appointment
Overlapping community review has no impact on either of these.
The draft itself also discusses reasons why running code
might also lead to better quality specifications.
No disagreement there, but "better quality" doesn't mean the doc
wouldn't still need - and substantively benefit from - serial review in
increasingly larger communities.
- thorough review ought to be a requirement
and this 'experiment' potentially compromises that
by reducing the overall time of review
I think the likelihood of that doing damage during the
experiment is very small.
Damage =
- publishing ideas not sufficiently vetted
that later need to be updated/errata'd
- wasting the community's time by having a
large group review an issue that could have been
addressed and corrected within a smaller community
Do you seriously think these concerns should outweigh a few weeks of
reduced time to publication?
In addition, I might be wrong,
but my guess is that the thoroughness of review doesn't
correlate that well with the duration of review.
I've heard many on this list - including myself - point out specific
reasons why this should not be believed.
If you still believe every process can be accelerated, I encourage you to
review Brooks' "The Mythical Man-Month".
Having this entire community burn cycles on this document speaks for
itself. It should have been vetted in a smaller, more invested community
first.
I'm following the 3933 process.
I don't know of any smaller but open venue for discussing RFC
3933 experiments. Perhaps there ought to be one, but given how
rarely 3933 experiments are proposed, that'd be a very quiet
list.
This is a perfect issue for a bar BOF, FWIW.
Joe