On 12/1/2012 1:00 PM, Melinda Shore wrote:
> On 12/1/12 11:36 AM, Dave Crocker wrote:
>> What actual problem is this trying to solve? I see the reference to a
>> 'reward', but wasn't aware that there is a perceived problem needing
>> incentive to solve.
> I gather this is one of those "everybody knows" problems, where
> "everybody knows" that it takes what's perceived as too long to
> get documents through the post-wglc/pre-publication process.
Yes. Longstanding opinion held by many folk. Might even be valid.
The problem is a failure to look carefully at the wg lifecycle and consider
where meaningful -- as opposed to merely 'appealing' -- improvements can be made.
At a minimum, any proposal for change should be expected to justify the
specific problem it claims to solve -- that is, to establish the context
that makes clear the problem is real and serious -- and to show that the
proposed solution is likely to have meaningful benefit.
I share the frustration about lengthy standardization, and particularly
with delays at the end. And certainly there is nothing wrong with
adding parallelism where it makes sense.
However, absent a consideration of the lifecycle, the current proposal is
a random point change, quite possibly an example of looking for lost
keys under a lamppost because that's where it's easiest to see.
> There's probably some sort of sympathetic vibe running between
> this document and recent discussion of nearly-cooked work being
> brought to the IETF for standardization.
Rumblings of free-floating dis-ease, perhaps. But are they really related?
> If somebody hasn't already documented how long it takes to get
> through the various steps once a document is into wglc, it
> would be worthwhile to start taking notes.
If a wg takes 2 years to get into wglc, a difference of a month doesn't
matter, does it? That's what I mean about total lifecycle. Otherwise
we're committing the classic systems-engineering error of inappropriate
local optimization.
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net