Re: Author and attendance measurements [Was: Re: Thought experiment [Re: Quality of Directorate reviews]]

On Nov 8, 2019, at 08:11, Jari Arkko <jari.arkko@xxxxxxxxx> wrote:
> 
> 2015	698
> 2016	711
> 2017	630
> 2018	541
> 2019	370

>> 370*(365.0/Time.now.yday)   # naive full-year extrapolation of the 2019 count so far
=> 431.4696485623003

More importantly, the RPC (RFC Production Center) is constipated by the v3 switch, so those numbers aren’t very indicative.  Maybe looking at

>> require "time"   # Time.parse lives in the "time" stdlib extension
=> true
>> 370*(365.0/Time.parse("2019-09-16").yday)
=> 521.4285714285714

gives a clearer indication that we are about where we were in 2018.  But really, there should be some metric run over the RFC editor queue as well.
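For instance, a throwaway Ruby sketch could fold a backlog estimate into the same extrapolation; the queued figure below is a made-up placeholder, not an actual queue size:

require "time"

# Annualize a year-to-date RFC count as of a given cutoff date,
# optionally adding documents still sitting in the editor queue.
def annualized(published, as_of, queued: 0)
  (published + queued) * (365.0 / Time.parse(as_of).yday)
end

annualized(370, "2019-09-16")              # => 521.4285714285714
annualized(370, "2019-09-16", queued: 150) # hypothetical backlog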

OK, so much for playing scientist here.  Now for the process wonk in me (which I’m definitely only playing).

It seems to me that all the proposals to add work for everyone in order to get better metadata for these measurements are non-starters.  A little more information can be extracted from the data already in the datatracker (maybe a nice project for a student), but the main contributors all have far more important things to do.
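To illustrate the kind of extraction I mean, something like the following Ruby sketch could count documents matching a filter via the datatracker’s public JSON API.  The endpoint path, filter name, and response shape here are assumptions I have not verified, so treat this as a starting point, not working code:

require "net/http"
require "json"
require "uri"

# Hypothetical: ask the datatracker API how many document records
# match a filter; endpoint and filter names are assumptions.
uri = URI("https://datatracker.ietf.org/api/v1/doc/document/" \
          "?format=json&limit=1&name__startswith=rfc")
meta = JSON.parse(Net::HTTP.get(uri))["meta"]
puts "matching documents: #{meta["total_count"]}"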

So what do we learn?  
Citing this snippet again:

> My impression from the regular plenary reports on attendance is that the
> number of people involved in the IETF is declining, yet the number of RFCs
> being processed is going up.

There are lots of impressions here, which, as we see, often aren’t supported (or are even contradicted) by the indicators we have.
E.g., my “impression” is also that conflicts are going up, but maybe that is just a facet of the IoT work I’m engaged in going mainstream, impacting (and being impacted by) more WGs.
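As a sanity check on such impressions, the quoted per-year counts are enough for a crude least-squares trend (plain Ruby; the 2019 value is my annualized estimate from above, so grain of salt):

counts = { 2015 => 698, 2016 => 711, 2017 => 630, 2018 => 541, 2019 => 521 }
xs, ys = counts.keys, counts.values
n = xs.size.to_f
mx, my = xs.sum / n, ys.sum / n
slope = xs.zip(ys).sum { |x, y| (x - mx) * (y - my) } /
        xs.sum { |x| (x - mx)**2 }
puts "trend: #{slope.round(1)} per year"   # => trend: -52.4 per year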

The original idea in some of the contributions to this thread, which I’d summarize in my own words as lobotomizing the quality engineering we do so that we can get more candidates for ADs, is a non-starter as well.  Most of the arguments for it in this thread are bolstered by statements about what specific participants in the process *should* be doing (also known as process confabulation in software engineering), not by observations about reality.

That we don’t have a good alternative to the current process doesn’t mean we can’t tune some knobs, and I’d prefer that we focus on what those knobs are.  Strengthening the directorate review system (with the objective of reducing variance in review quality and timing) is well worth some effort.  Feeding more early cross-area review into the WG process that precedes IESG review would also help; we have been trying to do this, but probably need to be more consistent in these efforts.
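Measuring the variance we want to reduce is itself cheap; a minimal Ruby sketch over made-up review turnaround times (days from assignment to completed review):

turnaround_days = [3, 14, 2, 30, 7, 21]   # made-up sample, not real data
mean = turnaround_days.sum.to_f / turnaround_days.size
variance = turnaround_days.sum { |d| (d - mean)**2 } / turnaround_days.size
puts "mean: #{mean.round(1)} days, variance: #{variance.round(1)}"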

All in all, I think that making someone responsible for proposing tuning actions and for observing and evaluating the results, preferably over a longer period than a single AD tenure, might help.  Not in the sense of “wenn ich nicht mehr weiter weiß, bilde ich ‘nen Arbeitskreis” (roughly: “when I no longer know what to do, I form a committee”, an aphorism about the tendency of German bureaucracy to shift responsibility to useless committees), but in the sense of creating actual responsibility.  Sweeping actions would still need to go through our consensus process, which makes them very expensive, but continued attention to smaller tweaks might already help a lot.

Grüße, Carsten




