Re: Time to say "NO!!" to AUTH4200 (Re: AUTH48 checking the different formats (Re: [rfc-i] Public archival of AUTH48 communications))

While I am in favor of greater transparency in this area, and I feel that the discussion has been helpful, I think the fundamental issue is that the AUTH48 process has gotten out of control. It clearly has: some documents have languished in this state for over 25 weeks rather than the 48 hours that were the original intention. I can see no justification for this expansion to 25 x 7 x 24 = 4200 hours.

Perhaps, with the formatting issues that now exist, there is a need to be more realistic and move to AUTH168 or even AUTH192, but a line clearly has to be drawn, and AUTH4200 is on the wrong side of it.

Such a long AUTH48 period gives time for issues on which consensus has already been declared to be relitigated, perhaps endlessly, and that is clearly bad.  Another source of delay is ADs showing an inappropriate degree of tolerance for lazy authors, and that has to stop somehow.

The IESG tried to address this issue in a statement on 1/5/2006, but here we are, 16 years later, and the problem still exists.

I'd like to ask all ADs with documents beyond the AUTH1000 point to investigate and suggest ways to address the issue for real.  




On Mon, Feb 28, 2022, 3:37 AM Carsten Bormann <cabo@xxxxxxx> wrote:
>> With the move to XML and renderings of a new RFC in different
>> formats, who is responsible for reviewing the different
>> renderings for unintended changes in meaning? If it's the
>> authors, I'd hope that a change in AUTH48 might offer some way
>> to spread the burden.
>>
>> FWIW
>
> Larry, based on the last document with which I went through
> AUTH48 (in the first half of 2020), it wasn't clear who, if
> anyone, was responsible for checking the different formats and
> renderings. 

“Responsible”: The authors, of course (in theory).

> Of course, the theory is that, if the XML is
> correct, everything else should take care of itself. 

Well, that isn’t even a theory.
Xml2rfc has its mysterious ways.

More importantly, implementers will look at a rendering (TXT, HTML/PDF).
If that is confusing, you will have interoperability problems (or low take-up).
So authors will typically check one of the renderings in some detail.
(Only with a lot of luck will they seriously check both TXT and HTML or PDF, which unfortunately is really needed, in particular since 3.10.0.)

Where aspects of the XML are not visible in the renderings, the RPC has started to ask specific questions.
E.g., for the type= attributes of <sourcecode> elements.
I’m not sure there is a checklist of invisible aspects.
It would help to have a debug-the-XML rendering where all these invisible aspects become visible.
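As a purely hypothetical sketch of what such a pass could do (the script and its name are my own illustration; only the RFCXML v3 element and attribute names <sourcecode>, <artwork>, and type= come from the vocabulary), something along these lines would at least surface those attributes for a human to eyeball:

  #!/usr/bin/env python3
  # Illustrative only: dump RFCXML attributes that never appear in the
  # TXT/HTML/PDF renderings, so they can be reviewed alongside them.
  import sys
  import xml.etree.ElementTree as ET

  def report_invisible(xml_path):
      for elem in ET.parse(xml_path).iter():
          if elem.tag in ("sourcecode", "artwork"):
              # type= is one of the "invisible" attributes: it is kept in
              # the XML but not shown in any rendering.
              print(f"<{elem.tag}> type={elem.get('type', '(missing)')!r}")

  if __name__ == "__main__":
      report_invisible(sys.argv[1])

Run against the final XML (e.g., "python3 dump-invisible.py draft-to-be-rfc.xml", file name invented), that would cover the <sourcecode>/<artwork> case; a real checklist of invisible aspects would still be needed to know which other attributes to add.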

In summary, I think that a theory where authors are responsible for the XML will stay fiction.
The RPC is much better off doing this, being used to operating in that noisy environment and (I hope) being more aware of what went wrong in previous RFCs.

Grüße, Carsten

