Re: letting IETF build on top of Open Source technology

It seems to me useful to look at it from a somewhat different perspective. Given that we are trying to work effectively with the open source communities, it behooves us to be clear about what it takes for us to normatively reference their work. Whether there exist artifacts that are open source and meet the requirements or not, laying out our expectations provides a clearer path forward.

Yours,
Joel

On 10/31/17 4:43 PM, Toerless Eckert wrote:
I was just asking for an example of well-documented FOSS software
that could be accepted as a standards-track reference by the IETF.
I can't come up with any.

In my unfortunate experience, RTFS is the standard answer when it
comes to FOSS specifications.

I would like a process where FOSS explores a problem field and is
then used to start standardization. I just don't think that the
standardization of interoperability behavior comes for free.

Cheers
     Toerless

On Mon, Oct 30, 2017 at 10:29:45PM +0100, Riccardo Bernardini wrote:
On Mon, Oct 30, 2017 at 9:40 PM, Toerless Eckert <tte@xxxxxxxxx> wrote:

On Sun, Oct 29, 2017 at 11:39:34PM -0400, Alia Atlas wrote:
Personally, I have not seen it work well or be generally perceived as
anything more than a waste of time to ask well specified and widely
                                          ^^^^^^^^^^^^^^
available mature open source work to come and be republished as an
Independent Stream Informational RFC.

I guess that "well specified" is the cornerstone phrase here: _if_ the
open-source protocol were well specified (by which I mean specified
unambiguously, with enough detail to allow interoperability with other
implementations, and in a stably referenced document), I would agree
that re-publishing it is a waste of time, and I would have no qualms
about citing it in an RFC. However, my personal experience with
open-source software (both as a user and, partly, as a contributor) is
that the documentation can in some cases be really lacking, and the
source code is considered the ultimate documentation (I remember that
a few years ago I asked for details about the format of Octave data
files and was told to look at the source code).

One could think of taking a snapshot of a specific version of the
source code as the protocol spec; unfortunately, source code is not
good for that. First, you need to reverse engineer it in order to
deduce, say, the format of a packet. Second, it is often *not*
unambiguous: for example, if you declare a packet field as "int" in C,
how many bits is it? Sure, the code will run smoothly on current
processors, where int is 32 bits almost everywhere, but what happens
when you need to communicate with a processor whose int is 64 (or 16)
bits wide? (And that is without mentioning "efficiency tricks" that
are ill-defined according to the C spec but work as long as you stick
to gcc.)
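
To make the "int" ambiguity concrete, here is a minimal sketch; the
struct and field names are invented for illustration and do not come
from any real protocol:

#include <stdint.h>

/* Ambiguous "spec": the width of each field depends on the compiler
   and target, so two implementations can disagree about where one
   field ends and the next begins in the byte stream. */
struct packet_v1 {
    int seq;   /* 16, 32, or 64 bits? C only guarantees at least 16. */
    int len;
};

/* Fixed-width types pin the sizes down on every conforming platform.
   Byte order and struct padding would still have to be stated in
   prose, which is exactly the kind of detail a real spec nails down. */
struct packet_v2 {
    uint32_t seq;  /* exactly 32 bits */
    uint16_t len;  /* exactly 16 bits */
};

A reader of packet_v1 alone simply cannot recover the wire format; a
spec (or at least fixed-width types plus a stated byte order) can.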

One could object that if there is an open-source implementation, there
is no need for unambiguous specs, since one can just reuse the
existing implementation. However, I see this as a very weak solution:
someone else may need to create a new implementation, maybe in a
different language, maybe for a different architecture not compatible
with the current implementation.

Summarizing: if the cornerstone condition "well specified" is met, I
think the open-source protocol can be treated like a protocol
developed by any standards body (which suggests that the condition
"well specified" is quite demanding); but if the condition is not met,
the only solution I can think of is to "snapshot" it (and clean it up,
if necessary) into an Informational RFC.

Riccardo




