Re: bettering open source involvement

On Mon, Aug 1, 2016 at 4:36 PM, Eggert, Lars <lars@xxxxxxxxxx> wrote:
> Hi,
>
> On 2016-08-01, at 15:44, Livingood, Jason <Jason_Livingood@xxxxxxxxxxx> wrote:
>> What if, in some future state, a given working group had a code repository and the working group was chartered not just with developing the standards but maintaining implementations of the code?
>
> as an addition to developing specs, that might be useful, if the spec remains the canonical standards output.
>
> "Go read the code" is not a useful answer if the code comes under a license (such as GPL) that taints the developer. (This is a major reason why we are doing IETF specs for DCTCP and CUBIC - so that they can be implemented without needing to read Linux kernel code.)

Only 10 (?) years after full support for CUBIC entered the Linux
kernel, and 3 after DCTCP.

If you define the efforts of this standards body as producing
BSD-licensed code (which is basically the case), it will continue to
lag behind the bleeding edge and become more and more irrelevant.

It's not just the deployed code in kernels that is a problem; the best
of the tools available for prototyping new network code are also
GPL'd. NS3, for example, is GPL. The routing protocols incorporated in
bird and quagga are GPL. BIND is BSD, but Nominum's server is
proprietary, and dnsmasq is GPL'd.

There is increasingly no place to design, develop, and test new stuff
without starting from a GPL'd base.

Worse, what happens here at the IETF without use of these tools is
that we end up with experiments and results from non-open-source code
being presented, without any means for an independent experimenter to
verify, reproduce, or extend them.

I think it would do a lot of semantic good if the IETF would stop
referring to "open source"[1] and instead always refer directly to the
licenses that are allowed for the code it works on. There are
certainly new areas of interest, like NFV, that are proceeding with
more vendor-friendly code licensing schemes, although I am dubious
about the performance benefits of moving all this stuff into
userspace, particularly when a seeming primary goal is to avoid making
free software rather than to produce a good, clean, correct
engineering solution.

It has been my hope that the Alice decision on patents (with roughly
80% of disputed software patents since invalidated), the rise of
organizations offering patent-pool protections like the Open Invention
Network, and, I think (IANAL), the position in Google vs. Oracle that
APIs cannot be copyrighted, together mean that a developer can no
longer be polluted merely by looking at GPL'd code once in a while.
Because we do.

Actual implementations for any other system will tend to vary so much
due to API differences, and the expressible logic in the algorithms
themselves is generally so simple, that the exposure to further risk
is minimal - particularly when the authors of the code have presented
it for standardization, under any license.

There are powerful advantages to the GPL (and LGPL[2]) over
"standardization". Notably, there is an implicit patent grant, and
ongoing maintenance is enforced by an equal spirit of co-operation.
It's a better starting point than hanging with a sword of Damocles
over your head, wondering whether someone will patent something out
from under you.

I wish we could just get on with making the internet a better place.

> Lars

[1] The GPL is considered an acceptable license under the terms of
the Open Source Initiative:

https://opensource.org/licenses/alphabetical

[2] Of all the open source licenses out there, I happen to like the
LGPLv2 best. It is only viral if you make changes to the library
itself.


-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
