Re: bettering open source involvement

Hi,

On 2016-08-02, at 9:10, Dave Taht <dave.taht@xxxxxxxxx> wrote:
> On Mon, Aug 1, 2016 at 4:36 PM, Eggert, Lars <lars@xxxxxxxxxx> wrote:
>> On 2016-08-01, at 15:44, Livingood, Jason <Jason_Livingood@xxxxxxxxxxx> wrote:
>>> What if, in some future state, a given working group had a code repository and the working group was chartered not just with developing the standards but maintaining implementations of the code?
>> 
>> as an addition to developing specs, that might be useful, if the spec remains the canonical standards output.
>> 
>> "Go read the code" is not a useful answer if the code comes under a license (such as the GPL) that taints the developer. (This is a major reason why we are doing IETF specs for DCTCP and CUBIC - so that they can be implemented without needing to read Linux kernel code.)
> 
> Only 10 (?) years after full support for CUBIC entered the Linux
> kernel, and 3 after DCTCP.

The Linux community had chosen to actively ignore the IETF for about ten years. This only changed relatively recently.

And, FWIW, Hagen & friends' DCTCP implementation for Linux is based on the initial versions of our DCTCP I-D, and arguably wouldn't have happened without it.
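
As an aside, the I-D illustrates the point that a textual spec is enough to implement from: DCTCP's sender-side reaction reduces to two formulas. Here is a minimal Python sketch of just those formulas (a sketch derived from the draft's equations, not from the kernel code; the function names are mine, and g = 1/16 is the gain the draft recommends):

```python
G = 1 / 16  # estimation gain recommended by the DCTCP I-D

def update_alpha(alpha, marked_bytes, acked_bytes):
    """Update the running estimate of the fraction of ECN-marked bytes.

    Implements alpha <- (1 - g) * alpha + g * F, where F is the fraction
    of bytes marked in the last observation window.
    """
    frac = marked_bytes / acked_bytes
    return (1 - G) * alpha + G * frac

def reduce_cwnd(cwnd, alpha):
    """Scale the window cut by the extent of congestion, instead of
    halving as standard TCP would: cwnd <- cwnd * (1 - alpha / 2)."""
    return cwnd * (1 - alpha / 2)
```

With alpha near 1 (heavy marking) this behaves like a standard halving; with alpha near 0 the window is barely reduced - which is the whole idea of DCTCP.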

CUBIC has of course existed in independent implementations before, but it is unclear whether the BSD-licensed ones were actually written based only on Injong's paper.
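
Similarly, Injong's paper (and now the CUBIC I-D) gives the complete window growth function, which by itself is enough for an independent, BSD-licensable implementation. A minimal sketch, using the I-D's convention (beta is the fraction of the window kept after a loss) and its constants - an illustration, not any shipping implementation:

```python
C = 0.4     # cubic scaling constant (segments / sec^3), per the CUBIC I-D
BETA = 0.7  # fraction of the window retained after a loss event

def cubic_window(t, w_max):
    """Congestion window (in segments) t seconds after a loss event.

    w_max is the window size just before the last reduction. K is the
    time at which the window grows back to w_max absent further losses:
    W(t) = C * (t - K)^3 + w_max, with K = cbrt(w_max * (1 - beta) / C).
    """
    k = (w_max * (1 - BETA) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max
```

At t = 0 this yields beta * w_max (the post-loss window), and at t = K it returns to w_max - the concave-then-convex growth that defines CUBIC.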

> If you define the efforts of this standards body as one to produce BSD
> licensed code (which is basically the case), it will continue to lag
> behind the bleeding edge and continue to become more and more
> irrelevant.

I guess we're getting on our soap boxes at this point? :-)

But I don't define "the efforts of this standards body" in this way. I remain convinced that textual specs are required. Code is a nice addition, but it is really only useful if it can be rather freely used - which GPL'ed code can't be.

> It's not just the deployed code in kernels that is a problem, it is
> also that the best of the tools available to prototype new network
> code are GPL'd. NS3, for example, is GPL'd. The routing protocols
> incorporated in BIRD and Quagga are GPL'd. BIND is BSD-licensed, but
> Nominum's is proprietary and dnsmasq is GPL'd.
> 
> There is increasingly no place to design, develop, and test new stuff
> without starting from a gpl base.

I agree that this is a problem. But we can't all start to use GPL for everything.

> Worse, what happens here at ietf without use of these tools, is that
> we end up with non-open-source code's experiments and results being
> presented, without any means for an independent experimenter to
> verify, reproduce, or extend.

That's a stretch. The alternative to GPL is not closed source. There are other, friendlier OSS licenses around.

> I think it would do a lot of semantic good if the IETF would stop
> referring to "open source"[1] and instead always refer directly to
> the licenses allowed for the code it works on. There are certainly
> new areas of interest, like npv, etc., that are proceeding with more
> vendor-friendly code licensing schemes, although I am dubious about
> the performance benefits of moving all this stuff into userspace,
> particularly when a seeming primary goal is to avoid making free
> software, rather than producing a good, clean, correct engineering
> solution.
> 
> It has been my hope that the Alice decision on patents (80% of
> disputed software patents being invalidated), the rise of
> organizations offering patent-pool protections like the Open
> Invention Network, and the finding - I think (IANAL) - in Google v.
> Oracle that APIs cannot be copyrighted together mean that a
> developer can no longer be polluted merely by looking at GPL'd code
> once in a while. Because we do.

As much as I want to agree, if you work for a commercial entity, the risk is just too great (cf. the GPL clause regarding implicit licenses to patents).

> The actual implementations for anything else will tend to vary so
> much due to API differences, and the expressible logic in the
> algorithms themselves is generally so simple, that - particularly
> when the authors of the code have presented it for standardization,
> under any license - the exposure to further risk is minimized.

Sure. But the risk lies in incorporating code that may be GPL-tainted into non-GPL'ed code bases. In other words, it's not the code itself that carries the risk; the risk falls on the codebase it is incorporated into.

> There are powerful advantages to the GPL (and LGPL[2]) over
> "standardization". Notably there is an implicit patent grant, and
> ongoing maintenance is enforced by an equal spirit of co-operation.
> It's a better starting point than to hang with a sword of Damocles
> over your head wondering if someone will patent something out from
> under you.

That's certainly one viewpoint.

> I wish we could just get on with making the internet a better place.

Sorry, but I really don't see how this discussion isn't trying to help with exactly that?

Lars
