Re: Linux backports CII badge and run time testing

On Sat, May 7, 2016 at 4:13 AM, Hauke Mehrtens <hauke@xxxxxxxxxx> wrote:
> Hi Luis,
>
> Did you apply for this? What are the benefits of this?

Yes -- the benefit is that it puts the project under honest, heavy
scrutiny against the best practices advised by the Core Infrastructure
Initiative (CII). That sounds vague, but the skinny is: it helps
reduce attack surfaces in FOSS. To understand the CII's intentions one
has to understand how it got spawned... The Heartbleed bug was one
trigger that made us start questioning -- WTF!? How can we do better?
The Best Practices are an empirical response led through a serious
academic evaluation. I applied for this as a proactive measure to see
where we stand as a project. IMHO every FOSS project should apply and
strive to meet the criteria for the badge. If you are really honest
about it and you earn the badge, the community is in a better place.

> On 05/05/2016 12:34 AM, Luis R. Rodriguez wrote:
>> As per the Core Infrastructure Initiative guidelines we now meet the
>> requirements for a badge; the details of the submission are here:
>>
>> https://bestpractices.coreinfrastructure.org/projects/111
>>
>> A lot of it just required updating our documentation to enable folks
>> to report security issues. If there are things that need to be
>> adjusted please let me know. Much of this follows the Linux kernel
>> submission:
>>
>> https://bestpractices.coreinfrastructure.org/projects/34
>>
>> Things we need to improve on, though, are automated tests specific to
>> backports against a series of kernels, and also providing a bit of a
>> description when we make new releases. I realize that is hard, but it's
>> also hard for Linux; we have no reason not to be able to do that as
>> well.
>
> Doing predictable releases costs some constant effort. I do not know if
> we have the resources to do predictable releases.

The same can be said about Linux, but hey, 0-day came around :D

Now granted, for us it's different, as we also have an array of
kernels... so it's Linux times the number of kernels we support that
needs testing. But as I noted to Jouni, we really only need certain
sanity tests, and with time perhaps we can zero in further.
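For those sanity tests, even a dumb loop that builds backports against
whatever kernel trees are lying around would be a start. A rough
sketch; the tree layout, the kernel version list, and the
defconfig-wifi target here are assumptions, adjust to taste:

```shell
#!/bin/sh
# Hypothetical sanity-build loop over a set of kernel trees.
# KLIB / KLIB_BUILD are the usual backports make variables; the
# defconfig-wifi target and the $KLIB_BASE/<version> layout are
# assumptions for the sake of the sketch.
KERNELS="${KERNELS:-4.4 4.5 4.6}"
KLIB_BASE="${KLIB_BASE:-$HOME/kernels}"

results=""
for k in $KERNELS; do
	# Skip kernels we don't have a tree for, rather than failing.
	if [ ! -d "$KLIB_BASE/$k" ]; then
		results="$results SKIP:$k"
		continue
	fi
	if make defconfig-wifi KLIB="$KLIB_BASE/$k" \
		KLIB_BUILD="$KLIB_BASE/$k" >/dev/null 2>&1 && \
	   make KLIB="$KLIB_BASE/$k" \
		KLIB_BUILD="$KLIB_BASE/$k" >/dev/null 2>&1; then
		results="$results PASS:$k"
	else
		results="$results FAIL:$k"
	fi
done
echo "$results"
```

A matrix like that is trivially parallelizable per kernel later, which
is where zeroing in would come in.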

> Security support could also be a problem because then we have to react
> pretty fast when someone sees a problem in one of the drivers we ship.

You're right, but security-wise the main attack surface really is
Linux, so a security issue that *is* part of Linux belongs upstream
in Linux, not in backports. For instance, just as Linus has laid down
the law that stable fixes must go through him first before landing in
any stable kernel, I think we should also require security fixes to
always go upstream, and never carry any delta ourselves; the best we
can do is wait for upstream integration into a new release of Linux,
or a new stable release.

There is still a small gap in terms of security that we should cover,
though, and that is the roughly 1-2% of actual backport code with
which we mend Linux. If a security issue lies in there, *we* should
have a policy for letting people report it privately and for us to
fix it in a timely manner. That's some room for improvement. Given
that the attack surface is about 1-2% of the code, and based on an
empirical evaluation of our bugs on the kernel.org bugzilla, I don't
think this is a huge burden -- correct me if I'm wrong. Thoughts?

That is to say: I don't see this requiring much effort, especially if
we keep trimming older kernels as we did last time.

> Otherwise fixing the security problems should not be a big deal, as I
> assume that most security problems will already be fixed in the
> upstream Linux kernel.

Indeed.

>> As for run-time testing, we know folks out there in the industry
>> already use backports and do their own run-time tests against drivers,
>> and this may be automated; we, however, need something more, at the
>> very least a boot.
>
> Build testing and testing with simulated drivers like mac80211_hwsim
> should be doable, but testing all the device drivers which serve real
> hardware is nearly impossible.

Agreed.
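And for the simulated-driver side, even a tiny mac80211_hwsim smoke
test buys us something: load the module, count the radios, unload. A
sketch, with guards so it degrades to a skip on boxes without root or
without the module available (the radio count is arbitrary):

```shell
#!/bin/sh
# Sketch of a mac80211_hwsim smoke test: load, count radios, unload.
RADIOS=2

hwsim_sanity() {
	# Degrade gracefully where we cannot actually load modules.
	if [ "$(id -u)" -ne 0 ] || \
	   ! modinfo mac80211_hwsim >/dev/null 2>&1; then
		echo "SKIP: need root and a mac80211_hwsim module"
		return 0
	fi
	modprobe mac80211_hwsim radios="$RADIOS" || {
		echo "FAIL: modprobe mac80211_hwsim"
		return 1
	}
	# Each simulated radio shows up under /sys/class/ieee80211.
	n=$(ls /sys/class/ieee80211 2>/dev/null | wc -l)
	modprobe -r mac80211_hwsim
	if [ "$n" -ge "$RADIOS" ]; then
		echo "PASS: $n radios came up"
	else
		echo "FAIL: only $n radios came up"
	fi
}

result=$(hwsim_sanity)
echo "$result"
```

From there the usual hostapd/wpa_supplicant hwsim tests could layer on
top, but load/count/unload alone already catches gross regressions.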

>> Looking for something minimal to start off, how about this:
>>
>> https://github.com/michal42/qemu-boot-test
>
> Yes, it should be doable to have a qemu boot test which also checks
> that loading and unloading the modules works.

Cool!
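Something along these lines is what I had in mind: boot a kernel under
qemu, have the init payload in the initramfs modprobe and rmmod each
backports module, and grep the console log for a verdict. A rough
sketch in the spirit of qemu-boot-test; the image paths and the
BOOT-TEST-OK marker are placeholders I made up here:

```shell
#!/bin/sh
# Sketch of a qemu boot + module load/unload check. KERNEL and INITRD
# are placeholder paths; the init payload inside the initramfs is
# expected to modprobe each backports module, rmmod it, and print
# BOOT-TEST-OK on success.
KERNEL="${KERNEL:-./bzImage}"
INITRD="${INITRD:-./initramfs.cpio.gz}"
LOG=$(mktemp)

boot_test() {
	if ! command -v qemu-system-x86_64 >/dev/null 2>&1; then
		echo "SKIP: qemu-system-x86_64 not installed"
		return 0
	fi
	if [ ! -f "$KERNEL" ] || [ ! -f "$INITRD" ]; then
		echo "SKIP: kernel or initramfs image missing"
		return 0
	fi
	# panic=-1 reboots on panic, -no-reboot turns that into an exit,
	# so a crashed guest cannot hang the test; timeout is a backstop.
	timeout 120 qemu-system-x86_64 -nographic -no-reboot \
		-kernel "$KERNEL" -initrd "$INITRD" \
		-append "console=ttyS0 panic=-1" >"$LOG" 2>&1
	if grep -q "BOOT-TEST-OK" "$LOG"; then
		echo "PASS: boot and module load/unload OK"
	else
		echo "FAIL: see $LOG"
	fi
}

verdict=$(boot_test)
echo "$verdict"
```

Run once per kernel image in the matrix and we have the minimal
"at the very least a boot" coverage mentioned above.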

  Luis
--