Re: stable/LTS test report from KernelCI (2023-12-08)

On 11/12/2023 14:07, Greg KH wrote:
> On Mon, Dec 11, 2023 at 11:14:03AM +0100, Guillaume Tucker wrote:
>> On a related topic, it was once mentioned that since stable
>> releases occur once a week and they are used as the basis for
>> many distros and products, it would make sense to have
>> long-running tests after the release has been declared.  So we
>> could have say, 48h of testing with extended coverage from LTP,
>> fstests, benchmarks etc.  That would be a reply to the email with
>> the release tag, not the patch review.
> 
> What tests take longer than 48 hours?

Well, I'm not sure what you're actually asking here.  Strictly
speaking, some benchmarks and fuzzing can run for longer than
48h.

What I meant is that testing is always open-ended: we could run
tests literally forever on every kernel revision if we wanted to.
For maintainer trees, it's really useful to have a short feedback
loop and get useful results within, say, 1h.  For linux-next and
mainline, more testing can be done and results could take up to
4h to arrive.  Then for stable releases (not stable-rc), as they
happen basically once a week and are adopted as a base revision
by a large group of users, it would make sense to have a bigger
"testing budget" and allow up to maybe 48h of testing effort.  As
to how to make best use of this time, there are various ways to
look at it.
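
To make the tiers concrete, here is a purely illustrative sketch
in Python; the tree names and numbers are just the ones suggested
above, not actual KernelCI configuration:

    # Illustrative mapping of tree type to testing budget.
    # Names and values are placeholders, not KernelCI settings.
    testing_budget_hours = {
        "maintainer-tree": 1,    # short feedback loop
        "linux-next":      4,    # more coverage, slower turnaround
        "mainline":        4,
        "stable-release":  48,   # weekly, widely used as a base
    }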

I would suggest first running the tests that aren't usually run,
such as some less common fstests combinations as well as some LTP
and kselftests suites that take more than 30 min to complete.
Also, if there are any reproducers for the fixes that have been
applied to the stable branch, they could be run as true
regression testing to confirm these issues don't come back.  Then
some additional benchmarks and tests that are known to "fail"
occasionally could also be run to gather more stats.  This could
potentially show trends, for example a performance deviation over
several months on LTS, with finer granularity.
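
As a rough sketch of how such a 48h budget could be filled (again
purely illustrative: the suite names and durations below are
assumptions, not real KernelCI jobs or measured runtimes):

    # Hypothetical way to fill a 48h post-release budget.  Suite
    # names and durations are placeholders, not real job names.
    BUDGET_HOURS = 48

    extended_plan = [
        ("fstests: less common combinations",               12),
        ("LTP: suites longer than 30 min",                   8),
        ("kselftests: suites longer than 30 min",            6),
        ("reproducers for fixes in this stable release",     4),
        ("benchmarks / occasionally-failing tests (stats)", 10),
    ]

    used = 0
    for suite, hours in extended_plan:
        if used + hours > BUDGET_HOURS:
            break  # stop once the budget is exhausted
        print(f"schedule: {suite} (~{hours}h)")
        used += hours
    print(f"scheduled {used}h out of {BUDGET_HOURS}h")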

>> I've mentioned before the concept of finding "2nd derivatives" in
>> the test results, basically the first delta gives you all the
>> regressions and then you do a delta of the regressions to find
>> the new ones.  Maintainer trees would be typically comparing
>> against mainline or say, the -rc2 tag where they based their
>> branch.  In the case of stable, it would be between the stable-rc
>> branch being tested and the base stable branch with the last
>> tagged release.
> 
> Yes, that is going to be required for this to be useful.

OK thanks for confirming.
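
To spell out the "2nd derivative" idea with a minimal sketch (the
result format below is made up for the example; it is not
KernelCI's actual data model):

    # Hypothetical result sets: test name -> True (pass) / False (fail).

    def regressions(base, new):
        """Tests passing in `base` but failing in `new` (first delta)."""
        return {t for t, ok in new.items() if not ok and base.get(t, False)}

    # First delta: stable-rc results against the last tagged release.
    base_release = {"ltp.mmap01": True,  "fstests.generic-001": True}
    stable_rc    = {"ltp.mmap01": False, "fstests.generic-001": True}
    current = regressions(base_release, stable_rc)

    # Second delta: subtract regressions already known from the previous
    # run, leaving only the ones that are actually new.
    previously_reported = set()   # e.g. loaded from the last report
    new_regressions = current - previously_reported

    print("new regressions:", sorted(new_regressions))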

>> One last thing, I see there's a change in KernelCI now to
>> actually stop sending the current (suboptimal) automated reports
>> to the stable mailing list:
>>
>>   https://github.com/kernelci/kernelci-jenkins/pull/136
>>
>> Is this actually what people here want?
> 
> If these reports are currently for me, I'm just deleting them as they
> provide no value anymore.  So yes, let's stop this until we can get
> something that actually works for us please.

Right, I wasn't sure if anyone else was interested in them.  It
sounds like Sasha doesn't really need them either, although he
wrote on IRC that he wouldn't disable them until something better
was in place.  I would suggest at least sending an email to the
stable list proposing to stop these reports on a particular date,
ideally with some kind of plan for when new reports would be
available to replace them.  But if really nobody other than you
needs the current emails, then effectively nobody needs them and
we can of course stop them now.

Cheers,
Guillaume



