Re: [RFC] Test catalog template

Hello,


---- On Fri, 18 Oct 2024 04:21:58 -0300 David Gow  wrote ---

 > Hi Don, 
 >  
 > Thanks for putting this together: the discussion at Plumbers was very useful. 
 >  
 > On Tue, 15 Oct 2024 at 04:33, Donald Zickus <dzickus@xxxxxxxxxx> wrote: 
 > > 
 > > Hi, 
 > > 
 > > At Linux Plumbers, a few dozen of us gathered together to discuss how 
 > > to expose what tests subsystem maintainers would like to run for every 
 > > patch submitted or when CI runs tests.  We agreed on a mock up of a 
 > > yaml template to start gathering info.  The yaml file could be 
 > > temporarily stored on kernelci.org until a more permanent home could 
 > > be found.  Attached is a template to start the conversation. 
 > > 
 >  
 > I think that there are two (maybe three) separate problems here: 
 > 1. What tests do we want to run (for a given patch/subsystem/environment/etc)? 
 > 2. How do we describe those tests in such a way that running them can 
 > be automated? 
 > 3. (Exactly what constitutes a 'test'? A single 'test', a whole suite 
 > of tests, a test framework/tool? What about the environment: is, e.g., 
 > KUnit on UML different from KUnit on qemu-x86_64 different from KUnit 
 > on qemu-arm64?) 
 >  
 > My gut feeling here is that (1) is technically quite easy: worst-case 
 > we just make every MAINTAINERS entry link to a document describing 
 > what tests should be run. Actually getting people to write these 
 > documents and then run the tests, though, is very difficult. 
 >  
 > (2) is the area where I think this will be most useful. We have some 
 > arbitrary (probably .yaml) file which describes a series of tests to 
 > run in enough detail that we can automate it. My ideal outcome here 
 > would be to have a 'kunit.yaml' file which I can pass to a tool 
 > (either locally or automatically on some CI system) which will run all 
 > of the checks I'd run on an incoming patch. This would include 
 > everything from checkpatch, to test builds, to running KUnit tests and 
 > other test scripts. Ideally, it'd even run these across a bunch of 
 > different environments (architectures, emulators, hardware, etc) to 
 > catch issues which only show up on big-endian or 32-bit machines. 
 >  
 > If this means I can publish that yaml file somewhere, and not only 
 > give contributors a way to check that those tests pass on their own 
 > machine before sending a patch out, but also have CI systems 
 > automatically run them (so the results are ready waiting before I 
 > manually review the patch), that'd be ideal. 

This thought makes sense to me. It will be very useful for CI systems to be
able to figure out which tests to run for a given set of folder/file changes.
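
As a concrete sketch of what such a per-subsystem spec could look like (every field name here is purely illustrative, not a settled format from the Plumbers discussion), something along these lines would be enough for a CI system to match changed paths to tests:

```yaml
# Hypothetical test-catalog entry; field names and layout are invented
# for illustration only.
subsystem: kunit
maintainers:
  - davidgow@xxxxxxxxxx
paths:                      # changes under these paths trigger the tests below
  - lib/kunit/
  - include/kunit/
tests:
  - name: checkpatch
    command: scripts/checkpatch.pl --strict
  - name: kunit-all
    command: ./tools/testing/kunit/kunit.py run --alltests
    environments: [um, qemu-x86_64, qemu-arm64]
```

The `paths` list is what would let a CI system answer "which tests for this diff?" without any subsystem-specific knowledge.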

However, I also feel that a key part of the work is actually convincing people
to write (and maintain!) these specs. Only through CI automation may we be able
to show the value of this task, prompting maintainers to keep their files
updated; otherwise we are going to create a sea of specs that will become
outdated pretty quickly.

In the new KernelCI Maestro, we started with only a handful of tests, so we could
actually look at the results, find regressions, and report them. Maybe we could
start the same way here with a few tests, e.g. kselftest-dt and kselftest-acpi. It
should be relatively simple to build something that decides to test driver
probing based on which files are being changed.

There first needs to be a sort of cultural shift in how we track tests. Just
documenting our current tests may not take us far, but starting small with a
comprehensive process, from test spec to CI automation to clear ways of
delivering results, is the game changer.

Then there are other perspectives that cross this. For example, many of the LTP
and kselftest tests will just fail, but there is no accumulated knowledge on what
the result of each test means. So understanding what is expected to pass/fail on
each platform is a sort of dependency of this extensive documentation effort we
have set ourselves.
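
One way to accumulate that knowledge would be a per-platform baseline that results get compared against, so only deviations need human attention. A rough sketch (the baseline entries are invented; real data would come from a results database):

```python
# Sketch of per-platform expectation tracking. The baseline below is
# hypothetical example data, not real LTP results.
KNOWN_BASELINE = {
    # (platform, test) -> expected result
    ("qemu-arm64", "ltp.fcntl36"): "fail",   # hypothetical known failure
    ("qemu-arm64", "ltp.abort01"): "pass",
}

def classify(platform, test, result):
    """Compare one result against the recorded baseline for that platform.

    'regression' means a test that used to pass now fails; 'known-fail'
    means the failure is already documented and needs no new report.
    """
    expected = KNOWN_BASELINE.get((platform, test))
    if expected is None:
        return "new-test"
    if result == expected:
        return "known-" + result
    return "regression" if result == "fail" else "fixed"
```

With something like this, a CI report could separate the handful of real regressions from the long tail of already-understood failures.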

Best,

- Gus
