Re: [RFC] Test catalog template

Hi Don,

Thanks for putting this together: the discussion at Plumbers was very useful.

On Tue, 15 Oct 2024 at 04:33, Donald Zickus <dzickus@xxxxxxxxxx> wrote:
>
> Hi,
>
> At Linux Plumbers, a few dozen of us gathered together to discuss how
> to expose what tests subsystem maintainers would like to run for every
> patch submitted or when CI runs tests.  We agreed on a mock-up of a
> yaml template to start gathering info.  The yaml file could be
> temporarily stored on kernelci.org until a more permanent home could
> be found.  Attached is a template to start the conversation.
>

I think that there are two (maybe three) separate problems here:
1. What tests do we want to run (for a given patch/subsystem/environment/etc)?
2. How do we describe those tests in such a way that running them can
be automated?
3. (Exactly what constitutes a 'test'? A single 'test', a whole suite
of tests, a test framework/tool? What about the environment: is, e.g.,
KUnit on UML different from KUnit on qemu-x86_64 different from KUnit
on qemu-arm64?)

My gut feeling here is that (1) is technically quite easy: worst-case
we just make every MAINTAINERS entry link to a document describing
what tests should be run. Actually getting people to write these
documents and then run the tests, though, is very difficult.

(2) is the area where I think this will be most useful. We have some
arbitrary (probably .yaml) file which describes a series of tests to
run in enough detail that we can automate it. My ideal outcome here
would be to have a 'kunit.yaml' file which I can pass to a tool
(either locally or automatically on some CI system) which will run all
of the checks I'd run on an incoming patch. This would include
everything from checkpatch, to test builds, to running KUnit tests and
other test scripts. Ideally, it'd even run these across a bunch of
different environments (architectures, emulators, hardware, etc) to
catch issues which only show up on big-endian or 32-bit machines.

If this means I can publish that yaml file somewhere, and not only
give contributors a way to check that those tests pass on their own
machine before sending a patch out, but also have CI systems
automatically run them (so the results are ready waiting before I
manually review the patch), that'd be ideal.
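To make that concrete, here's a rough sketch of what such a file might look like. This is purely illustrative: the field names ('checks', 'environments', etc.) are invented here, not part of any agreed schema:

```yaml
# Hypothetical kunit.yaml sketch -- field names are invented for illustration.
checks:
  - name: checkpatch
    cmd: scripts/checkpatch.pl
    param: --git HEAD~1..HEAD
  - name: build
    cmd: make
    param: defconfig all
  - name: kunit
    cmd: tools/testing/kunit/kunit.py
    param: run
    # Run across several environments to catch endianness/word-size issues.
    environments: [um, qemu-x86_64, qemu-arm64]
```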

> Longer story.
>
> The current problem is that CI systems are not unanimous about which
> tests they run on submitted patches or git branches.  This makes it
> difficult to figure out why a test failed or how to reproduce a failure.
> Further, it isn't always clear what tests a normal contributor should
> run before posting patches.
>
> It has been long communicated that LTP, xfstests and/or kselftests
> should be the tests to run.  However, not all maintainers use those
> tests for their subsystems.  I am hoping to either capture those
> tests or find ways to convince them to add their tests to the
> preferred locations.
>
> The goal is, for a given subsystem (defined in MAINTAINERS), to define
> a set of tests that should be run for any contributions to that
> subsystem.  The hope is that the collective CI results can be triaged
> collectively (because they are related) and even have the numerous
> flakes waived collectively (same reason), improving the ability to
> find and debug new test failures.  Because the tests and process are
> known, having a human help debug any failures becomes easier.
>
> The plan is to put together a minimal yaml template that gets us going
> (even if it is not optimized yet) and aim for about a dozen or so
> subsystems.  At that point we should have enough feedback to promote
> this more seriously and talk optimizations.
>
> Feedback encouraged.
>
> Cheers,
> Don
>
> ---
> # List of tests by subsystem

I think we should split this up into several files, partly to avoid
merge conflicts, partly to make it easy to maintain custom collections
of tests separately.

For example, fs.yaml could contain entries for both xfstests and fs
KUnit and selftests.

It's also probably going to be necessary to have separate sets of
tests for different use-cases. For example, there might be a smaller,
quicker set of tests to run on every patch, and a much longer, more
expensive set which only runs every other day. So I don't think
there'll even be a 1:1 mapping between 'test collections' (files) and
subsystems. But an automated way of running "this collection of tests"
would be very useful, particularly if it's more user-friendly than
just writing a shell script (e.g., having nicely formatted output,
being able to run things in parallel or remotely, etc).
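As a sketch of how simple the core of such a runner could be, here's a minimal, hypothetical implementation (assuming the yaml has already been parsed into a dict; the schema and the function name are invented for illustration, and real tooling would want parallelism, KTAP parsing, nicer output, etc.):

```python
import subprocess

def run_collection(collection):
    """Run every test entry in a parsed collection; return {name: passed}.

    'collection' is assumed to be the parsed form of a catalog file:
    a mapping of subsystem name -> entry, where each entry has a 'test'
    list of {'path', 'cmd', 'param'} dicts, as in the proposed template.
    """
    results = {}
    for name, entry in collection.items():
        passed = True
        for test in entry.get("test", []):
            cmd = [test["cmd"]] + test.get("param", "").split()
            # A test passes if its command exits with status 0.
            proc = subprocess.run(cmd, cwd=test.get("path", "."))
            passed = passed and proc.returncode == 0
        results[name] = passed
    return results
```

Even something this small would give contributors a uniform "run what the maintainer runs" entry point; the value is in agreeing on the schema, not in the runner itself.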

> #
> # Tests should adhere to KTAP definitions for results
> #
> # Description of section entries
> #
> #  maintainer:   test maintainer - name <email>
> #  list:         mailing list for discussion
> #  version:      stable version of the test
> #  dependency:   necessary distro package for testing
> #  test:
> #    path:       internal git path or url to fetch from
> #    cmd:        command to run; ability to run locally
> #    param:      additional param necessary to run test
> #  hardware:     hardware necessary for validation
> #
> # Subsystems (alphabetical)
>
> KUNIT TEST:

For KUnit, it'll be interesting to draw the distinction between KUnit
overall and individual KUnit suites. I'd lean towards having a separate
entry for each subsystem's KUnit tests (including one for KUnit's own
tests).

>   maintainer:
>     - name: name1
>       email: email1
>     - name: name2
>       email: email2
>   list:

How important is it to have these in the case where they're already in
the MAINTAINERS file? I can see it being important for tests which
live elsewhere, though eventually, I'd still prefer the subsystem
maintainer to take some responsibility for the tests run for their
subsystems.

>   version:

This field is probably unnecessary for test frameworks which live in
the kernel tree.

>   dependency:
>     - dep1
>     - dep2

If we want to automate this in any way, we're going to need to work
out a way of specifying these. Either we'd have to pick a distro's
package names, or have our own mapping.
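For instance, one hypothetical shape for such a mapping (field names invented; the Debian/Fedora package names shown are real):

```yaml
# Hypothetical: distro-agnostic dependency names, mapped to per-distro packages.
dependency:
  - name: python3
    debian: python3
    fedora: python3
  - name: libelf
    debian: libelf-dev
    fedora: elfutils-libelf-devel
```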

(A part of me really likes the idea of having a small list of "known"
dependencies: python, docker, etc, and trying to limit tests to using
those dependencies. Though there are plenty of useful tests with more
complicated dependencies, so that probably won't fly forever.)

>   test:
>     - path: tools/testing/kunit
>       cmd:
>       param:
>     - path:
>       cmd:
>       param:

Is 'path' here supposed to be the path to the test binary, the working
directory, etc?
Maybe there should be 'working_directory', 'cmd', 'args', and 'env'.
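Something like this, say (purely a sketch; none of these field names are agreed upon):

```yaml
# Hypothetical test entry with the split-out fields suggested above.
test:
  - working_directory: .
    cmd: tools/testing/kunit/kunit.py
    args: [run, kunit]
    env:
      MAKEFLAGS: "-j8"   # illustrative only
```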

>   hardware: none



For KUnit, I'd imagine having a kunit.yaml, with something like this,
including the KUnit tests in the 'kunit' and 'example' suites, and the
'kunit_tool_test.py' test script:

---
KUnit:
  maintainer:
    - name: David Gow
      email: davidgow@xxxxxxxxxx
    - name: Brendan Higgins
      email: brendan.higgins@xxxxxxxxx
  list: kunit-dev@xxxxxxxxxxxxxxxx
  dependency:
    - python3
  test:
    - path: .
      cmd: tools/testing/kunit/kunit.py
      param: run kunit
    - path: .
      cmd: tools/testing/kunit/kunit.py
      param: run example
  hardware: none
KUnit Tool:
  maintainer:
    - name: David Gow
      email: davidgow@xxxxxxxxxx
    - name: Brendan Higgins
      email: brendan.higgins@xxxxxxxxx
  list: kunit-dev@xxxxxxxxxxxxxxxx
  dependency:
    - python3
  test:
    - path: .
      cmd: tools/testing/kunit/kunit_tool_test.py
      param:
  hardware: none
---

Obviously there's still some redundancy there, and I've not actually
tried implementing something that could run it. It also lacks any
information about the environment. In practice, I have about 20
different kunit.py invocations which run the tests with different
configs and on different architectures. Though that might make sense
to keep in a separate file to only run if the simpler tests pass. And
equally, it'd be nice to have a 'common.yaml' file with basic patch
and build tests which apply to almost everything (checkpatch, make
defconfig, maybe even make allmodconfig, etc).
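That separate, more expensive collection could sketch out a few of those invocations like so (hypothetical file name and schema; the kunit.py flags are real):

```yaml
# Hypothetical kunit-extended.yaml: a few of the ~20 invocations mentioned above.
KUnit (other architectures):
  test:
    - path: .
      cmd: tools/testing/kunit/kunit.py
      param: run --arch=x86_64
    - path: .
      cmd: tools/testing/kunit/kunit.py
      param: run --arch=arm64 --cross_compile=aarch64-linux-gnu-
    - path: .
      cmd: tools/testing/kunit/kunit.py
      param: run --alltests
  hardware: none
```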

Cheers,
-- David
