On 10/14/24 15:32, Donald Zickus wrote:
Hi,
At Linux Plumbers, a few dozen of us gathered to discuss how to
expose the tests subsystem maintainers would like run for every
submitted patch, or when CI runs tests. We agreed on a mock-up of a
YAML template to start gathering info. The YAML file could be
stored temporarily on kernelci.org until a more permanent home is
found. Attached is a template to start the conversation.
Longer story.
The current problem is that CI systems are not unanimous about which
tests they run on submitted patches or git branches. This makes it
difficult to figure out why a test failed or how to reproduce the
failure. Further, it isn't always clear which tests a normal
contributor should run before posting patches.
It has long been communicated that LTP, xfstests, and/or kselftests
should be the tests to run. However, not all maintainers use those
tests for their subsystems. I am hoping either to capture those
tests or to find ways to convince maintainers to add their tests to
the preferred locations.
The goal is, for a given subsystem (as defined in MAINTAINERS), to
define a set of tests that should be run for any contribution to that
subsystem. The hope is that the collective CI results can be triaged
collectively (because they are related) and that the numerous
flakes can even be waived collectively (for the same reason), improving
the ability to find and debug new test failures. Because the tests and
process are known, having a human help debug any failures becomes easier.
The plan is to put together a minimal YAML template that gets us going
(even if it is not optimized yet) and aim for about a dozen or so
subsystems. At that point we should have enough feedback to promote
this more seriously and talk optimizations.
Feedback encouraged.
Cheers,
Don
---
# List of tests by subsystem
#
# Tests should adhere to KTAP definitions for results
#
# Description of section entries
#
# maintainer: test maintainer - name <email>
# list: mailing list for discussion
# version: stable version of the test
# dependency: necessary distro package for testing
# test:
#   path: internal git path or URL to fetch from
#   cmd: command to run; ability to run locally
#   param: additional param necessary to run test
# hardware: hardware necessary for validation
#
# Subsystems (alphabetical)
KUNIT TEST:
  maintainer:
    - name: name1
      email: email1
    - name: name2
      email: email2
  list:
  version:
  dependency:
    - dep1
    - dep2
  test:
    - path: tools/testing/kunit
      cmd:
      param:
    - path:
      cmd:
      param:
  hardware: none
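As a rough sketch of how a CI system might consume such a file, here is a
minimal Python example. The dict literal stands in for the result of
parsing the YAML (e.g. with yaml.safe_load); the cmd/param values are
illustrative assumptions, not part of the agreed template:

```python
# Sketch: turn parsed template entries into shell command lines a CI
# runner would invoke. Field names follow the template above; the
# cmd/param values below are hypothetical examples.

subsystems = {
    "KUNIT TEST": {
        "maintainer": [{"name": "name1", "email": "email1"}],
        "dependency": ["dep1", "dep2"],
        "test": [
            {"path": "tools/testing/kunit",
             "cmd": "./kunit.py run",       # assumed example command
             "param": "--alltests"},        # assumed example param
        ],
        "hardware": "none",
    },
}

def commands_for(subsystem: str) -> list[str]:
    """Build the command lines to run for one subsystem's tests."""
    cmds = []
    for test in subsystems[subsystem].get("test", []):
        # Skip entries whose cmd was left blank in the template.
        parts = [p for p in (test.get("cmd"), test.get("param")) if p]
        if parts:
            cmds.append(" ".join(parts))
    return cmds

print(commands_for("KUNIT TEST"))
```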
Don,
thanks for initiating this! I have a few questions/suggestions:
I think the root element in a section (`KUNIT TEST` in your example) is
expected to be a container of multiple test definitions (so there will
be one for LTP, kselftest, etc.) -- can you confirm?
Assuming the above is correct and `test` is a container of multiple test
definitions, can we add more properties to each:
* name -- a unique name/id for each test
* description -- short description of the test
* arch -- applicable platform architectures
* runtime -- this is subjective, as it can differ between systems,
but maybe we can have some generic names like 'SHORT', 'MEDIUM',
'LONG', etc., and each system may scale the timeout locally?
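For concreteness, a test entry carrying those extra properties might look
like the fragment below. The field values are only a suggestion to
illustrate the shape, not part of the agreed template:

```yaml
test:
  - name: kunit-all                    # unique id for this test
    description: Run the full KUnit suite
    arch:
      - x86_64
      - arm64
    runtime: SHORT                     # each CI scales this to a local timeout
    path: tools/testing/kunit
    cmd: ./kunit.py run                # example command, for illustration
    param: --alltests
```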
I see you have a `Subsystems` entry in the comments section, but not in
the example. Do you expect it to be part of this file, or will there be
a file per subsystem?
Can we define what we mean by a `test`? For me, it is a group of one or
more individual test cases that can be initiated with a single command
line and is expected to run in a 'reasonable' time. Any other
thoughts?
Thanks!
Minas