> -----Original Message-----
> From: automated-testing@xxxxxxxxxxxxxxxxxxxxxx
> <automated-testing@xxxxxxxxxxxxxxxxxxxxxx> On Behalf Of Don Zickus
>
> Hi,
>
> At Linux Plumbers, a few dozen of us gathered to discuss how to
> expose what tests subsystem maintainers would like to run for every
> patch submitted or when CI runs tests.  We agreed on a mock-up of a
> yaml template to start gathering info.  The yaml file could be
> temporarily stored on kernelci.org until a more permanent home could
> be found.  Attached is a template to start the conversation.

Don,

I'm interested in this initiative.  Is discussion going to be on a
kernel mailing list, in this e-mail thread, or somewhere else?

See a few comments below.

> Longer story.
>
> The current problem is that CI systems are not unanimous about what
> tests they run on submitted patches or git branches.  This makes it
> difficult to figure out why a test failed or how to reproduce a
> failure.  Further, it isn't always clear what tests a normal
> contributor should run before posting patches.
>
> It has long been communicated that LTP, xfstests and/or kselftests
> are the tests to run.

Just saying "LTP" is not granular enough.  LTP has hundreds of
individual test programs, and it would be useful to specify the
individual tests from LTP that should be run per sub-system.

I was particularly intrigued by the presentation at Plumbers about
test coverage.  It would be nice to have data (or easily replicable
methods) for determining the code coverage of a test or set of tests,
to indicate what parts of the kernel are being missed and help drive
new test development.

> However, not all maintainers use those tests for their subsystems.
> I am hoping to either capture those tests or find ways to convince
> them to add their tests to the preferred locations.
>
> The goal is, for a given subsystem (defined in MAINTAINERS), to
> define a set of tests that should be run for any contributions to
> that subsystem.  The hope is that the collective CI results can be
> triaged collectively (because they are related) and even have the
> numerous flakes waived collectively (same reason), improving the
> ability to find and debug new test failures.  Because the tests and
> process are known, having a human help debug any failures becomes
> easier.
>
> The plan is to put together a minimal yaml template that gets us
> going (even if it is not optimized yet) and aim for about a dozen or
> so subsystems.  At that point we should have enough feedback to
> promote this more seriously and talk optimizations.

Sounds like a good place to start.  Do we have some candidate
sub-systems in mind?  Has anyone volunteered to lead the way?

>
> Feedback encouraged.
>
> Cheers,
> Don
>
> ---
> # List of tests by subsystem
> #
> # Tests should adhere to KTAP definitions for results
> #
> # Description of section entries
> #
> #   maintainer:  test maintainer - name <email>
> #   list:        mailing list for discussion
> #   version:     stable version of the test
> #   dependency:  necessary distro package(s) for testing
> #   test:
> #     path:      internal git path or url to fetch from
> #     cmd:       command to run; ability to run locally
> #     param:     additional param necessary to run test
> #   hardware:    hardware necessary for validation

Is this something new in MAINTAINERS, or is it a separate file?
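
Also, regarding the KTAP note above: for anyone who hasn't seen it,
the KTAP spec lives in the kernel tree at
Documentation/dev-tools/ktap.rst, and results look roughly like this
(a minimal sketch; the test names are made up):

  KTAP version 1
  1..2
  ok 1 test_frobnicator_init
  not ok 2 test_frobnicator_overflow

Agreeing on that output format is a big part of what would make
collective triage of results across CI systems feasible.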
> #
> # Subsystems (alphabetical)
>
> KUNIT TEST:
>   maintainer:
>     - name: name1
>       email: email1
>     - name: name2
>       email: email2
>   list:
>   version:
>   dependency:
>     - dep1
>     - dep2
>   test:
>     - path: tools/testing/kunit
>       cmd:
>       param:
>     - path:
>       cmd:
>       param:
>   hardware: none

Looks OK so far - it'd be nice to have a few concrete examples.
 -- Tim
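
P.S. On the "concrete examples" point, here is what a filled-in KUnit
entry might look like.  This is only a sketch: the maintainer name and
e-mail are invented, and the version and param values are illustrative
(the path and the kunit.py command are the real ones from the kernel
tree, and the list is the kselftest/KUnit list):

KUNIT TEST:
  maintainer:
    - name: Jane Developer
      email: jane@xxxxxxxxxxx
  list: linux-kselftest@xxxxxxxxxxxxxxx
  version: v6.6
  dependency:
    - python3
  test:
    - path: tools/testing/kunit
      cmd: ./tools/testing/kunit/kunit.py run
      param: --alltests
  hardware: none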