Hello!

On Thu, 13 Jun 2019 at 09:22, Dmitry Vyukov <dvyukov@xxxxxxxxxx> wrote:
> On Wed, Jun 12, 2019 at 11:13 PM Daniel Díaz <daniel.diaz@xxxxxxxxxx> wrote:
> > Maybe a precheck() on the tests in order to ensure that the needed
> > binaries are around?
>
> Hi Daniel,
>
> The Automated Testing effort:
> https://elinux.org/Automated_Testing
> is working on a standard for test metadata description which will
> capture required configs, hardware, runtime-dependencies, etc. I am
> not sure what's the current progress, though.

We just had the monthly call one hour ago. You should join our next
call! Details are in the Wiki link you shared.

> Documenting or doing a precheck is a useful first step. But ultimately
> this needs to be in machine-readable meta-data. So that it's possible
> to, say, enable as much tests as possible on a CI, rather then simply
> skip tests. A skipped test is better then a falsely failed test, but
> it still does not give any test coverage.

I agree. We discussed some of this in an impromptu microsummit at
Linaro Connect BKK19 a few months back, i.e. a way to encapsulate
tests and tests' definitions. Tim Bird is leading that effort; the
minutes of today's call will be sent to the mailing list, so keep an
eye on his update!

> > [...] we, as part of LKFT [1], run Kselftests with
> > Linux 4.4, 4.9, 4.14, 4.19, 5.1, Linus' mainline, and linux-next, on
> > arm, aarch64, x86, and x86-64, *very* often: Our test counter recently
> > exceeded 5 million!

I was wrong by an order of magnitude: it's currently at 51.7 million
tests.

> > We do not build our kernels with KASAN, though, so our test runs don't
> > exhibit that bug.
>
> But you are aware of KASAN, right? Do you have any plans to use it?

Not at the moment. We are redesigning our entire build and test
infrastructure, and this is something that we are considering for our
next iteration.

> If you are interested I can go into more details as we do lots of this
> on syzbot.
> Besides catching more bugs there is also an interesting
> possibility of systematically testing all error paths.

Definitely join us on the Automated Testing monthly call; the next one
is on July 11th. There are efforts on several fronts to test the
kernel, and we are all eager to contribute to improving the kernel
test infrastructure.

Greetings!

Daniel Díaz
daniel.diaz@xxxxxxxxxx
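P.S. To make the precheck() idea concrete: it could be as simple as a
shell helper that exits with the kselftest skip code (4, which the
kselftest harness reports as SKIP rather than FAIL) when a required
binary is missing. This is only a sketch, not part of any existing
test; the binary names passed to it are placeholders:

```shell
#!/bin/sh
# Sketch of a dependency precheck for a kselftest-style shell test.
# Exit code 4 is the conventional kselftest SKIP code.
KSFT_SKIP=4

precheck() {
    # Verify that each required binary is available in PATH;
    # skip the test (rather than falsely fail it) if one is missing.
    for bin in "$@"; do
        if ! command -v "$bin" >/dev/null 2>&1; then
            echo "SKIP: required binary '$bin' not found"
            exit "$KSFT_SKIP"
        fi
    done
}

# "sh" and "ls" stand in here for a test's real dependencies.
precheck sh ls
echo "all dependencies present; running the test"
```

As Dmitry notes, the longer-term answer is machine-readable metadata so
a CI can enable tests instead of skipping them, but a helper like this
is the cheap first step.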