On Tue, 2024-01-16 at 10:34 -0800, Jakub Kicinski wrote:
> On Tue, 16 Jan 2024 18:40:49 +0100 Paolo Abeni wrote:
> > On Tue, 2024-01-16 at 07:43 -0800, Jakub Kicinski wrote:
> > > netdevsim tests aren't very well integrated with kselftest,
> > > which has its advantages and disadvantages.
> >
> > Out of sheer ignorance I don't see the advantage?!?
> >
> > > But regardless
> > > of the intended integration - a config file to know what kernel
> > > to build is very useful, add one.
> >
> > With a complete integration we could more easily ask kbuild to
> > automatically generate a kernel config suitable for testing; what
> > about completing such integration?
>
> My bad, I didn't have the right words at my fingertips so I deleted
> the explanation of advantages.
>
> make run_tests doesn't give us the ability to inject logic between
> each test, AFAIU. The runner for netdevsim I typed up checks after
> each test whether the VM has any crashes or things got otherwise
> out of whack. And if so kills the VM and starts a new one to run
> the next test. For make run_tests we can still more or less zero
> in on which test caused an oops or crash, but the next test will
> try to keep going.

I see.

> Even if we force kill it after we see a crash
> I didn't see in the docs how to continue testing from a specific
> point.

I think something like the following should do:

cd tools/testing/selftests
make TARGETS="net drivers/net/bonding <...full relevant targets list>" O=<kst_dir> install
cd <kst_dir>
ARGS=""
for t in $(./run_kselftest.sh -l | sed -n '/<test name>/,$p'); do
	ARGS="$ARGS -t $t"
done
./run_kselftest.sh $ARGS # run all tests after <test name>

Probably it would be nice to add to the kselftest runner the ability
to check for a kernel oops after each test and eventually stop.

> So all in all, yeah, uniformity is good, the hacky approach kinda
> works. Converting netdevsim to make run_tests is not a priority..
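Until the runner grows that ability, a rough sketch of the idea is a wrapper that runs each test individually and bails out once the kernel log shows a crash marker. This is only a sketch of the proposal above, not an existing kselftest feature; the crash-marker regex and the use of dmesg are my assumptions:

```shell
#!/bin/sh
# Hypothetical wrapper (not part of kselftest): run each test on its
# own and stop at the first sign of a kernel oops in the log.
# Assumes run_kselftest.sh is in the current directory and that
# grepping dmesg for common crash markers is a good-enough check.

CRASH_RE='Oops|BUG:|general protection fault'

for t in $(./run_kselftest.sh -l); do
	./run_kselftest.sh -t "$t"
	# Stop here instead of letting later tests run on a wedged kernel.
	if dmesg | grep -qE "$CRASH_RE"; then
		echo "kernel crash detected after $t, stopping" >&2
		exit 1
	fi
done
```

A real implementation would also want to snapshot the log before each test (e.g. dmesg --clear, or tracking the last timestamp) so a crash from an earlier test isn't re-detected on every iteration.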
I agree, but I will also put all the above possible improvements in my
wishlist ;)

Cheers,

Paolo