On Sun, Oct 30, 2022 at 7:05 AM David Gow <davidgow@xxxxxxxxxx> wrote:
>
> On Sat, Oct 29, 2022 at 5:03 AM Daniel Latypov <dlatypov@xxxxxxxxxx> wrote:
> >
> > E.g. all the hw_breakpoint tests are failing right now.
> > So if I run `kunit.py run --alltests --arch=x86_64`, then I see
> > > Testing complete. Ran 408 tests: passed: 392, failed: 9, skipped: 7
> >
> > Seeing which 9 tests failed out of the hundreds is annoying.
> > If my terminal doesn't have scrollback support, I have to resort to
> > looking at `.kunit/test.log` for the `not ok` lines.
> >
> > Teach kunit.py to print a summarized list of failures if the # of tests
> > reaches an arbitrary threshold (>=100 tests).
> >
> > To try and keep the output from being too long/noisy, this new logic
> > a) just reports "parent_test failed" if every child test failed
> > b) won't print anything if there are >10 failures (also arbitrary).
> >
> > With this patch, we get an extra line of output showing:
> > > Testing complete. Ran 408 tests: passed: 392, failed: 9, skipped: 7
> > > Failures: hw_breakpoint
> >
> > This also works with parameterized tests, e.g. if I add a fake failure:
> > > Failures: kcsan.test_atomic_builtins_missing_barrier.threads=6
> >
> > Note: we didn't have enough tests for this to be a problem before.
> > But with commit 980ac3ad0512 ("kunit: tool: rename all_test_uml.config,
> > use it for --alltests"), --alltests works and thus running >100 tests
> > will probably become more common.
> >
> > Signed-off-by: Daniel Latypov <dlatypov@xxxxxxxxxx>
> > ---
>
> I like it! I do think we'll ultimately want some more options for the
> main results display as well (e.g., only display failed tests, limit
> the depth of nested results, etc), but this would be useful even then,
> as the number of tests displayed could still be large. (And you might
> not know what failures you'd be looking for in advance.)
> Reviewed-by: David Gow <davidgow@xxxxxxxxxx>

Agreed, there's a lot of room to play around with the main output.
The hope here is that this is enough to tide us over (usability-wise)
until we get around to that.

E.g. in the future, it might make sense to only print suite names by
default. If subtests (test cases and individual parameters) fail, we
could print those in expanded detail.
But there are obvious tradeoffs:
* the real-time output is nice, esp. since some test cases are slower
  than others
* I think most people are only running 1-2 suites at a time right now

Another thing we could do is optionally use \r so the in-progress
output only takes up the last few lines? E.g. at t=1:
  Running suite: example
  [RUNNING] example_simple_test
then at t=2, use \r to update the test case line:
  Running suite: example
  [PASSED] example_simple_test
Then we could print out the results of interest in more detail at the end.
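For readers following along, here is a minimal sketch (hypothetical names, not the actual kunit.py code) of the summarization rules from the commit message: only summarize runs of >=100 tests, collapse a parent's name when every child failed, and print nothing once there are more than 10 failures:

```python
# Sketch of the failure summarization described in the patch.
# All names here are illustrative; kunit.py's real parser types differ.

MIN_TESTS_FOR_SUMMARY = 100  # arbitrary threshold from the commit message
MAX_FAILURES_TO_LIST = 10    # also arbitrary

class Test:
    def __init__(self, name, passed=True, children=None):
        self.name = name
        self.passed = passed
        self.children = children or []

def count_tests(test):
    """Count leaf test cases under this node."""
    if not test.children:
        return 1
    return sum(count_tests(c) for c in test.children)

def collect_failures(test, prefix=''):
    """Yield dotted names of failures, collapsing fully-failed parents."""
    name = prefix + test.name
    if test.children and all(not c.passed for c in test.children):
        yield name  # just report "parent_test failed"
        return
    if not test.children:
        if not test.passed:
            yield name
        return
    for child in test.children:
        yield from collect_failures(child, prefix=name + '.')

def summarize(root):
    """Return a 'Failures: ...' line, or None if a summary isn't warranted."""
    if count_tests(root) < MIN_TESTS_FOR_SUMMARY:
        return None
    failures = [f for child in root.children for f in collect_failures(child)]
    if not failures or len(failures) > MAX_FAILURES_TO_LIST:
        return None
    return 'Failures: ' + ', '.join(failures)
```

With a fully-failed `hw_breakpoint` suite in a 100+ test run, `summarize()` produces the single `Failures: hw_breakpoint` line from the commit message; a partially-failed parameterized suite instead reports the full dotted name of each failing parameter.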
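As a toy illustration of the \r idea above (assumed function names, not kunit.py code), a carriage return moves the cursor back to column 0 without emitting a newline, so the in-progress test-case line can be overwritten in place:

```python
# Demonstration of redrawing the current test-case line with '\r'.
# Names like report_case/run_suite are made up for this sketch.
import sys

def report_case(name, status='RUNNING', stream=sys.stdout):
    """Redraw the in-progress line; '\r' rewinds to column 0, no newline."""
    line = '[%s] %s' % (status, name)
    # Pad to clear leftovers from a longer previous line, then rewind.
    stream.write('\r' + line.ljust(60))
    stream.flush()

def run_suite(suite, cases, stream=sys.stdout):
    stream.write('Running suite: %s\n' % suite)
    for name, result in cases:
        report_case(name, stream=stream)           # t=1: shown as running
        # ... a real runner would execute the test here ...
        report_case(name, result, stream=stream)   # t=2: overwritten with result
    stream.write('\n')

run_suite('example', [('example_simple_test', 'PASSED'),
                      ('example_skip_test', 'SKIPPED')])
```

On a terminal, each case occupies one line that flips from [RUNNING] to its final status, so only the suite header and the last few lines stay on screen; the detailed results of interest could then be printed at the end.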