> On Apr 14, 2022, at 3:49 PM, Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Wed, Apr 13, 2022 at 10:05 PM Mykola Lysenko <mykolal@xxxxxx> wrote:
>>
>> This is a pre-req to add separate logging for each subtest.
>>
>> Move all the mutable test data to the test_result struct.
>> Move per-test init/de-init into the run_one_test function.
>> Consolidate data aggregation and final log output in the
>> calculate_and_print_summary function.
>> As a side effect, this patch fixes double counting of errors
>> for subtests and possible duplicate output of the subtest log
>> on failures.
>>
>> Also, add a prog_tests_framework.c test to verify some of the
>> counting logic.
>>
>> As part of verification, confirmed that the number of reported
>> tests is the same before and after the change for both parallel
>> and sequential test execution.
>>
>> Signed-off-by: Mykola Lysenko <mykolal@xxxxxx>
>> ---
>
> The consolidation of the per-test logic in one place is great, thanks
> for tackling this! But I tried this locally and understood what you
> meant about completely buffered output. It really sucks and is a big
> step back, I think :(

Thanks Andrii! Your points make sense; in the next revision I will keep
this behavior untouched.

>
> Running sudo ./test_progs -j I see no output for a long while and only
> then get the entire output at the end:
>
> #239 xdp_noinline:OK
> #240 xdp_perf:OK
> #241 xdpwall:OK
>
> All error logs:
>
> #58 fexit_sleep:FAIL
> test_fexit_sleep:PASS:fexit_skel_load 0 nsec
> test_fexit_sleep:PASS:fexit_attach 0 nsec
> test_fexit_sleep:PASS:clone 0 nsec
> test_fexit_sleep:PASS:fexit_cnt 0 nsec
> test_fexit_sleep:PASS:waitpid 0 nsec
> test_fexit_sleep:PASS:exitstatus 0 nsec
> test_fexit_sleep:FAIL:fexit_cnt 2
> Summary: 240/1156 PASSED, 34 SKIPPED, 1 FAILED
>
> First, just not seeing the progress made me wonder for a good minute
> or more whether something was stuck in a deadlock, which is
> anxiety-inducing and I'd rather avoid :)
>
> Second, as you can see, fexit_sleep actually failed (it does sometimes
> in parallel mode). But I saw this only at the very end, while normally
> I'd notice it as soon as it completes. In this case I know fexit_sleep
> can fail and I'd ignore it, but if some subtest suddenly breaks, I
> don't wait for all the tests to finish, I ctrl-C and go investigate.
> Now I can't do that.
>
> How much of a problem is it to preserve the old behavior of streaming
> test results as they come, but consolidate the duplicate logic in one
> place?
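For illustration, one way to do that (an editor's sketch, not code from
this series): keep the consolidated per-test init/de-init in
run_one_test, but emit each test's status line from that same spot as
soon as the test finishes. The struct layouts and the
stream_test_result() helper below are hypothetical stand-ins for the
real test_progs.c interfaces, which differ in detail:

#include <stdio.h>

struct test_result {
	int error_cnt;
	int sub_succ_cnt;
	int skip_cnt;
};

struct prog_test_def {
	int test_num;
	const char *test_name;
	void (*run_test)(void);
};

/* Emit one test's status line as soon as the test finishes,
 * instead of buffering everything until the final summary.
 */
static void stream_test_result(const struct prog_test_def *test,
			       const struct test_result *res)
{
	const char *status = res->error_cnt ? "FAIL" :
			     res->skip_cnt ? "SKIP" : "OK";

	printf("#%d %s:%s\n", test->test_num, test->test_name, status);
	fflush(stdout); /* keep progress visible under ./test_progs -j */
}

static void run_one_test(struct prog_test_def *test, struct test_result *res)
{
	/* per-test init stays consolidated in one place... */
	res->error_cnt = res->sub_succ_cnt = res->skip_cnt = 0;

	test->run_test();

	/* ...and so does per-test de-init, but the result still
	 * streams out immediately rather than at the very end
	 */
	stream_test_result(test, res);
}

static void dummy_test(void)
{
	/* the actual test body would run here */
}

int main(void)
{
	struct prog_test_def test = {
		.test_num = 1,
		.test_name = "dummy",
		.run_test = dummy_test,
	};
	struct test_result res;

	run_one_test(&test, &res);
	return 0;
}

A worker process could hand the same status line back to the main
process over a pipe, so parallel mode would stream per-test results in
completion order while the final summary stays consolidated.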
>
>> .../bpf/prog_tests/prog_tests_framework.c |  55 ++++
>> tools/testing/selftests/bpf/test_progs.c  | 301 +++++++-----------
>> tools/testing/selftests/bpf/test_progs.h  |  32 +-
>> 3 files changed, 195 insertions(+), 193 deletions(-)
>> create mode 100644 tools/testing/selftests/bpf/prog_tests/prog_tests_framework.c
>>
>> diff --git a/tools/testing/selftests/bpf/prog_tests/prog_tests_framework.c b/tools/testing/selftests/bpf/prog_tests/prog_tests_framework.c
>> new file mode 100644
>> index 000000000000..7a5be06653f7
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/prog_tests/prog_tests_framework.c
>> @@ -0,0 +1,55 @@
>> +// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
>> +
>> +#include "test_progs.h"
>> +#include "testing_helpers.h"
>> +
>> +static void clear_test_result(struct test_result *result)
>> +{
>> +	result->error_cnt = 0;
>> +	result->sub_succ_cnt = 0;
>> +	result->skip_cnt = 0;
>> +}
>> +
>> +void test_prog_tests_framework(void)
>> +{
>> +	struct test_result *result = env.test_result;
>> +
>> +	// in all the ASSERT calls below we need to return on the first
>> +	// error due to the fact that we are cleaning the test state after
>> +	// each dummy subtest
>> +
>> +	// test we properly count skipped tests with subtests
>
> C++ comments, please use /* */

Will fix.

>
>> +	if (test__start_subtest("test_good_subtest"))
>> +		test__end_subtest();
>> +	if (!ASSERT_EQ(result->skip_cnt, 0, "skip_cnt_check"))
>> +		return;
>> +	if (!ASSERT_EQ(result->error_cnt, 0, "error_cnt_check"))
>> +		return;
>> +	if (!ASSERT_EQ(result->subtest_num, 1, "subtest_num_check"))
>> +		return;
>> +	clear_test_result(result);
>> +
>
> [...]
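For reference, a sketch of what the requested comment-style change
could look like in the next revision (editor's illustration only; the
actual v2 patch may differ):

	/* in all the ASSERT calls below we need to return on the first
	 * error due to the fact that we are cleaning the test state after
	 * each dummy subtest
	 */

	/* test we properly count skipped tests with subtests */
	if (test__start_subtest("test_good_subtest"))
		test__end_subtest();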