Hi John,

A test FAILs when there is an unexpected condition during test/subtest execution; the developer does not control it. Hence we propagate a FAIL subtest result up to the test result, the test_progs result, and consequently the CI result.

On the other hand, the SKIP state is fully under our control, e.g. we decide when a particular subtest/test should be skipped. We do not propagate the SKIP state to the test_progs result; the test_progs result can only be OK or FAIL.

Also, a SKIPPED subtest is not an indication of a problem in a test, so I do not think one SKIPPED subtest should mark the whole test as SKIPPED. For example, core_reloc_btfgen has 77 subtests (https://github.com/kernel-patches/bpf/actions/runs/3349035937/jobs/5548924891#step:6:4895). Some of them are skipped right now, but most of them are passing, and that is a normal state. For me, marking core_reloc_btfgen as SKIP would mean that something is not right with the whole test.

Also, I do not think we are reviewing SKIP tests/subtests right now. Maybe we should, but that would be an orthogonal discussion to this patch.

> On Oct 28, 2022, at 4:24 PM, John Fastabend <john.fastabend@xxxxxxxxx> wrote:
>
> Domenico Cerasuolo wrote:
>> From: Domenico Cerasuolo <dceras@xxxxxxxx>
>>
>> When showing the result of a test group, if one
>> of the subtests was skipped while still having
>> passing subtests, the group result was marked as
>> SKIPPED.
>>
>> #223/1 usdt/basic:SKIP
>> #223/2 usdt/multispec:OK
>> #223 usdt:SKIP
>>
>> With this change, the group test is marked as
>> SKIPPED only if all of the subtests were skipped.
>>
>> #223/1 usdt/basic:SKIP
>> #223/2 usdt/multispec:OK
>> #223 usdt:OK
>
> I'm not sure about this; don't you want to know that some of the tests
> were skipped? With this change it's not knowable from the output
> whether everything passed or only some of it did.
>
> I would prefer the behavior: if anything fails, return
> FAIL; else if anything is skipped, SKIP; and if _everything_
> passes, mark it OK.
>
> My preference is to drop this change.
>
>>
>> Signed-off-by: Domenico Cerasuolo <dceras@xxxxxxxx>
>> ---
>> tools/testing/selftests/bpf/test_progs.c | 11 +++++++++--
>> 1 file changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
>> index 0e9a47f97890..14b70393018b 100644
>> --- a/tools/testing/selftests/bpf/test_progs.c
>> +++ b/tools/testing/selftests/bpf/test_progs.c
>> @@ -222,6 +222,11 @@ static char *test_result(bool failed, bool skipped)
>>  	return failed ? "FAIL" : (skipped ? "SKIP" : "OK");
>>  }
>>
>> +static char *test_group_result(int tests_count, bool failed, int skipped)
>> +{
>> +	return failed ? "FAIL" : (skipped == tests_count ? "SKIP" : "OK");
>> +}
>> +
>>  static void print_test_log(char *log_buf, size_t log_cnt)
>>  {
>>  	log_buf[log_cnt] = '\0';
>> @@ -308,7 +313,8 @@ static void dump_test_log(const struct prog_test_def *test,
>>  	}
>>
>>  	print_test_name(test->test_num, test->test_name,
>> -			test_result(test_failed, test_state->skip_cnt));
>> +			test_group_result(test_state->subtest_num,
>> +					  test_failed, test_state->skip_cnt));
>>  }
>>
>>  static void stdio_restore(void);
>> @@ -1071,7 +1077,8 @@ static void run_one_test(int test_num)
>>
>>  	if (verbose() && env.worker_id == -1)
>>  		print_test_name(test_num + 1, test->test_name,
>> -				test_result(state->error_cnt, state->skip_cnt));
>> +				test_group_result(state->subtest_num,
>> +						  state->error_cnt, state->skip_cnt));
>>
>>  	reset_affinity();
>>  	restore_netns();
>> --
>> 2.30.2
>>
>
>
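P.S. For anyone skimming the thread, here is a minimal, self-contained sketch contrasting the two aggregation rules being discussed. It is illustrative only: the function names and the small driver are made up for this sketch and are not part of the patch; the real logic lives in test_result()/test_group_result() in test_progs.c.

#include <stdio.h>

/* Rule in this patch: FAIL if any subtest failed, SKIP only if *all*
 * subtests were skipped, OK otherwise (even when some were skipped).
 */
static const char *group_result_patch(int subtest_cnt, int error_cnt, int skip_cnt)
{
	if (error_cnt)
		return "FAIL";
	return skip_cnt == subtest_cnt ? "SKIP" : "OK";
}

/* Rule John describes: FAIL if anything failed, SKIP if anything was
 * skipped, OK only if everything passed.
 */
static const char *group_result_any_skip(int subtest_cnt, int error_cnt, int skip_cnt)
{
	(void)subtest_cnt;
	if (error_cnt)
		return "FAIL";
	return skip_cnt ? "SKIP" : "OK";
}

int main(void)
{
	/* usdt example from the commit message: 2 subtests, 0 failed, 1 skipped */
	printf("patch rule:    usdt:%s\n", group_result_patch(2, 0, 1));    /* prints usdt:OK   */
	printf("any-skip rule: usdt:%s\n", group_result_any_skip(2, 0, 1)); /* prints usdt:SKIP */
	return 0;
}

Both rules propagate FAIL the same way; they differ only in whether a partially skipped group is reported as OK or as SKIP.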