On Fri, Oct 18, 2024 at 11:06:20AM -0700, Jeff Xu wrote:
> On Fri, Oct 18, 2024 at 6:04 AM Mark Brown <broonie@xxxxxxxxxx> wrote:

> > The problem is that the macro name is confusing and not terribly
> > consistent with how the rest of the selftests work. The standard
> > kselftest result reporting is

> > 	ksft_test_result(bool result, char *name_format, ...);

> > so the result of the test is a boolean flag which is passed in. This
> > macro on the other hand sounds like a double negative so you have to
> > stop and think what the logic is, and it's not seen anywhere else so
> > nobody is going to be familiar with it. The main thing this is doing
> > is burying a return statement in there, which is a bit surprising too.

> Thanks for explaining the problem, naming is hard. Do you have a
> suggestion for a better name?

Honestly I'd probably deal with this by refactoring such that the macro
isn't needed and the tests follow a pattern more like:

	if (ret != 0) {
		ksft_print_msg("$ACTION failed with %d\n", ret);
		return false;
	}

when they encounter a failure (the pattern I sketched in my earlier
message), or switch to kselftest_harness.h (like I say, I don't know if
the fork()ing is an issue for these tests). If I had to have a macro
it'd probably be something like mseal_assert().

> > I'll also note that these macros are resulting in broken kselftest
> > output. The name for a test has to be stable for automated systems
> > to be able to associate test results between runs, but these print
> > .... which includes the line number of the test in the name, which
> > is an obvious problem: automated systems won't be able to tell that
> > any two failures are related to each other, never mind the passing
> > test. We should report why things failed, but it's better to do
> > that with a ksft_print_msg(), ideally one that's directly readable
> > rather than requiring someone to go into the source code and look
> > it up.

> I don't know what the issue you described is?
> Are you saying that we are missing line numbers? It is not. Here is a
> sample of the output:

No, I'm saying that having the line numbers is a problem.

> Failure in the second test case from last:

> ok 105 test_munmap_free_multiple_ranges
> not ok 106 test_munmap_free_multiple_ranges_with_split: line:2573
> ok 107 test_munmap_free_multiple_ranges_with_split

Test 106 here is called "test_munmap_free_multiple_ranges_with_split:
line:2573", which automated systems aren't going to be able to
associate with the passing
"test_munmap_free_multiple_ranges_with_split", nor with any failures
that occur on any other lines in the function.

> I would imagine the need for something similar to FAIL_TEST_IF_FALSE
> is common in selftest writing:
> 1> lightweight enough so devs can pick it up quickly and adapt
>    existing tests, instead of rewriting everything from scratch
> 2> assert-like syntax
> 3> fail the current test case, but continue running the next test case
> 4> take care of reporting test failures

Honestly this just sounds and looks like kselftest_harness.h; its
ASSERT_ and EXPECT_ macros sound exactly like what you're looking for
for asserts. The main gotchas with it are that it's not particularly
elegant for test cases which need to enumerate system features (which I
don't think is the case for mseal()?), and that it runs each test case
in a fork()ed child, which can be inconvenient for some tests. The use
of fork() is because it makes the overall test program much more robust
against breakage in individual tests and allows things like per-test
timeouts.