Re: [PATCH v3 4/5] selftests/mseal: add more tests for mmap

On 10/18/24 13:32, Jeff Xu wrote:
Hi Mark

On Fri, Oct 18, 2024 at 11:37 AM Mark Brown <broonie@xxxxxxxxxx> wrote:

On Fri, Oct 18, 2024 at 11:06:20AM -0700, Jeff Xu wrote:
On Fri, Oct 18, 2024 at 6:04 AM Mark Brown <broonie@xxxxxxxxxx> wrote:

The problem is that the macro name is confusing and not terribly
consistent with how the rest of the selftests work.  The standard
kselftest result reporting is

         ksft_test_result(bool result, char *name_format, ...);

so the result of the test is a boolean flag which is passed in.  This
macro on the other hand sounds like a double negative so you have to
stop and think what the logic is, and it's not seen anywhere else so
nobody is going to be familiar with it.  The main thing this is doing is
burying a return statement in there, that's a bit surprising too.
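As a minimal sketch of that reporting style (do_the_mmap_step() is a made-up
placeholder, purely for illustration):

         int ret = do_the_mmap_step();   /* hypothetical helper */
         bool ok = (ret == 0);

         if (!ok)
                 ksft_print_msg("mmap step failed with %d\n", ret);
         ksft_test_result(ok, "test_seal_mmap_shrink\n");

i.e. the test name stays constant and the details go via ksft_print_msg().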

Thanks for explaining the problem, naming is hard. Do you have a
suggestion on a better naming?

Honestly I'd probably deal with this by refactoring such that the macro
isn't needed and the tests follow a pattern more like:

         if (ret != 0) {
                 ksft_print_msg("$ACTION failed with %d\n", ret);
                 return false;
         }

So expanding the macro to actually code ?
But this makes the mseal_test quite large with lots of "if"s, and I
would rather avoid that.


when they encounter a failure, the pattern I sketched in my earlier
message, or switch to kselftest_harness.h (like I say I don't know if
the fork()ing is an issue for these tests).  If I had to have a macro
it'd probably be something like mseal_assert().

I can go with mseal_assert; the original macro is used by mseal_test
itself, and is only intended as such.

If changing name to mseal_assert() is acceptable, this seems to be a
minimum change and I'm happy with that.
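For what it's worth, a sketch of what such a macro could look like,
following the return-false pattern sketched above (this is only an
assumed shape, not the actual patch):

        #define mseal_assert(cond)                                      \
                do {                                                    \
                        if (!(cond)) {                                  \
                                ksft_print_msg("assert failed: %s\n",   \
                                               #cond);                  \
                                return false;                           \
                        }                                               \
                } while (0)

so the assert reports why it failed via ksft_print_msg() and the caller
can still issue ksft_test_result() with a stable name.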

I'll also note that these macros are resulting in broken kselftest
output, the name for a test has to be stable for automated systems to be
able to associate test results between runs but these print

....

which includes the line number of the test in the name which is an
obvious problem, automated systems won't be able to tell that any two
failures are related to each other never mind the passing test.  We
should report why things failed but it's better to do that with a
ksft_print_msg(), ideally one that's directly readable rather than
requiring someone to go into the source code and look it up.

I don't understand the issue you described. Are you saying that we are
missing line numbers? We are not. Here is a sample of the output:

No, I'm saying that having the line numbers is a problem.

Failure in the second-to-last test case:

ok 105 test_munmap_free_multiple_ranges
not ok 106 test_munmap_free_multiple_ranges_with_split: line:2573
ok 107 test_munmap_free_multiple_ranges_with_split

Test 106 here is called "test_munmap_free_multiple_ranges_with_split:
line:2573" which automated systems aren't going to be able to associate
with the passing "test_munmap_free_multiple_ranges_with_split", nor with
any failures that occur on any other lines in the function.
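One way to keep the reported name stable while still recording the
location, as a rough sketch on top of ksft_print_msg()/ksft_test_result():

         /* the location is a diagnostic, not part of the test name */
         ksft_print_msg("failure at %s:%d\n", __FILE__, __LINE__);
         ksft_test_result(false, "test_munmap_free_multiple_ranges_with_split\n");

so automated systems see the same name on every run and developers still
get the line number in the log.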

I see. That will happen when those tests are modified and the line
numbers change. I can see the reasoning for this argument, especially
when those tests are flaky and get updated often.

In practice, I hope any of those kernel selftest failures get fixed
immediately, or even better, that the tests are run before a developer
submits a patch that affects the mm area.

Having the line number does help a developer go directly to the error,
and I'm not against filling in the "action" field, but you might also
agree with me that finding unique text for each error would require a
decent amount of time, especially for large tests such as mseal_test.

I would imagine the need for something similar to FAIL_TEST_IF_FALSE is
common in selftest writing:

1> lightweight enough so a dev can pick it up quickly and adapt existing
tests, instead of rewriting everything from scratch.
2> assert-like syntax.
3> fails the current test case, but continues running the next test case.
4> takes care of reporting test failures.

Honestly this just sounds and looks like kselftest_harness.h; its
ASSERT_ and EXPECT_ macros sound exactly like the asserts you're looking
for.  The main gotchas with it are that it's not particularly
elegant for test cases which need to enumerate system features (which I
don't think is the case for mseal()?) and that it runs each test case in
a fork()ed child which can be inconvenient for some tests.  The use of
fork() is because that makes the overall test program much more robust
against breakage in individual tests and allows things like per test
timeouts.
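
A minimal sketch of what a harness-based test could look like
(sys_mseal() stands in for whatever mseal() wrapper the test already has,
signature assumed; the rest is illustrative only):

        #include <sys/mman.h>
        #include "../kselftest_harness.h"

        TEST(mmap_seal_basic)   /* each TEST() runs in its own fork()ed child */
        {
                void *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

                ASSERT_NE(MAP_FAILED, ptr);             /* aborts this test case */
                EXPECT_EQ(0, sys_mseal(ptr, 4096));     /* records failure, continues */
        }

        TEST_HARNESS_MAIN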
OK, I didn't know that ASSERT_ and EXPECT_ were part of the test harness.

If I switch to the test fixture, e.g. using TEST(test_name),

how do I pass the "seal" flag to it?

Doesn't TH_LOG() work for you to pass arguments?

E.g. how do I run the same test twice, first with seal = true, and second with seal = false?

         test_seal_mmap_shrink(false);
         test_seal_mmap_shrink(true);

The example [1] isn't clear about that.

https://www.kernel.org/doc/html/v4.19/dev-tools/kselftest.html#example
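
For reference, one way kselftest_harness.h supports running the same test
body with different parameters is fixture variants; a rough sketch (names
invented, not from the patch):

        FIXTURE(mmap_shrink) {
                void *ptr;      /* placeholder for per-test state */
        };
        FIXTURE_SETUP(mmap_shrink) {}
        FIXTURE_TEARDOWN(mmap_shrink) {}

        FIXTURE_VARIANT(mmap_shrink) {
                bool seal;
        };

        FIXTURE_VARIANT_ADD(mmap_shrink, unsealed) { .seal = false, };
        FIXTURE_VARIANT_ADD(mmap_shrink, sealed)   { .seal = true, };

        TEST_F(mmap_shrink, shrink)
        {
                /* the body sees the parameter as variant->seal and is
                 * run once per FIXTURE_VARIANT_ADD() */
                if (variant->seal) {
                        /* ... seal the mapping first ... */
                }
                /* ... common shrink test ... */
        }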

Thanks


thanks,
-- Shuah




