Re: [PATCH v3 4/5] selftests/mseal: add more tests for mmap

On Thu, Oct 17, 2024 at 12:49:40PM -0700, Jeff Xu wrote:

> So it is not a problem with the MACRO, but where is it used?

>         ret = sys_mseal(ptr, size);
>         FAIL_TEST_IF_FALSE(!ret);

> Take this example, it would be
> assert(!ret)

The problem is that the macro name is confusing and not terribly
consistent with how the rest of the selftests work.  The standard
kselftest result reporting is

	ksft_test_result(bool result, char *name_format, ...);

so the result of the test is a boolean flag which is passed in. This
macro, on the other hand, reads like a double negative, so you have to
stop and think about what the logic is, and it's not used anywhere else,
so nobody is going to be familiar with it. The main thing the macro is
really doing is burying a return statement in there, which is a bit
surprising too.
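
For comparison, the same check written against the standard helper just
passes the result straight through, so there's no double negative to
parse (untested sketch, the test name is made up and sys_mseal() is the
wrapper the test already has):

	ret = sys_mseal(ptr, size);
	ksft_test_result(ret == 0, "seal basic\n");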

I'll also note that these macros result in broken kselftest output: the
name of a test has to be stable for automated systems to be able to
associate test results between runs, but these print
                        ksft_test_result_fail("%s: line:%d\n",          \
                                                __func__, __LINE__);    \
                        return;                                         \

which includes the line number of the test in the name. That's an
obvious problem: automated systems won't be able to tell that any two
failures are related to each other, never mind match them up with a
passing run. We should report why things failed, but it's better to do
that with a ksft_print_msg(), ideally one that's directly readable
rather than requiring someone to go into the source code and look it up.
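
Something along these lines keeps the test name stable while still
telling you what actually went wrong (untested sketch, the name and
message are made up):

	ret = sys_mseal(ptr, size);
	if (ret)
		ksft_print_msg("mseal() failed: %d\n", ret);
	ksft_test_result(ret == 0, "seal two VMAs\n");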

A more standard way to write what you've got here would be to have the
tests return a bool then have a runner loop which iterates over the
tests:

	struct {
		char *name;
		bool (*func)(void);
	} tests[];

	...

	for (i = 0; i < ARRAY_SIZE(tests); i++)
		ksft_test_result(tests[i].func(), tests[i].name);

then the tests can just have explicit return statements and don't need
to worry about logging anything other than diagnostics.
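
Spelled out it'd be something like the below - untested, the test and
its name are made up, sys_mseal() is the wrapper the test already has,
and kselftest.h provides the ksft_*() helpers:

	#include "../kselftest.h"

	#ifndef ARRAY_SIZE
	#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
	#endif

	/* Example test: sealing an unmapped (NULL) range should fail. */
	static bool test_seal_null(void)
	{
		int ret = sys_mseal(NULL, 4096);

		if (!ret) {
			ksft_print_msg("sealing NULL unexpectedly succeeded\n");
			return false;
		}

		return true;
	}

	static const struct {
		const char *name;
		bool (*func)(void);
	} tests[] = {
		{ "seal_null", test_seal_null },
	};

	int main(void)
	{
		int i;

		ksft_print_header();
		ksft_set_plan(ARRAY_SIZE(tests));

		for (i = 0; i < ARRAY_SIZE(tests); i++)
			ksft_test_result(tests[i].func(), "%s\n",
					 tests[i].name);

		ksft_finished();
	}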

Depending on how much you need to share between tests you might also be
able to use kselftest_harness.h, which fork()s each test into a separate
child and lets you just fault to fail if that's easier.
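
With the harness the same test boils down to roughly this (untested
sketch, same made-up check as above, usual relative include path
assumed):

	#include "../kselftest_harness.h"

	TEST(seal_null)
	{
		/* Sealing an unmapped (NULL) range should fail. */
		ASSERT_NE(0, sys_mseal(NULL, 4096));
	}

	TEST_HARNESS_MAIN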

> > We are writing unit tests in a test framework, let's use very well
> > established industry practices please.

Plus there's also the fact that we have a framework here...

> > Also note that you don't even need to reinvent the wheel, there is a
> > fully-featured test harness available in
> > tools/testing/selftests/kselftest_harness.h with both ASSERT_xxx() and
> > EXPECT_xxx() helpers.

> The EXPECT_xxx() doesn't take care of reporting though,  or maybe it

I rather think people would've noticed if the test harness was so broken
that it was unable to report failures. If it is that broken, we should
fix it rather than open-coding something else.


