Re: [PATCH 4/4] kunit: Prepare test plan for parameterized subtests

On Mon, 2 Oct 2023 at 21:55, Michal Wajdeczko
<michal.wajdeczko@xxxxxxxxx> wrote:
>
>
>
> On 28.09.2023 22:54, Rae Moar wrote:
> > On Mon, Sep 25, 2023 at 1:58 PM Michal Wajdeczko
> > <michal.wajdeczko@xxxxxxxxx> wrote:
> >>
> >> In case of parameterized tests we are not providing a test plan
> >> so we can't detect if any result is missing.
> >>
> >> Count available params using the same generator as during test
> >> execution.
> >>
> >> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@xxxxxxxxx>
> >> Cc: David Gow <davidgow@xxxxxxxxxx>
> >> Cc: Rae Moar <rmoar@xxxxxxxxxx>
> >> ---

<...snip...>

> >
> > Hello!
> >
> > This change largely looks good to me. However, I am not 100 percent
> > confident that the function to generate parameters always produces the
> > same output (or same number of test cases). I would be interested in
> > David's opinion on this.
>
> Right, it's not explicitly specified in the KUNIT_CASE_PARAM or
> test_case.generate_params documentation, but I would assume that
> generating different output (e.g. based on a random seed) could be
> fine, and is harmless to this patch. At the same time, IMO generating
> a different number of params should be prohibited, as that would make
> it harder to compare executions against each other for regressions.

There are definitely some valid cases for parameterised tests to
generate different numbers of tests in different configs /
environments (e.g., there are some where the number of parameters
depends on the number of CPUs). That being said, it shouldn't be a
problem in a relatively standard test environment with any of the
tests we currently have.

Some of these issues can be worked around by having the tests be
generated regardless, but skipped at run time when required.

>
> Alternatively we can introduce some flag to indicate whether provided
> param generator is stable or not and then provide test plan only for the
> former.

I think this sounds like a good idea, and a good use for the KUnit
attributes system. I'd initially thought a single 'deterministic'
attribute would suffice, but it may be better to split it into a
'deterministic' flag and a 'fixed structure' flag (the latter only
requiring that the number, order, names, etc. of tests and subtests
stay the same).

There have been a couple of people asking for a feature to
deliberately randomise test ordering, too. We'd want to clear these
flags if that's in use.

Of course, ideally anyone doing regression testing would be able to
use the test/parameter name/description instead of test number, so
ordering of tests shouldn't matter unless tests were buggy.
