On Mon, 10 Feb 2025 at 19:57, Rasmus Villemoes <linux@xxxxxxxxxxxxxxxxxx> wrote:
>
> On Fri, Feb 07 2025, Tamir Duberstein <tamird@xxxxxxxxx> wrote:
>
> > On Fri, Feb 7, 2025 at 5:01 AM Rasmus Villemoes
> > <linux@xxxxxxxxxxxxxxxxxx> wrote:
> >>
> >> On Thu, Feb 06 2025, Tamir Duberstein <tamird@xxxxxxxxx> wrote:
> >>
> >>
> >> I'll have to see the actual code, of course. In general, I find reading
> >> code using those KUNIT macros quite hard, because I'm not familiar with
> >> those macros and when I try to look up what they do they turn out to be
> >> defined in terms of other KUNIT macros 10 levels deep.
> >>
> >> But that still leaves a few points. First, I really like that "388 test
> >> cases passed" tally or some other free-form summary (so that I can see
> >> that I properly hooked up, compiled, and ran a new testcase inside
> >> test_number(), so any kind of aggregation on those top-level test_* is
> >> too coarse).
> >
> > This one I'm not sure how to address. What you're calling test cases
> > here would typically be referred to as assertions, and I'm not aware
> > of a way to report a count of assertions.
> >
>
> I'm not sure that's accurate.
>
> The thing is, each of the current test() instances results in four
> different tests being done, which is roughly why we end up at the 4*97
> == 388, but each of those tests has several assertions being done -
> depending on which variant of the test we're doing (i.e. the buffer
> length used or whether we're passing it through kasprintf), we may do
> only some of those assertions, and we do an early return in case one of
> those assertions fails (because it wouldn't be safe to do the following
> assertions, and the test as such has failed already). So there are far
> more assertions than those 388.
>
> OTOH, that the number reported is 388 is more a consequence of the
> implementation than anything explicitly designed. I can certainly live
> with 388 being replaced by 97, i.e. that each current test() invocation
> would count as one KUNIT case, as that would still allow me to detect a
> PEBKAC when I've added a new test() instance and failed to actually run
> that.

It'd be possible to split things up further into separate tests, at the
cost of a more extensive refactoring, if having the more granular count
tracked by KUnit were desired.

It'd also be possible to make these more explicitly data driven via a
parameterised test, so that each input/output pair is listed in an
array and automatically gets converted to a KUnit subtest (rough sketch
below).

There are some advantages to having these counts done by the framework,
particularly in that any inconsistencies can be picked up by the
tooling. Ultimately, though, it's up to you as to what is most useful.
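For concreteness, here's a rough, completely untested sketch of what the
parameterised version could look like. The struct, the example pairs and
all of the names below are invented for illustration; only the
KUNIT_ARRAY_PARAM / KUNIT_CASE_PARAM machinery is real:

#include <kunit/test.h>

/* Hypothetical table of input/output pairs, invented for illustration. */
struct printf_case {
        const char *fmt;
        long long arg;
        const char *expect;
};

static const struct printf_case printf_cases[] = {
        { "%lld",  42,  "42"   },
        { "%#llx", 255, "0xff" },
        { "%lld",  -1,  "-1"   },
};

/* Gives each generated subtest a readable name in the KTAP output. */
static void printf_case_desc(const struct printf_case *c, char *desc)
{
        snprintf(desc, KUNIT_PARAM_DESC_SIZE, "fmt=%s", c->fmt);
}

/* Generates printf_gen_params(), which walks the array above. */
KUNIT_ARRAY_PARAM(printf, printf_cases, printf_case_desc);

static void printf_param_test(struct kunit *test)
{
        const struct printf_case *c = test->param_value;
        char buf[256];

        snprintf(buf, sizeof(buf), c->fmt, c->arg);
        KUNIT_EXPECT_STREQ(test, buf, c->expect);
}

static struct kunit_case printf_test_cases[] = {
        /* Each array entry is reported as its own subtest. */
        KUNIT_CASE_PARAM(printf_param_test, printf_gen_params),
        {}
};

Each entry then shows up as its own line in the KTAP output, so a newly
added pair that fails to run would be immediately visible in the tally.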
> >> The other thing I want to know is if this would make it harder for me to
> >> finish up that "deterministic random testing" thing I wrote [*], but
> >> never got around to actually get it upstream. It seems like something
> >> that a framework like kunit could usefully provide out-of-the-box (which
> >> is why I attempted to get it into kselftest), but as long as I'd still
> >> be able to add in something like that, and get a "xxx failed, random
> >> seed used was 0xabcdef" line printed, and run the test again setting the
> >> seed explicitly to that 0xabcdef value, I'm good.
> >>
> >> [*] https://lore.kernel.org/lkml/20201025214842.5924-4-linux@xxxxxxxxxxxxxxxxxx/
> >
> > I can't speak for the framework, but it wouldn't be any harder to do
> > in printf itself. I did it this way:
> >
> > +static struct rnd_state rnd_state;
> > +static u64 seed;
> > +
> >  static int printf_suite_init(struct kunit_suite *suite)
> >  {
> >  	alloced_buffer = kmalloc(BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);
> >  	if (!alloced_buffer)
> >  		return -1;
> >  	test_buffer = alloced_buffer + PAD_SIZE;
> > +
> > +	seed = get_random_u64();
> > +	prandom_seed_state(&rnd_state, seed);
> >  	return 0;
> >  }
> >
> >  static void printf_suite_exit(struct kunit_suite *suite)
> >  {
> >  	kfree(alloced_buffer);
> > +	if (kunit_suite_has_succeeded(suite) == KUNIT_FAILURE) {
> > +		pr_info("Seed: %llu\n", seed);
> > +	}
> >  }
> >
> > and the result (once I made one of the cases fail):
> >
> > printf_kunit: Seed: 11480747578984087668
> > # printf: pass:27 fail:1 skip:0 total:28
> > # Totals: pass:27 fail:1 skip:0 total:28
> > not ok 1 printf
>
> OK, that's good. I think one of the problems previously was that there
> no longer was such an _init/_exit pair one could hook into to do the
> seed logic and afterwards do something depending on the success/fail of
> the whole thing; that was all hidden away by some KUNIT_ wrapping.

Yeah, KUnit has since added the suite_init/suite_exit functions in
order to support this sort of thing. Previously we had an _init/_exit
pair, but it was run per-test-case, which doesn't work as well here.

> Is it still possible to trivially make that seed into a module
> parameter, and do the "modprobe test_printf seed=0xabcd", or otherwise
> inject a module parameter when run/loaded via the kunit framework?

It should be just the same as with any other module (quick sketch in
the P.S. below). As mentioned, one day I'd like to standardise this in
KUnit so that we can have it also change the test execution order and
fit in with the tooling, but I'd definitely support doing this via an
ad-hoc parameter in the meantime.

Cheers,
-- David
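P.S. For completeness, a quick and entirely untested sketch of the seed
as a module parameter, reusing the names from your diff above (the only
change is declaring seed as unsigned long long, which is what
module_param()'s ullong type expects):

#include <linux/moduleparam.h>
#include <linux/prandom.h>
#include <linux/random.h>

static struct rnd_state rnd_state;
static unsigned long long seed;	/* was u64 in the diff above */
module_param(seed, ullong, 0444);
MODULE_PARM_DESC(seed, "Seed for randomised tests (0 = pick at random)");

static int printf_suite_init(struct kunit_suite *suite)
{
	/* ... buffer allocation as in the diff above ... */

	if (!seed)
		seed = get_random_u64();
	prandom_seed_state(&rnd_state, seed);
	return 0;
}

With that, "modprobe test_printf seed=0xabcd" should work as usual, and
with the test built in, "test_printf.seed=0xabcd" on the kernel command
line should do the same.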