On Fri, Feb 7, 2025 at 5:01 AM Rasmus Villemoes <linux@xxxxxxxxxxxxxxxxxx> wrote:
>
> On Thu, Feb 06 2025, Tamir Duberstein <tamird@xxxxxxxxx> wrote:
>
> > On Thu, Feb 6, 2025 at 4:27 AM Rasmus Villemoes
> > <linux@xxxxxxxxxxxxxxxxxx> wrote:
> >>
> >> On Tue, 4 Feb 2025 at 20:36, Tamir Duberstein <tamird@xxxxxxxxx> wrote:
> >> >
> >> > This is one of just 3 remaining "Test Module" kselftests (the others
> >> > being bitmap and scanf), the rest having been converted to KUnit.
> >> >
> >> > I tested this using:
> >> >
> >> > $ tools/testing/kunit/kunit.py run --arch arm64 --make_options LLVM=1 printf
> >> >
> >> > I have also sent out a series converting scanf[0].
> >> >
> >> > Link: https://lore.kernel.org/all/20250204-scanf-kunit-convert-v3-0-386d7c3ee714@xxxxxxxxx/T/#u [0]
> >> >
> >> Sorry, but NAK, not in this form.
> >>
> >> Please read the previous threads to understand what is wrong with this
> >> mechanical approach. Not only is it wrong, it also actively makes the
> >> test suite much less useful.
> >>
> >> https://lore.kernel.org/lkml/f408efbd-10f7-f1dd-9baa-0f1233cafffc@xxxxxxxxxxxxxxxxxx/
> >> https://lore.kernel.org/lkml/876cc862-56f1-7330-f988-0248bec2fbad@xxxxxxxxxxxxxxxxxx/
> >> https://lore.kernel.org/lkml/0ab618c7-8c5c-00ae-8e08-0c1b99f3bf5c@xxxxxxxxxxxxxxxxxx/
> >>
> >> I think the previous attempt was close to something acceptable (around
> >> https://lore.kernel.org/lkml/57976ff4-7845-d721-ced1-1fe439000a9b@xxxxxxxxxxxxxxxxxx/),
> >> but I don't know what happened to it.
> >>
> >> Rasmus
> >
> > Thanks Rasmus, I wasn't aware of that prior effort. I've gone through
> > and adopted your comments - the result is a first patch that is much
> > smaller (104 insertions(+), 104 deletions(-)) and failure messages
> > that are quite close to what is emitted now. I've taken care to keep
> > all the control flow the same, as you requested. The previous
> > discussion concluded with a promise to send another patch which didn't
> > happen. May I send a v2 with these changes, or are there more
> > fundamental objections? I'll also cc Arpitha and Brendan. The new
> > failure output:
> >
> > # ip4: EXPECTATION FAILED at lib/printf_kunit.c:95
> > vsnprintf(buf, 256, "%piS|%pIS", ...) wrote
> > '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
> > # ip4: EXPECTATION FAILED at lib/printf_kunit.c:95
> > vsnprintf(buf, 19, "%piS|%pIS", ...) wrote '127.000.000.001|12',
> > expected '127-000.000.001|12'
> > # ip4: EXPECTATION FAILED at lib/printf_kunit.c:131
> > kvasprintf(..., "%piS|%pIS", ...) returned
> > '127.000.000.001|127.0.0.1', expected '127-000.000.001|127.0.0.1'
>
> That certainly addresses one of my main objections; I really don't want to see
> "memcmp(..., ...) == 1, expected memcmp(..., ...) == 0". And you said
> you've kept the control flow/early returns the same, so that should also
> be ok.
>
> I'll have to see the actual code, of course. In general, I find reading
> code using those KUNIT macros quite hard, because I'm not familiar with
> those macros and when I try to look up what they do they turn out to be
> defined in terms of other KUNIT macros 10 levels deep.
>
> But that still leaves a few points. First, I really like that "388 test
> cases passed" tally or some other free-form summary (so that I can see
> that I properly hooked up, compiled, and ran a new testcase inside
> test_number(), so any kind of aggregation on those top-level test_* is
> too coarse).

This one I'm not sure how to address. What you're calling test cases here
would typically be referred to as assertions, and I'm not aware of a way
to report a count of assertions.
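The closest workaround I can think of is for the test module to keep its own
tally and print it from the suite exit hook, along these lines (untested
sketch; the "checks" counter and the message wording are made up, and the
increment would go into whatever helper all the expectations funnel through):

static atomic_t checks = ATOMIC_INIT(0);

	/* in the common helper, next to the KUNIT_EXPECT_* call: */
	atomic_inc(&checks);

	/* in printf_suite_exit(), as a free-form summary: */
	pr_info("printf: ran %d checks\n", atomic_read(&checks));

That would at least give back a per-check count, if not the exact old wording.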
> The other thing I want to know is if this would make it harder for me to
> finish up that "deterministic random testing" thing I wrote [*], but
> never got around to actually get it upstream. It seems like something
> that a framework like kunit could usefully provide out-of-the-box (which
> is why I attempted to get it into kselftest), but as long as I'd still
> be able to add in something like that, and get a "xxx failed, random
> seed used was 0xabcdef" line printed, and run the test again setting the
> seed explicitly to that 0xabcdef value, I'm good.
>
> [*] https://lore.kernel.org/lkml/20201025214842.5924-4-linux@xxxxxxxxxxxxxxxxxx/

I can't speak for the framework, but it wouldn't be any harder to do in
printf itself. I did it this way:

+static struct rnd_state rnd_state;
+static u64 seed;
+
 static int printf_suite_init(struct kunit_suite *suite)
 {
 	alloced_buffer = kmalloc(BUF_SIZE + 2*PAD_SIZE, GFP_KERNEL);
 	if (!alloced_buffer)
 		return -1;
 	test_buffer = alloced_buffer + PAD_SIZE;
+
+	seed = get_random_u64();
+	prandom_seed_state(&rnd_state, seed);
 	return 0;
 }

 static void printf_suite_exit(struct kunit_suite *suite)
 {
 	kfree(alloced_buffer);
+	if (kunit_suite_has_succeeded(suite) == KUNIT_FAILURE) {
+		pr_info("Seed: %llu\n", seed);
+	}
 }

and the result (once I made one of the cases fail):

printf_kunit: Seed: 11480747578984087668
# printf: pass:27 fail:1 skip:0 total:28
# Totals: pass:27 fail:1 skip:0 total:28
not ok 1 printf

Thank you both for engaging with me here. I'll send v2 in a few minutes and
we can continue the discussion there.

Tamir
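P.S. For the "run the test again setting the seed explicitly" part of [*], I
assume the seed could additionally be exposed as a module parameter, falling
back to a random value only when it is left at 0 (untested sketch on top of
the above; the parameter name, permissions, and module name are all arbitrary):

+module_param(seed, ullong, 0444);
+MODULE_PARM_DESC(seed, "seed for the randomized cases (0 = pick one at random)");

next to the seed declaration, and in printf_suite_init(), making the existing
call conditional:

+	if (!seed)
+		seed = get_random_u64();
 	prandom_seed_state(&rnd_state, seed);

so that booting with something like printf_kunit.seed=<value> (or passing
seed=<value> to modprobe) should reproduce a failing run.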