On 4/1/21 11:24 PM, Marco Elver wrote:
> On Thu, 1 Apr 2021 at 21:04, Daniel Latypov <dlatypov@xxxxxxxxxx> wrote:
>> > }
>> > #else
>> > static inline bool slab_add_kunit_errors(void) { return false; }
>> > #endif
>> >
>> > And anywhere you want to increase the error count, you'd call
>> > slab_add_kunit_errors().
>> >
>> > Another benefit of this approach is that if KUnit is disabled, there is
>> > zero overhead and no additional code generated (vs. the current
>> > approach).
>>
>> The resource approach looks really good, but...
>> You'd be picking up a dependency on
>> https://lore.kernel.org/linux-kselftest/20210311152314.3814916-2-dlatypov@xxxxxxxxxx/
>> current->kunit_test will always be NULL unless CONFIG_KASAN=y &&
>> CONFIG_KUNIT=y at the moment.
>> My patch drops the CONFIG_KASAN requirement and opens it up to all tests.
>
> Oh, that's a shame, but hopefully it'll be in -next soon.
>
>> At the moment, it's just waiting another look over from Brendan or David.
>> Any ETA on that, folks? :)
>>
>> So if you don't want to get blocked on that for now, I think it's fine to add:
>> #ifdef CONFIG_SLUB_KUNIT_TEST
>> int errors;
>> #endif
>
> Until kunit fixes setting current->kunit_test, a cleaner workaround
> that would allow to do the patch with kunit_resource, is to just have
> an .init/.exit function that sets it ("current->kunit_test = test;").
> And then perhaps add a note ("FIXME: ...") to remove it once the above
> patch has landed.
>
> At least that way we get the least intrusive change for mm/slub.c, and
> the test is the only thing that needs a 2-line patch to clean up
> later.

So when testing Oliver's new version with your suggestions internally
(thanks again for those), I got lockdep splats, because
slab_add_kunit_errors() is also called from contexts with IRQs disabled,
and kunit_find_named_resource() takes spin_lock(&test->lock), which is
not irq safe.

Can we make the lock irq safe? I tried the change below and it made the
problem go away.

If you agree, the question is how to proceed - make it part of Oliver's
patch series and let Andrew pick it all up, with the kunit folks'
eventual acks on this patch, or something else?
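For context, the helper in question would look roughly like the sketch
below - this is only an illustration of the kunit_resource approach
discussed above, and the "slab_errors" resource name is an assumption
on my part, not necessarily what the final patch uses:

#ifdef CONFIG_SLUB_KUNIT_TEST
static bool slab_add_kunit_errors(void)
{
	struct kunit_resource *resource;

	/* Only count errors when running under a KUnit test. */
	if (likely(!current->kunit_test))
		return false;

	/*
	 * Look up the error counter the test registered as a named
	 * resource ("slab_errors" is an assumed name here); this is the
	 * kunit_find_named_resource() call that takes test->lock.
	 */
	resource = kunit_find_named_resource(current->kunit_test,
					     "slab_errors");
	if (!resource)
		return false;

	(*(int *)resource->data)++;
	kunit_put_resource(resource);
	return true;
}
#else
static inline bool slab_add_kunit_errors(void) { return false; }
#endif

So any slab consistency check that fires with IRQs already disabled ends
up taking test->lock from such a context, hence the splat.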
----8<----
commit ab28505477892e9824c57ac338c88aec2ec0abce
Author: Vlastimil Babka <vbabka@xxxxxxx>
Date:   Tue Apr 6 12:28:07 2021 +0200

    kunit: make test->lock irq safe

diff --git a/include/kunit/test.h b/include/kunit/test.h
index 49601c4b98b8..524d4789af22 100644
--- a/include/kunit/test.h
+++ b/include/kunit/test.h
@@ -515,8 +515,9 @@ kunit_find_resource(struct kunit *test,
 		    void *match_data)
 {
 	struct kunit_resource *res, *found = NULL;
+	unsigned long flags;
 
-	spin_lock(&test->lock);
+	spin_lock_irqsave(&test->lock, flags);
 
 	list_for_each_entry_reverse(res, &test->resources, node) {
 		if (match(test, res, (void *)match_data)) {
@@ -526,7 +527,7 @@ kunit_find_resource(struct kunit *test,
 		}
 	}
 
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 
 	return found;
 }
diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index ec9494e914ef..2c62eeb45b82 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -442,6 +442,7 @@ int kunit_add_resource(struct kunit *test,
 		       void *data)
 {
 	int ret = 0;
+	unsigned long flags;
 
 	res->free = free;
 	kref_init(&res->refcount);
@@ -454,10 +455,10 @@ int kunit_add_resource(struct kunit *test,
 		res->data = data;
 	}
 
-	spin_lock(&test->lock);
+	spin_lock_irqsave(&test->lock, flags);
 	list_add_tail(&res->node, &test->resources);
 	/* refcount for list is established by kref_init() */
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 
 	return ret;
 }
@@ -515,9 +516,11 @@ EXPORT_SYMBOL_GPL(kunit_alloc_and_get_resource);
 
 void kunit_remove_resource(struct kunit *test, struct kunit_resource *res)
 {
-	spin_lock(&test->lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&test->lock, flags);
 	list_del(&res->node);
-	spin_unlock(&test->lock);
+	spin_unlock_irqrestore(&test->lock, flags);
 	kunit_put_resource(res);
 }
 EXPORT_SYMBOL_GPL(kunit_remove_resource);
@@ -597,6 +600,7 @@ EXPORT_SYMBOL_GPL(kunit_kfree);
 void kunit_cleanup(struct kunit *test)
 {
 	struct kunit_resource *res;
+	unsigned long flags;
 
 	/*
 	 * test->resources is a stack - each allocation must be freed in the
@@ -608,9 +612,9 @@ void kunit_cleanup(struct kunit *test)
 	 * protect against the current node being deleted, not the next.
 	 */
 	while (true) {
-		spin_lock(&test->lock);
+		spin_lock_irqsave(&test->lock, flags);
 		if (list_empty(&test->resources)) {
-			spin_unlock(&test->lock);
+			spin_unlock_irqrestore(&test->lock, flags);
 			break;
 		}
 		res = list_last_entry(&test->resources,
@@ -621,7 +625,7 @@ void kunit_cleanup(struct kunit *test)
 		 * resource, and this can't happen if the test->lock
 		 * is held.
 		 */
-		spin_unlock(&test->lock);
+		spin_unlock_irqrestore(&test->lock, flags);
 		kunit_remove_resource(test, res);
 	}
 #if (IS_ENABLED(CONFIG_KASAN) && IS_ENABLED(CONFIG_KUNIT))