From: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>

Commit c65c1877bd68 ("slub: use lockdep_assert_held") requires
remove_partial() to be called with n->list_lock held, but free_partial()
called from kmem_cache_close() on cache destruction does not follow this
rule, leading to a warning:

  WARNING: CPU: 0 PID: 2787 at mm/slub.c:1536 __kmem_cache_shutdown+0x1b2/0x1f0()
  Modules linked in:
  CPU: 0 PID: 2787 Comm: modprobe Tainted: G W 3.14.0-rc1-mm1+ #1
  Hardware name:
   0000000000000600 ffff88003ae1dde8 ffffffff816d9583 0000000000000600
   0000000000000000 ffff88003ae1de28 ffffffff8107c107 0000000000000000
   ffff880037ab2b00 ffff88007c240d30 ffffea0001ee5280 ffffea0001ee52a0
  Call Trace:
   [<ffffffff816d9583>] dump_stack+0x51/0x6e
   [<ffffffff8107c107>] warn_slowpath_common+0x87/0xb0
   [<ffffffff8107c145>] warn_slowpath_null+0x15/0x20
   [<ffffffff811c7fe2>] __kmem_cache_shutdown+0x1b2/0x1f0
   [<ffffffff811908d3>] kmem_cache_destroy+0x43/0xf0
   [<ffffffffa013a123>] xfs_destroy_zones+0x103/0x110 [xfs]
   [<ffffffffa0192b54>] exit_xfs_fs+0x38/0x4e4 [xfs]
   [<ffffffff811036fa>] SyS_delete_module+0x19a/0x1f0
   [<ffffffff816dfcd8>] ? retint_swapgs+0x13/0x1b
   [<ffffffff810d2125>] ? trace_hardirqs_on_caller+0x105/0x1d0
   [<ffffffff81359efe>] ? trace_hardirqs_on_thunk+0x3a/0x3f
   [<ffffffff816e8539>] system_call_fastpath+0x16/0x1b

Although this cannot actually result in a race, because on cache
destruction there should not be any concurrent frees or allocations from
the cache, let's add spin_lock/unlock to free_partial() just to keep
lockdep happy.

Acked-by: Christoph Lameter <cl@xxxxxxxxx>
Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
Signed-off-by: Pekka Enberg <penberg@xxxxxxxxxx>
Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
---
 mm/slub.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3195,12 +3195,13 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 /*
  * Attempt to free all partial slabs on a node.
  * This is called from kmem_cache_close(). We must be the last thread
- * using the cache and therefore we do not need to lock anymore.
+ * using the cache, but we still have to lock for lockdep's sake.
  */
 static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 {
 	struct page *page, *h;
 
+	spin_lock_irq(&n->list_lock);
 	list_for_each_entry_safe(page, h, &n->partial, lru) {
 		if (!page->inuse) {
 			__remove_partial(n, page);
@@ -3210,6 +3211,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 			"Objects remaining in %s on kmem_cache_close()");
 		}
 	}
+	spin_unlock_irq(&n->list_lock);
 }
 
 /*
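
For context, the check that fires here is lockdep_assert_held():
commit c65c1877bd68 turned the old "caller must hold list_lock" comments
on the partial-list helpers into runtime assertions. A rough sketch of
the pattern, paraphrased from 3.14-era mm/slub.c rather than quoted
verbatim:

	/*
	 * Sketch of the partial-list helpers, paraphrased from
	 * 3.14-era mm/slub.c -- not verbatim kernel source.
	 * remove_partial() asserts that the caller holds the per-node
	 * list_lock; with CONFIG_LOCKDEP enabled,
	 * lockdep_assert_held() expands to roughly
	 *
	 *	WARN_ON(debug_locks && !lockdep_is_held(&n->list_lock));
	 *
	 * which is the WARN_ON behind the stack trace above.
	 */
	static inline void __remove_partial(struct kmem_cache_node *n,
					    struct page *page)
	{
		list_del(&page->lru);	/* unlink from n->partial */
		n->nr_partial--;
	}

	static inline void remove_partial(struct kmem_cache_node *n,
					  struct page *page)
	{
		lockdep_assert_held(&n->list_lock);
		__remove_partial(n, page);
	}

The fix takes the irq-disabling spin_lock_irq() variant to match how
n->list_lock is taken on the regular alloc/free paths, where interrupts
must be disabled because frees can come from interrupt context; a plain
spin_lock() would be harmless here for the same no-concurrency reason,
but would be inconsistent with every other user of the lock.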