From: Abel Wu <wuyun.wu@xxxxxxxxxx>
Subject: mm/slub: make add_full() condition more explicit

The commit below is incomplete, as it didn't handle the add_full() part.

  commit a4d3f8916c65 ("slub: remove useless kmem_cache_debug() before remove_full()")

This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(), since
that should be the only context in which we need the list_lock for
add_full().

Link: https://lkml.kernel.org/r/20200811020240.1231-1-wuyun.wu@xxxxxxxxxx
Signed-off-by: Abel Wu <wuyun.wu@xxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Liu Xiang <liu.xiang6@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/slub.c~mm-slub-make-add_full-condition-more-explicit
+++ a/mm/slub.c
@@ -2245,7 +2245,8 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+		if ((s->flags & SLAB_STORE_USER) && !lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
@@ -2254,6 +2255,7 @@ redo:
 			 */
 			spin_lock(&n->list_lock);
 		}
+#endif
 	}

 	if (l != m) {
_
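
For reference, below is a sketch of the two helpers the rationale above leans
on, paraphrased from mm/slub.c as it reads around this kernel version. It is
not part of the patch and the exact definitions may differ between releases;
treat it as illustrative only.

/*
 * kmem_cache_debug() is true for *any* debug flag (SLAB_DEBUG_FLAGS covers
 * red-zoning, poisoning and user tracking), so it is a wider test than the
 * one add_full() actually cares about.
 */
static inline int kmem_cache_debug(struct kmem_cache *s)
{
#ifdef CONFIG_SLUB_DEBUG
	return unlikely(s->flags & SLAB_DEBUG_FLAGS);
#else
	return 0;
#endif
}

/*
 * add_full() bails out early unless SLAB_STORE_USER is set, which is why
 * taking n->list_lock is only required in that case.
 */
static void add_full(struct kmem_cache *s,
		     struct kmem_cache_node *n, struct page *page)
{
	if (!(s->flags & SLAB_STORE_USER))
		return;

	lockdep_assert_held(&n->list_lock);
	list_add(&page->slab_list, &n->full);
}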