On 8/10/2021 10:33 AM, Vlastimil Babka wrote:
> On 8/9/21 3:41 PM, Qian Cai wrote:
>
>>>  static void flush_all(struct kmem_cache *s)
>>>  {
>>> -	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
>>> +	struct slub_flush_work *sfw;
>>> +	unsigned int cpu;
>>> +
>>> +	mutex_lock(&flush_lock);
>>
>> Vlastimil, taking the lock here could trigger a warning during memory
>> offline/online due to the locking order:
>>
>> slab_mutex -> flush_lock
>
> Here's the full fixup, also incorporating Mike's fix. Thanks.
>
> ----8<----
> From c2df67d5116d4615c322e262556e34117e268104 Mon Sep 17 00:00:00 2001
> From: Vlastimil Babka <vbabka@xxxxxxx>
> Date: Tue, 10 Aug 2021 10:58:07 +0200
> Subject: [PATCH] mm, slub: fix memory and cpu hotplug related lock ordering
>  issues
>
> Qian Cai reported [1] a lockdep splat on memory offline.
>
> [ 91.374541] WARNING: possible circular locking dependency detected
> [ 91.381411] 5.14.0-rc5-next-20210809+ #84 Not tainted
> [ 91.387149] ------------------------------------------------------
> [ 91.394016] lsbug/1523 is trying to acquire lock:
> [ 91.399406] ffff800018e76530 (flush_lock){+.+.}-{3:3}, at: flush_all+0x50/0x1c8
> [ 91.407425] but task is already holding lock:
> [ 91.414638] ffff800018e48468 (slab_mutex){+.+.}-{3:3}, at: slab_memory_callback+0x44/0x280
> [ 91.423603] which lock already depends on the new lock.
>
> To fix it, we need to change the order in flush_all() so that cpus_read_lock()
> is taken first and mutex_lock(&flush_lock) second.
>
> Also, when called from slab_mem_going_offline_callback() we are already under
> cpus_read_lock() and cannot take it again, so create a flush_all_cpus_locked()
> variant and decouple flushing from the actual shrinking for this call path.
>
> Additionally, Mike Galbraith reported [2] a wrong order of cpus_read_lock() and
> slab_mutex in the kmem_cache_destroy() path and proposed a fix to reverse it.
>
> This patch is a fixup for the mmotm patch
> mm-slub-move-flush_cpu_slab-invocations-__free_slab-invocations-out-of-irq-context.patch
>
> [1] https://lore.kernel.org/lkml/0b36128c-3e12-77df-85fe-a153a714569b@xxxxxxxxxxx/
> [2] https://lore.kernel.org/lkml/2eb3cf340716c40f03a0a342ab40219b3d1de195.camel@xxxxxx/
>
> Reported-by: Qian Cai <quic_qiancai@xxxxxxxxxxx>
> Reported-by: Mike Galbraith <efault@xxxxxx>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>

This is running fine for me. There is a separate hugetlb crash while fuzzing;
I will report it to where it belongs.
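
For reference, my reading of the resulting lock ordering, sketched out: flush_all()
takes cpus_read_lock() first and flush_lock inside it, while the memory-offline path
(which already holds cpus_read_lock()) calls a flush_all_cpus_locked() variant
directly. This is a simplified reconstruction of the idea and not the actual patch
hunks; struct slub_flush_work, has_cpu_slab() and flush_cpu_slab() come from the
mmotm patch quoted earlier in the thread, and details such as how the work items
are queued may differ from the real code.

/* From the earlier mmotm patch: per-cpu deferred flush state. */
struct slub_flush_work {
	struct work_struct work;
	struct kmem_cache *s;
	bool skip;
};

static DEFINE_MUTEX(flush_lock);
static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);

/* Caller must already hold cpus_read_lock(). */
static void flush_all_cpus_locked(struct kmem_cache *s)
{
	struct slub_flush_work *sfw;
	unsigned int cpu;

	lockdep_assert_cpus_held();
	mutex_lock(&flush_lock);	/* now nested inside cpus_read_lock() */

	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		if (!has_cpu_slab(cpu, s)) {
			sfw->skip = true;
			continue;
		}
		INIT_WORK(&sfw->work, flush_cpu_slab);
		sfw->skip = false;
		sfw->s = s;
		schedule_work_on(cpu, &sfw->work);
	}

	for_each_online_cpu(cpu) {
		sfw = &per_cpu(slub_flush, cpu);
		if (!sfw->skip)
			flush_work(&sfw->work);
	}

	mutex_unlock(&flush_lock);
}

static void flush_all(struct kmem_cache *s)
{
	cpus_read_lock();	/* taken first, fixing the reported inversion */
	flush_all_cpus_locked(s);
	cpus_read_unlock();
}

/*
 * slab_mem_going_offline_callback() already runs under cpus_read_lock()
 * (and slab_mutex), so it would call flush_all_cpus_locked() instead of
 * flush_all(), avoiding a recursive cpus_read_lock().
 */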