On 3/4/2025 9:55 AM, Paul E. McKenney wrote:
> On Mon, Mar 03, 2025 at 11:08:24AM -0500, Joel Fernandes wrote:
>> On Fri, Feb 28, 2025 at 01:13:56PM +0100, Uladzislau Rezki (Sony) wrote:
>>> Currently the kvfree_rcu() APIs use a system workqueue, namely
>>> "system_unbound_wq", to drive the RCU machinery that reclaims memory.
>>>
>>> Recently, it has been noted that the following kernel warning can
>>> be observed:
>>>
>>> <snip>
>>> workqueue: WQ_MEM_RECLAIM nvme-wq:nvme_scan_work is flushing !WQ_MEM_RECLAIM events_unbound:kfree_rcu_work
>>> WARNING: CPU: 21 PID: 330 at kernel/workqueue.c:3719 check_flush_dependency+0x112/0x120
>>> Modules linked in: intel_uncore_frequency(E) intel_uncore_frequency_common(E) skx_edac(E) ...
>>> CPU: 21 UID: 0 PID: 330 Comm: kworker/u144:6 Tainted: G E 6.13.2-0_g925d379822da #1
>>> Hardware name: Wiwynn Twin Lakes MP/Twin Lakes Passive MP, BIOS YMM20 02/01/2023
>>> Workqueue: nvme-wq nvme_scan_work
>>> RIP: 0010:check_flush_dependency+0x112/0x120
>>> Code: 05 9a 40 14 02 01 48 81 c6 c0 00 00 00 48 8b 50 18 48 81 c7 c0 00 00 00 48 89 f9 48 ...
>>> RSP: 0018:ffffc90000df7bd8 EFLAGS: 00010082
>>> RAX: 000000000000006a RBX: ffffffff81622390 RCX: 0000000000000027
>>> RDX: 00000000fffeffff RSI: 000000000057ffa8 RDI: ffff88907f960c88
>>> RBP: 0000000000000000 R08: ffffffff83068e50 R09: 000000000002fffd
>>> R10: 0000000000000004 R11: 0000000000000000 R12: ffff8881001a4400
>>> R13: 0000000000000000 R14: ffff88907f420fb8 R15: 0000000000000000
>>> FS: 0000000000000000(0000) GS:ffff88907f940000(0000) knlGS:0000000000000000
>>> CR2: 00007f60c3001000 CR3: 000000107d010005 CR4: 00000000007726f0
>>> PKRU: 55555554
>>> Call Trace:
>>>  <TASK>
>>>  ? __warn+0xa4/0x140
>>>  ? check_flush_dependency+0x112/0x120
>>>  ? report_bug+0xe1/0x140
>>>  ? check_flush_dependency+0x112/0x120
>>>  ? handle_bug+0x5e/0x90
>>>  ? exc_invalid_op+0x16/0x40
>>>  ? asm_exc_invalid_op+0x16/0x20
>>>  ? timer_recalc_next_expiry+0x190/0x190
>>>  ? check_flush_dependency+0x112/0x120
>>>  ? check_flush_dependency+0x112/0x120
>>>  __flush_work.llvm.1643880146586177030+0x174/0x2c0
>>>  flush_rcu_work+0x28/0x30
>>>  kvfree_rcu_barrier+0x12f/0x160
>>>  kmem_cache_destroy+0x18/0x120
>>>  bioset_exit+0x10c/0x150
>>>  disk_release.llvm.6740012984264378178+0x61/0xd0
>>>  device_release+0x4f/0x90
>>>  kobject_put+0x95/0x180
>>>  nvme_put_ns+0x23/0xc0
>>>  nvme_remove_invalid_namespaces+0xb3/0xd0
>>>  nvme_scan_work+0x342/0x490
>>>  process_scheduled_works+0x1a2/0x370
>>>  worker_thread+0x2ff/0x390
>>>  ? pwq_release_workfn+0x1e0/0x1e0
>>>  kthread+0xb1/0xe0
>>>  ? __kthread_parkme+0x70/0x70
>>>  ret_from_fork+0x30/0x40
>>>  ? __kthread_parkme+0x70/0x70
>>>  ret_from_fork_asm+0x11/0x20
>>>  </TASK>
>>> ---[ end trace 0000000000000000 ]---
>>> <snip>
>>>
>>> To address this, switch to an independent WQ_MEM_RECLAIM workqueue, so
>>> that the rules are not violated from the workqueue framework's point
>>> of view.
>>>
>>> Apart from that, since kvfree_rcu() does reclaim memory, it is worth
>>> going with a WQ_MEM_RECLAIM workqueue anyway, because that type is
>>> designed for exactly this purpose.
>>>
>>> Cc: <stable@xxxxxxxxxxxxxxx>
>>> Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
>>> Cc: Keith Busch <kbusch@xxxxxxxxxx>
>>> Closes: https://www.spinics.net/lists/kernel/msg5563270.html
>>> Fixes: 6c6c47b063b5 ("mm, slab: call kvfree_rcu_barrier() from kmem_cache_destroy()")
>>> Reported-by: Keith Busch <kbusch@xxxxxxxxxx>
>>> Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
>>
>> BTW, there is a path in RCU-tasks that involves queuing work on system_wq,
>> which is !WQ_MEM_RECLAIM. While I don't anticipate an issue such as the one
>> fixed by this patch, I am wondering if we should move these to their own
>> WQ_MEM_RECLAIM queues for added robustness, since otherwise memory pressure
>> could delay CB invocation (and thus memory freeing). Paul?
>
> For RCU Tasks, the memory traffic has been much lower. But maybe someday
> someone will drop a million trampolines all at once. But let's see that
> problem before we fix some random problem that we believe will happen,
> but which proves to be only slightly related to the problem that actually
> does happen. ;-)

Fair enough. ;-)

thanks,

 - Joel
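
For reference, a minimal sketch of the fix being discussed, i.e. giving the
kvfree_rcu() reclaim path its own WQ_MEM_RECLAIM workqueue instead of
system_unbound_wq, could look roughly like the following. The names here
(rcu_reclaim_wq, kvfree_rcu_wq_init, schedule_reclaim_work) are illustrative
only and are not necessarily those used in the actual patch:

<snip>
#include <linux/workqueue.h>

/* Hypothetical names; the actual patch may differ. */
static struct workqueue_struct *rcu_reclaim_wq;

static int kvfree_rcu_wq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM guarantees a rescuer thread, so queued work can
	 * make forward progress even under memory pressure, and flushing
	 * this queue from another WQ_MEM_RECLAIM workqueue does not trip
	 * check_flush_dependency().
	 */
	rcu_reclaim_wq = alloc_workqueue("kvfree_rcu_reclaim",
					 WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	return rcu_reclaim_wq ? 0 : -ENOMEM;
}

/*
 * Queueing sites then target the dedicated queue rather than
 * system_unbound_wq, for example:
 */
static void schedule_reclaim_work(struct delayed_work *dwork,
				  unsigned long delay)
{
	queue_delayed_work(rcu_reclaim_wq, dwork, delay);
}
<snip>

Because a WQ_MEM_RECLAIM workqueue has its own rescuer thread, it can make
forward progress under memory pressure, so flushing it from another
WQ_MEM_RECLAIM context (here nvme-wq reaching kvfree_rcu_barrier() through
kmem_cache_destroy()) no longer violates the flush-dependency rule shown in
the warning above.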