On Tue, Dec 17, 2024 at 10:20:47PM +0800, kernel test robot wrote:
>
>
> Hello,
>
> kernel test robot noticed "BUG:KASAN:slab-use-after-free_in__cpuhp_state_add_instance_cpuslocked" on:
>
> commit: 22465bbac53c821319089016f268a2437de9b00a ("blk-mq: move cpuhp callback registering out of q->sysfs_lock")
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>
> [test failed on linus/master 231825b2e1ff6ba799c5eaf396d3ab2354e37c6b]
> [test failed on linux-next/master 3e42dc9229c5950e84b1ed705f94ed75ed208228]
>
> in testcase: blktests
> version: blktests-x86_64-3617edd-1_20241105
> with following parameters:
>
>	disk: 1SSD
>	test: block-group-01
>
>
>
> config: x86_64-rhel-9.4-func
> compiler: gcc-12
> test machine: 4 threads Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (Skylake) with 32G memory
>
> (please refer to attached dmesg/kmsg for entire log/backtrace)
>
>
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
> | Closes: https://lore.kernel.org/oe-lkp/202412172217.b906db7c-lkp@xxxxxxxxx
>
>
> [  232.596698][ T3545] BUG: KASAN: slab-use-after-free in __cpuhp_state_add_instance_cpuslocked (include/linux/list.h:1026 kernel/cpu.c:2446)

Hello,

Thanks for the report!

Unfortunately I can't reproduce it in my test VM by running 'blktests block/030' with:

- two numa nodes
- CONFIG_BLK_DEV_NULL_BLK_FAULT_INJECTION enabled

But I just figured out that one freed hctx may still stay in the cpuhp callback list. Can you test the following patch?

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 92e8ddf34575..f655b34efffe 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4421,7 +4421,8 @@ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
 	/* reuse dead hctx first */
 	spin_lock(&q->unused_hctx_lock);
 	list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) {
-		if (tmp->numa_node == node) {
+		if (tmp->numa_node == node &&
+		    hlist_unhashed(&tmp->cpuhp_online)) {
 			hctx = tmp;
 			break;
 		}

Thanks,
Ming
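
As an aside (not part of the patch itself): the new check relies on the list.h convention that a node removed with hlist_del_init() is left "unhashed" (pprev == NULL), while a node still linked into the cpuhp callback list is not. Below is a minimal userspace sketch of that idea; the fake_hlist_* / fake_hctx names are illustrative stand-ins, not the real kernel implementation.

/*
 * Minimal userspace sketch (not kernel code) of the hlist_unhashed()
 * idea: a node removed with hlist_del_init() has pprev == NULL and is
 * safe to reuse, while a node still linked into a list must be skipped.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct fake_hlist_node {
	struct fake_hlist_node *next, **pprev;
};

struct fake_hlist_head {
	struct fake_hlist_node *first;
};

static void fake_hlist_add_head(struct fake_hlist_node *n, struct fake_hlist_head *h)
{
	n->next = h->first;
	if (h->first)
		h->first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

static void fake_hlist_del_init(struct fake_hlist_node *n)
{
	if (!n->pprev)
		return;			/* already unhashed */
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
	n->next = NULL;
	n->pprev = NULL;		/* marks the node as unhashed */
}

static bool fake_hlist_unhashed(const struct fake_hlist_node *n)
{
	return !n->pprev;
}

/* Stand-in for a hardware context parked on q->unused_hctx_list. */
struct fake_hctx {
	int numa_node;
	struct fake_hlist_node cpuhp_online;
};

int main(void)
{
	struct fake_hlist_head cpuhp_list = { .first = NULL };
	struct fake_hctx a = { .numa_node = 0, .cpuhp_online = { NULL, NULL } };
	struct fake_hctx b = { .numa_node = 0, .cpuhp_online = { NULL, NULL } };

	/* Both hctxs were registered with cpuhp at some point ... */
	fake_hlist_add_head(&a.cpuhp_online, &cpuhp_list);
	fake_hlist_add_head(&b.cpuhp_online, &cpuhp_list);

	/* ... but only 'a' was removed from the list before being parked. */
	fake_hlist_del_init(&a.cpuhp_online);

	printf("reuse a? %s\n", fake_hlist_unhashed(&a.cpuhp_online) ? "yes" : "no");
	printf("reuse b? %s\n", fake_hlist_unhashed(&b.cpuhp_online) ? "yes" : "no");
	return 0;
}

Built with gcc this prints "reuse a? yes" and "reuse b? no", mirroring why a parked hctx whose cpuhp_online node is still hashed must not be picked up for reuse.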