Hello Ming,

Could you take a look at this WARNING? With kernel v6.4-rc7, I observed a blktests block/027 failure due to a lockdep WARN [1]. The failure reproduces reliably on my test systems by running the test case after a system reboot. Lockdep reports a lock named blkg_stat_lock, which was introduced by the recent commit 20cb1c2fb756 ("blk-cgroup: Flush stats before releasing blkcg_gq"). When I reverted that commit from v6.4-rc7, the failure disappeared.

[1]

========================================================
WARNING: possible irq lock inversion dependency detected
6.4.0-rc7-kts #1 Not tainted
--------------------------------------------------------
fio/10956 just changed the state of lock:
ffffffff98da0a98 (blkg_stat_lock){+.-.}-{2:2}, at: __blkcg_rstat_flush.isra.0+0xe1/0x600
but this lock was taken by another, HARDIRQ-safe lock in the past:
 (per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu)){-.-.}-{2:2}

and interrupts could create inverse lock ordering between them.

other info that might help us debug this:
 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(blkg_stat_lock);
                               local_irq_disable();
                               lock(per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu));
                               lock(blkg_stat_lock);
  <Interrupt>
    lock(per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu));

 *** DEADLOCK ***

2 locks held by fio/10956:
 #0: ffffffff98a3fe00 (rcu_callback){....}-{0:0}, at: rcu_do_batch+0x300/0xcd0
 #1: ffffffff98a3ff20 (rcu_read_lock){....}-{1:2}, at: __blkcg_rstat_flush.isra.0+0x7d/0x600

the shortest dependencies between 2nd lock and 1st lock:
 -> (per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu)){-.-.}-{2:2} {
    IN-HARDIRQ-W at:
                      lock_acquire+0x196/0x4b0
                      _raw_spin_lock_irqsave+0x47/0x70
                      cgroup_rstat_updated+0xbf/0x430
                      __cgroup_account_cputime_field+0xbb/0x170
                      account_system_index_time+0x1b3/0x2e0
                      update_process_times+0x26/0x140
                      tick_sched_handle+0x67/0x130
                      tick_sched_timer+0xad/0xd0
                      __hrtimer_run_queues+0x4a9/0x8d0
                      hrtimer_interrupt+0x2f1/0x810
                      __sysvec_apic_timer_interrupt+0x143/0x3f0
                      sysvec_apic_timer_interrupt+0x8a/0xb0
                      asm_sysvec_apic_timer_interrupt+0x16/0x20
                      _raw_spin_unlock_irqrestore+0x36/0x60
                      __wake_up_common_lock+0xd4/0x120
                      percpu_up_write+0x75/0x90
                      cgroup_procs_write_finish+0xad/0xe0
                      __cgroup_procs_write+0x23e/0x410
                      cgroup_procs_write+0x13/0x20
                      cgroup_file_write+0x1b2/0x730
                      kernfs_fop_write_iter+0x356/0x530
                      vfs_write+0x4c2/0xca0
                      ksys_write+0xe7/0x1b0
                      do_syscall_64+0x58/0x80
                      entry_SYSCALL_64_after_hwframe+0x72/0xdc
    IN-SOFTIRQ-W at:
                      lock_acquire+0x196/0x4b0
                      _raw_spin_lock_irqsave+0x47/0x70
                      cgroup_rstat_updated+0xbf/0x430
                      __mod_memcg_state+0x9d/0x180
                      mod_memcg_state+0x3e/0x60
                      memcg_account_kmem+0x18/0x50
                      refill_obj_stock+0x430/0x740
                      kmem_cache_free+0x2a4/0x330
                      rcu_do_batch+0x34e/0xcd0
                      rcu_core+0x8a6/0xdd0
                      __do_softirq+0x1d7/0x857
                      __irq_exit_rcu+0xfe/0x260
                      irq_exit_rcu+0xa/0x30
                      sysvec_apic_timer_interrupt+0x8f/0xb0
                      asm_sysvec_apic_timer_interrupt+0x16/0x20
                      cpuidle_enter_state+0x29f/0x340
                      cpuidle_enter+0x4a/0xa0
                      do_idle+0x340/0x430
                      cpu_startup_entry+0x19/0x20
                      start_secondary+0x22f/0x2c0
                      __pfx_verify_cpu+0x0/0x10
    INITIAL USE at:
                     lock_acquire+0x196/0x4b0
                     _raw_spin_lock_irqsave+0x47/0x70
                     cgroup_rstat_flush_locked+0x124/0x10d0
                     cgroup_rstat_flush+0x38/0x50
                     do_flush_stats+0xa9/0x110
                     flush_memcg_stats_dwork+0xc/0x60
                     process_one_work+0x81f/0x1330
                     worker_thread+0x100/0x12c0
                     kthread+0x2e7/0x3c0
                     ret_from_fork+0x29/0x50
  }
  ... key at: [<ffffffff9b85e500>] __key.0+0x0/0x40
  ... acquired at:
   _raw_spin_lock+0x2f/0x40
   __blkcg_rstat_flush.isra.0+0xe1/0x600
   cgroup_rstat_flush_locked+0x724/0x10d0
   cgroup_rstat_flush_atomic+0x23/0x40
   do_flush_stats+0xeb/0x110
   mem_cgroup_wb_stats+0x346/0x420
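
If I read the dependency chain right, blkg_stat_lock is acquired with a plain spin_lock() in __blkcg_rstat_flush() (which can run from an RCU callback, per the held locks above), while the HARDIRQ-safe per-cpu cgroup_rstat_cpu_lock is held around the same acquisition on the flush path, so an interrupt arriving in between can invert the order. The usual way to break such an inversion is to take the inner lock with interrupts disabled. Just as an untested sketch of that direction (the surrounding function body is elided, and I have not checked whether this is the right fix or whether it regresses anything):

```c
/* Sketch only, not a tested patch: make blkg_stat_lock IRQ-safe
 * inside __blkcg_rstat_flush() so it can no longer be interrupted
 * while held and then re-taken under cgroup_rstat_cpu_lock.
 */
	unsigned long flags;

	/* was: spin_lock(&blkg_stat_lock); */
	spin_lock_irqsave(&blkg_stat_lock, flags);
	/* ... propagate the per-cpu iostat deltas as before ... */
	spin_unlock_irqrestore(&blkg_stat_lock, flags);
	/* was: spin_unlock(&blkg_stat_lock); */
```

Does something along these lines look reasonable to you, or is reverting 20cb1c2fb756 the better course for v6.4?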