Re: Blockdev 6.13-rc lockdep splat regressions

On Fri, 2025-01-10 at 20:13 +0800, Ming Lei wrote:
> On Fri, Jan 10, 2025 at 11:12:58AM +0100, Thomas Hellström wrote:
> > Ming, Others
> > 
> > On 6.13-rc6 I'm seeing a couple of lockdep splats which appear
> > introduced by the commit
> > 
> > f1be1788a32e ("block: model freeze & enter queue as lock for
> > supporting
> > lockdep")
> 
> The freeze lock connects all kinds of sub-system locks, which is why
> we see lots of warnings now that the commit has been merged.
> 
> ...
> 
> > #1
> > [  399.006581]
> > ======================================================
> > [  399.006756] WARNING: possible circular locking dependency
> > detected
> > [  399.006767] 6.12.0-rc4+ #1 Tainted: G     U           N
> > [  399.006776] ----------------------------------------------------
> > --
> > [  399.006801] kswapd0/116 is trying to acquire lock:
> > [  399.006810] ffff9a67a1284a28 (&q->q_usage_counter(io)){++++}-
> > {0:0},
> > at: __submit_bio+0xf0/0x1c0
> > [  399.006845] 
> >                but task is already holding lock:
> > [  399.006856] ffffffff8a65bf00 (fs_reclaim){+.+.}-{0:0}, at:
> > balance_pgdat+0xe2/0xa20
> > [  399.006874] 
> 
> The above one is solved in for-6.14/block of block tree:
> 
> 	block: track queue dying state automatically for modeling
> 	queue freeze lockdep

Hmm. I applied this series:

https://patchwork.kernel.org/project/linux-block/list/?series=912824&archive=both

on top of -rc6, but it didn't resolve that splat. Am I using the
correct patches?

Perhaps it would be a good idea to reclaim-prime the lockdep maps of
locks taken during reclaim, so that these splats show up early and
deterministically rather than only under actual memory pressure.
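Something along these lines at queue initialization time, as a rough
sketch only (the fs_reclaim_acquire()/fs_reclaim_release() pair and the
freeze helpers are the existing mainline interfaces; where exactly in
queue setup this would sit is an open question):

```c
/*
 * Sketch: prime q->q_usage_counter's lockdep map against fs_reclaim
 * once at queue init, so the reclaim -> freeze inversion is reported
 * immediately instead of only when kswapd actually recurses into
 * __submit_bio() during reclaim.
 */
#ifdef CONFIG_LOCKDEP
	fs_reclaim_acquire(GFP_KERNEL);
	blk_mq_freeze_queue(q);
	blk_mq_unfreeze_queue(q);
	fs_reclaim_release(GFP_KERNEL);
#endif
```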

Thanks,
Thomas


> 
> > 
> > #2:
> > [   81.960829]
> > ======================================================
> > [   81.961010] WARNING: possible circular locking dependency
> > detected
> > [   81.961048] 6.12.0-rc4+ #3 Tainted: G     U            
> 
> ...
> 
> >                -> #3 (&q->limits_lock){+.+.}-{4:4}:
> > [   81.967815]        __mutex_lock+0xad/0xb80
> > [   81.968133]        nvme_update_ns_info_block+0x128/0x870
> > [nvme_core]
> > [   81.968456]        nvme_update_ns_info+0x41/0x220 [nvme_core]
> > [   81.968774]        nvme_alloc_ns+0x8a6/0xb50 [nvme_core]
> > [   81.969090]        nvme_scan_ns+0x251/0x330 [nvme_core]
> > [   81.969401]        async_run_entry_fn+0x31/0x130
> > [   81.969703]        process_one_work+0x21a/0x590
> > [   81.970004]        worker_thread+0x1c3/0x3b0
> > [   81.970302]        kthread+0xd2/0x100
> > [   81.970603]        ret_from_fork+0x31/0x50
> > [   81.970901]        ret_from_fork_asm+0x1a/0x30
> > [   81.971195] 
> >                -> #2 (&q->q_usage_counter(io)){++++}-{0:0}:
> 
> The above dependency is killed by Christoph's patch.
> 
> 
> Thanks,
> Ming
> 
