Re: Blockdev 6.13-rc lockdep splat regressions

Hi.

On Mon, 2025-01-13 at 08:55 +0800, Ming Lei wrote:
> On Sun, Jan 12, 2025 at 06:44:53PM +0100, Thomas Hellström wrote:
> > On Sun, 2025-01-12 at 23:50 +0800, Ming Lei wrote:
> > > On Sun, Jan 12, 2025 at 12:33:13PM +0100, Thomas Hellström wrote:
> > > > On Sat, 2025-01-11 at 11:05 +0800, Ming Lei wrote:
> > > 
> > > ...
> > > 
> > > > 
> > > > Ah, you're right, it's a different warning this time. I've
> > > > posted the warning below. (Note: this is also with Christoph's
> > > > series applied on top.)
> > > > 
> > > > May I also humbly suggest the following lockdep priming, to be
> > > > able to catch the reclaim lockdep splats early without reclaim
> > > > needing to happen. That will also pick up splat #2 below.
> > > > 
> > > > 8<-------------------------------------------------------------
> > > > 
> > > > diff --git a/block/blk-core.c b/block/blk-core.c
> > > > index 32fb28a6372c..2dd8dc9aed7f 100644
> > > > --- a/block/blk-core.c
> > > > +++ b/block/blk-core.c
> > > > @@ -458,6 +458,11 @@ struct request_queue
> > > > *blk_alloc_queue(struct
> > > > queue_limits *lim, int node_id)
> > > >  
> > > >         q->nr_requests = BLKDEV_DEFAULT_RQ;
> > > >  
> > > > +       fs_reclaim_acquire(GFP_KERNEL);
> > > > +       rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
> > > > +       rwsem_release(&q->io_lockdep_map, _RET_IP_);
> > > > +       fs_reclaim_release(GFP_KERNEL);
> > > > +
> > > >         return q;
> > > 
> > > Looks like a nice idea for injecting fs_reclaim; maybe it can be
> > > added to the fault-injection framework?
> > 
> > For the Intel GPU drivers, we typically always prime lockdep like
> > this if we *know* that the lock will be grabbed during reclaim,
> > for example if it's part of shrinker processing or similar.
> > 
> > So sooner or later we *know* this sequence will happen, so we add
> > it near the lock initialization, to always be executed when the
> > lock(map) is initialized.
> > 
> > So I don't really see a need for them to be periodically injected?
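> > 
> > (A minimal sketch of that priming pattern; the lock below is a
> > hypothetical example, and fs_reclaim_acquire()/fs_reclaim_release()
> > are used just like in the diff above:)
> > 
> > #include <linux/gfp.h>
> > #include <linux/mutex.h>
> > #include <linux/sched/mm.h>
> > 
> > /* Hypothetical lock known to be taken from shrinker/reclaim paths. */
> > static DEFINE_MUTEX(my_shrinker_lock);
> > 
> > static void my_shrinker_lock_prime(void)
> > {
> > 	/*
> > 	 * Record the fs_reclaim -> my_shrinker_lock dependency once at
> > 	 * init time, so lockdep flags an inverted chain immediately
> > 	 * instead of waiting for real reclaim to take the lock.
> > 	 */
> > 	fs_reclaim_acquire(GFP_KERNEL);
> > 	mutex_lock(&my_shrinker_lock);
> > 	mutex_unlock(&my_shrinker_lock);
> > 	fs_reclaim_release(GFP_KERNEL);
> > }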
> 
> What I suggested is to add this verification for every allocation
> that can do direct reclaim, behind a kernel config option which
> depends on both lockdep and fault injection.

> 
> > 
> > > 
> > > >  
> > > >  fail_stats:
> > > > 
> > > > 8<-------------------------------------------------------------
> > > > 
> > > > #1:
> > > >   106.921533]
> > > > ======================================================
> > > > [  106.921716] WARNING: possible circular locking dependency
> > > > detected
> > > > [  106.921725] 6.13.0-rc6+ #121 Tainted: G     U            
> > > > [  106.921734] ------------------------------------------------
> > > > ----
> > > > --
> > > > [  106.921743] kswapd0/117 is trying to acquire lock:
> > > > [  106.921751] ffff8ff4e2da09f0 (&q-
> > > > >q_usage_counter(io)){++++}-
> > > > {0:0},
> > > > at: __submit_bio+0x80/0x220
> > > > [  106.921769] 
> > > >                but task is already holding lock:
> > > > [  106.921778] ffffffff8e65e1c0 (fs_reclaim){+.+.}-{0:0}, at:
> > > > balance_pgdat+0xe2/0xa10
> > > > [  106.921791] 
> > > >                which lock already depends on the new lock.
> > > > 
> > > > [  106.921803] 
> > > >                the existing dependency chain (in reverse order)
> > > > is:
> > > > [  106.921814] 
> > > >                -> #1 (fs_reclaim){+.+.}-{0:0}:
> > > > [  106.921824]        fs_reclaim_acquire+0x9d/0xd0
> > > > [  106.921833]        __kmalloc_cache_node_noprof+0x5d/0x3f0
> > > > [  106.921842]        blk_mq_init_tags+0x3d/0xb0
> > > > [  106.921851]        blk_mq_alloc_map_and_rqs+0x4e/0x3d0
> > > > [  106.921860]        blk_mq_init_sched+0x100/0x260
> > > > [  106.921868]        elevator_switch+0x8d/0x2e0
> > > > [  106.921877]        elv_iosched_store+0x174/0x1e0
> > > > [  106.921885]        queue_attr_store+0x142/0x180
> > > > [  106.921893]        kernfs_fop_write_iter+0x168/0x240
> > > > [  106.921902]        vfs_write+0x2b2/0x540
> > > > [  106.921910]        ksys_write+0x72/0xf0
> > > > [  106.921916]        do_syscall_64+0x95/0x180
> > > > [  106.921925]        entry_SYSCALL_64_after_hwframe+0x76/0x7e
> > > 
> > > That is another regression, from commit
> > > 
> > > 	af2814149883 ("block: freeze the queue in queue_attr_store")
> > > 
> > > and queue_wb_lat_store() has the same risk.
> > > 
> > > I will cook a patch to fix it.
> > 
> > Thanks. Are these splats going to be silenced for 6.13-rc? For
> > example, by having the new lockdep checks under a special config
> > until they are fixed?
> 
> It is too late for v6.13, and Christoph's fix won't be available for
> v6.13 either.

Yeah, I was thinking more of the lockdep warnings themselves, rather
than the actual deadlock fixes?

Thanks,
Thomas

> 
> 
> Thanks,
> Ming
> 
