On Sat, 28 Dec 2024 01:52:28 -0800 Boqun Feng <boqun.feng@xxxxxxxxx>
> On Fri, Dec 27, 2024 at 06:03:45PM -0800, Suren Baghdasaryan wrote:
> > On Fri, Dec 27, 2024 at 4:19 PM Hillf Danton <hdanton@xxxxxxxx> wrote:
> > > On Fri, 27 Dec 2024 04:59:22 -0800
> > > >
> > > >        CPU0                    CPU1
> > > >        ----                    ----
> > > >   lock(&po->pg_vec_lock);
> > > >                                lock(&mm->mmap_lock);
> > > >                                lock(&po->pg_vec_lock);
> > > >   lock(&vma->vm_lock->lock);
> > > >
> > > >  *** DEADLOCK ***
> > > >
> > > > 2 locks held by syz.8.396/8273:
> > > >  #0: ffff0000d6a2cc10 (&mm->mmap_lock){++++}-{4:4}, at: mmap_write_lock_killable include/linux/mmap_lock.h:122 [inline]
> > > >  #0: ffff0000d6a2cc10 (&mm->mmap_lock){++++}-{4:4}, at: vm_mmap_pgoff+0x154/0x38c mm/util.c:578
> > > >  #1: ffff0000d4aa2868 (&po->pg_vec_lock){+.+.}-{4:4}, at: packet_mmap+0x9c/0x4c8 net/packet/af_packet.c:4650
> > > >
> > > Given &mm->mmap_lock and &po->pg_vec_lock in same locking order on both sides,
> > > this deadlock report is bogus. Due to lockdep glitch?
> >
> What do you mean by "both sides"? Note that, here is the report saying

CPU0/1 in the lockdep diagram above.

> the locks that are already held by the current task, and that current
> task is going to acquire &vma->vm_lock->lock, so lockdep finds new
> dependency:

Note the current task acquires &po->pg_vec_lock after taking &mm->mmap_lock,
and it is the &mm->mmap_lock (ignored by lockdep?) that makes the report
look bogus.

>
> 	&po->pg_vec_lock --> &vma->vm_lock->lock
>
> and there will be a circular dependency because (see above) lockdep
> recorded a dependency chain that:
>
> 	&vma->vm_lock->lock --> ... --> &po->pg_vec_lock
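
For anyone trying to follow the two orderings being argued about, here is a
minimal userspace sketch, not kernel code: the pthread mutexes mmap_lock,
pg_vec_lock and vm_lock are only stand-ins for &mm->mmap_lock,
&po->pg_vec_lock and &vma->vm_lock->lock, and the second path simply elides
whatever intermediate locks hide behind the "..." in the recorded chain. It
exercises the mmap-side order (mmap_lock, then pg_vec_lock, then the per-VMA
lock) and the previously recorded order (vm_lock before pg_vec_lock). Run
sequentially nothing hangs, which is why such a splat is about lock-class
ordering rather than an observed hang.

/*
 * Illustration only: plain pthread mutexes stand in for the kernel
 * locks named in the report above.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mmap_lock   = PTHREAD_MUTEX_INITIALIZER; /* &mm->mmap_lock       */
static pthread_mutex_t pg_vec_lock = PTHREAD_MUTEX_INITIALIZER; /* &po->pg_vec_lock     */
static pthread_mutex_t vm_lock     = PTHREAD_MUTEX_INITIALIZER; /* &vma->vm_lock->lock  */

/*
 * Path A, like the task in the held-locks list above: mmap() of a packet
 * socket.  It holds mmap_lock (vm_mmap_pgoff) and pg_vec_lock (packet_mmap),
 * then wants the per-VMA lock, adding the edge pg_vec_lock --> vm_lock.
 */
static void path_packet_mmap(void)
{
	pthread_mutex_lock(&mmap_lock);
	pthread_mutex_lock(&pg_vec_lock);
	pthread_mutex_lock(&vm_lock);
	pthread_mutex_unlock(&vm_lock);
	pthread_mutex_unlock(&pg_vec_lock);
	pthread_mutex_unlock(&mmap_lock);
}

/*
 * Path B, the previously recorded chain
 * vm_lock --> ... --> pg_vec_lock, with the intermediate locks elided
 * just as in the "..." quoted above.
 */
static void path_recorded_chain(void)
{
	pthread_mutex_lock(&vm_lock);
	pthread_mutex_lock(&pg_vec_lock);
	pthread_mutex_unlock(&pg_vec_lock);
	pthread_mutex_unlock(&vm_lock);
}

int main(void)
{
	/*
	 * Run one after the other: nothing hangs here, but the two orders
	 * together form the cycle pg_vec_lock --> vm_lock --> ... -->
	 * pg_vec_lock, which is what the CPU0/CPU1 diagram describes.
	 */
	path_packet_mmap();
	path_recorded_chain();
	printf("both acquisition orders exercised\n");
	return 0;
}

Whether that cycle can really be closed at run time, given that
packet_mmap() only takes &po->pg_vec_lock after &mm->mmap_lock is already
held, is exactly the question raised above.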