Re: XFS: Assertion failed: !rwsem_is_locked(&inode->i_rwsem)

On Wed, Jun 20, 2018 at 09:29:54AM +1000, Dave Chinner wrote:
> On Tue, Jun 19, 2018 at 10:44:20AM -0600, Ross Zwisler wrote:
> > On Tue, Jun 19, 2018 at 12:32:46PM +1000, Dave Chinner wrote:
> > > On Mon, Jun 18, 2018 at 08:17:46PM -0600, Ross Zwisler wrote:
> > > > During some xfstest runs on next-20180615 I hit the following with DAX +
> > > > generic/388:
> > > > 
> > > > ================================================
> > > > WARNING: lock held when returning to user space!
> > > > 4.17.0-next-20180615-00001-gf09d99951966 #2 Not tainted
> > > > ------------------------------------------------
> > > > fsstress/6598 is leaving the kernel with locks still held!
> > > > 2 locks held by fsstress/6598:
> > > >  #0: 00000000d8f89e14 (&sb->s_type->i_mutex_key#13){++++}, at: xfs_ilock+0x211/0x310
> > > >  #1: 000000005cc93137 (&(&ip->i_mmaplock)->mr_lock){++++}, at: xfs_ilock+0x1eb/0x310
> > > 
> > > What errors occurred before this? generic/388 is testing all sorts
> > > of error paths by randomly shutting down the filesystem, so it's
> > > entirely possible that we've leaked those locks (XFS_IOLOCK and
> > > XFS_MMAPLOCK) on some rarely travelled error path. The prior errors
> > > might help identify that path.
> > 
> > Here is the full output from another reproduction:
> ....
> >  XFS (pmem0p2): DAX enabled. Warning: EXPERIMENTAL, use at your own risk
> >  XFS (pmem0p2): Mounting V5 Filesystem
> >  XFS (pmem0p2): Starting recovery (logdev: internal)
> >  XFS (pmem0p2): Ending recovery (logdev: internal)
> >  XFS (pmem0p2): xfs_imap_lookup: xfs_ialloc_read_agi() returned error -5, agno 0
> >  
> >  ================================================
> >  WARNING: lock held when returning to user space!
> >  4.17.0-next-20180615 #1 Not tainted
> >  ------------------------------------------------
> 
> Ok, nothing extra to go on there. Can you get lockdep to dump the
> stack or oops so we at least know what syscall was being run when
> this is detected?

Well, I can't seem to reproduce this reliably anymore.  :(

I did reproduce it once with the debug you requested, and the syscall that was
being run when we found the lock imbalance was __x64_sys_ioctl, for whatever
that's worth.


