Re: XFS reclaim lock order bug

On Thu, 2010-11-25 at 14:48 +1100, Nick Piggin wrote:
> On Wed, Nov 24, 2010 at 03:03:41PM -0500, Christoph Hellwig wrote:
> > On Wed, Nov 24, 2010 at 08:12:58AM +1100, Dave Chinner wrote:
> > > It is supposed to be handled by the re-initialisation of the
> > > ip->i_iolock in ->evict_inode (xfs_fs_evict_inode). An inode found
> > > in the reclaim state must have passed through this reinitialisation,
> > > so from a lockdep perspective the iolock in the vfs path is a
> > > different context to the iolock in the reclaim path. That fixed all
> > > the non-reclaim state related lockdep false positives, so perhaps
> > > there is an issue with the lockdep reclaim state checking that does
> > > not interact well with re-initialised lock contexts?
> > 
> > I've been looking through this again, and I think it's indeed not
> > enough.  We don't just need to re-initialize it, but also set a
> > different lockclass for it.
> 
> Doesn't init_rwsem give it a new class?

Per call-site, yes it should.
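
For reference, init_rwsem() is roughly this (from the kernel headers of
that era, so take the exact shape with a grain of salt):

  #define init_rwsem(sem)					\
  do {								\
  	static struct lock_class_key __key;			\
  								\
  	__init_rwsem((sem), #sem, &__key);			\
  } while (0)

i.e. every textual call-site gets its own static lock_class_key and
therefore its own lockdep class.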

> Guys, can you take a quick look at the code Dave is referring to
> (xfs_fs_evict_inode), and check that it actually does what he
> intends?

Right, so this is trying to set a different class from the regular init
site, which (/me applies grep) lives in xfs_inode_alloc(), right?

Ought to work, assuming the inode will be fully destroyed and new
inodes are always obtained through xfs_inode_alloc() rather than reused.
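
(The regular init site does the same thing the patch below repeats,
i.e. something like:

	mrlock_init(&ip->i_iolock, MRLOCK_BARRIER, "xfsio", ip->i_ino);

and, if I read mrlock.h right, mrlock_init() bottoms out in
init_rwsem() on ->mr_lock, so the re-init in ->evict_inode really is a
second call-site and hence a second class.)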

> We're getting what seems to be false positives in reclaim inversion
> of lockings. Backtraces here
> http://oss.sgi.com/pipermail/xfs/2010-November/048092.html

Right, so there it's holding the inode's iolock in the read path while
taking a page fault, which does an allocation.

vs

acquiring the inode's iolock in the xfs_reclaim_inode_shrink() path.
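
A rough sketch of the two orderings (the full traces are in the link
above; the function names here are from my reading of the code, so
treat them as illustrative):

  read path:                           reclaim path:
    xfs_ilock(ip, XFS_IOLOCK_SHARED)     shrinker
    copy to the user buffer                xfs_reclaim_inode_shrink()
      page fault                             xfs_ilock(ip, XFS_IOLOCK_EXCL)
        page allocation
          may recurse into reclaim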


Presumably the whole xfs_fs_evict_inode() stuff will only happen once
it's no longer possible to end up in that read path?


Something like the below would give the lock-class an explicit name;
right now both sites use the exact same init, so the two classes are
called:

  (&(&ip->i_iolock)->mr_lock)
  (&(&ip->i_iolock)->mr_lock#2)

Those are hard to tell apart, but I suspect #2 is the dead one, since
they get numbered in order of appearance and it's hard to have a dead
inode before having a live one ;-)

In that case though, it would suggest the inode got re-used instead of
destroyed and re-created via xfs_inode_alloc(); is that at all
possible?

---
 fs/xfs/linux-2.6/xfs_super.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_super.c b/fs/xfs/linux-2.6/xfs_super.c
index 064f964..721c1c5 100644
--- a/fs/xfs/linux-2.6/xfs_super.c
+++ b/fs/xfs/linux-2.6/xfs_super.c
@@ -1091,6 +1091,8 @@ xfs_fs_write_inode(
 	return -error;
 }
 
+static struct lock_class_key xfs_dead_inode;
+
 STATIC void
 xfs_fs_evict_inode(
 	struct inode		*inode)
@@ -1118,6 +1120,8 @@ xfs_fs_evict_inode(
 	 */
 	ASSERT(!rwsem_is_locked(&ip->i_iolock.mr_lock));
 	mrlock_init(&ip->i_iolock, MRLOCK_BARRIER, "xfsio", ip->i_ino);
+	lockdep_set_class_and_name(&ip->i_iolock.mr_lock, &xfs_dead_inode,
+			"xfs_dead_inode");
 
 	xfs_inactive(ip);
 }
