[PATCH 0/5] xfs: xfs_iflush_cluster vs xfs_reclaim_inode

Hi folks,

There is a problem that RHEL QE tripped over in a long-running
fsstress test on a RHEL 6.6 kernel. Brian did all the hard work of
working out the initial cause of the GPF that was being tripped, but
had not yet worked out how to fix the issues around xfs_free_inode.
The code in question is the same as the current upstream code, so
the problem still exists upstream....

The first patch fixes an obvious (now!) bug in xfs_iflush_cluster
where it checks the wrong inode for validity after the lookup. It
still kind-of works, because the correct inode number is used for
the "are we still in the right cluster" check, so it's not quite a
hole the size of a truck. Still, it's something that should not have
slipped through 6 years ago, nor gone undiscovered until now...
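
For reference, the buggy shape looks something like this. This is a
paraphrased sketch rather than the exact upstream code; 'ip' is the
inode being flushed, 'iq' is the inode the RCU lookup just returned:

	for (i = 0; i < nr_found; i++) {
		iq = ilist[i];
		if (iq == ip)
			continue;

		/*
		 * BUG: the validity check after the RCU radix tree
		 * lookup runs against 'ip', the inode we are flushing
		 * and already know to be valid, instead of 'iq', the
		 * inode the lookup just found.
		 */
		spin_lock(&ip->i_flags_lock);
		if (!ip->i_ino) {
			spin_unlock(&ip->i_flags_lock);
			continue;
		}
		spin_unlock(&ip->i_flags_lock);

		/* ... go on to flush iq ... */
	}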

The most important patch (#4) addresses the use-after-free issues
that the XFS inode has w.r.t. RCU freeing and the lookup that
xfs_iflush_cluster does under the rcu_read_lock. All the
structures accessed under RCU context need to be freed only after the
current RCU grace period expires, as RCU lookups may attempt to
access them at any time during that grace period. Hence we have to
move the freeing into the RCU callback so that we don't tear the
structures down prematurely.
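
The shape of the fix is the standard call_rcu() deferral pattern.
Here's a minimal sketch of the idea - not the actual patch, and the
helper names are approximate:

	/*
	 * Sketch only: defer inode teardown to an RCU callback so that
	 * concurrent rcu_read_lock() lookups can never touch freed
	 * memory.
	 */
	static void xfs_inode_free_callback(struct rcu_head *head)
	{
		struct inode	*inode = container_of(head, struct inode,
						      i_rcu);
		struct xfs_inode *ip = XFS_I(inode);

		/* grace period has expired - no lookup can reach us now */
		kmem_zone_free(xfs_inode_zone, ip);
	}

	static void xfs_inode_free(struct xfs_inode *ip)
	{
		/*
		 * Invalidate the inode under the i_flags_lock first, so
		 * a racing lookup sees it as dead before the grace
		 * period starts...
		 */
		spin_lock(&ip->i_flags_lock);
		ip->i_flags = XFS_IRECLAIM;
		ip->i_ino = 0;
		spin_unlock(&ip->i_flags_lock);

		/* ...then free everything after the grace period expires. */
		call_rcu(&VFS_I(ip)->i_rcu, xfs_inode_free_callback);
	}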

The rest of the patches are defensive, in that they make
xfs_iflush_cluster act only on relevant inodes and guarantee
detection of inodes that are in the process of being freed. While
these aren't absolutely necessary, it seems silly to ignore such
obvious issues while I'm fixing up other problems in the same code.
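
In practice, that means any inode found under rcu_read_lock() has to
be re-validated under its i_flags_lock before it is used. A sketch of
what the hardened check might look like - again approximate, not the
literal patch:

	/*
	 * Sketch: the i_flags_lock serialises against reclaim marking
	 * the inode dead, so a zero i_ino or the XFS_IRECLAIM flag
	 * reliably identifies an inode being freed, and the AGINO mask
	 * check catches inodes from the wrong cluster.
	 */
	spin_lock(&iq->i_flags_lock);
	if (!iq->i_ino ||
	    __xfs_iflags_test(iq, XFS_IRECLAIM) ||
	    (XFS_INO_TO_AGINO(mp, iq->i_ino) & mask) != first_index) {
		spin_unlock(&iq->i_flags_lock);
		continue;	/* freed, reallocated or wrong cluster */
	}
	spin_unlock(&iq->i_flags_lock);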

There's some more detail on the fixes in the commit descriptions.

Brian, I've only run this through xfstests, so I have no real idea
whether it fixes the problem fsstress has uncovered. AIUI, it takes 3
or 4 days to reproduce the issue, so this is kind of a pre-emptive
strike at what I think is the underlying issue, based on your
description and commentary. I figured having code to explain the
problems would save some time while you sleep....

Comments, thoughts, testing and flames all welcome...

-Dave.
