On Wed, Jun 02, 2021 at 08:12:46PM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
> 
> When we decide to mark an inode sick, clear the DONTCACHE flag so that
> the incore inode will be kept around until memory pressure forces it out
> of memory.  This increases the chances that the sick status will be
> caught by someone compiling a health report later on.
> 
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_health.c |    5 +++++
>  fs/xfs/xfs_icache.c |    3 ++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> 
> diff --git a/fs/xfs/xfs_health.c b/fs/xfs/xfs_health.c
> index 8e0cb05a7142..824e0b781290 100644
> --- a/fs/xfs/xfs_health.c
> +++ b/fs/xfs/xfs_health.c
> @@ -231,6 +231,11 @@ xfs_inode_mark_sick(
>  	ip->i_sick |= mask;
>  	ip->i_checked |= mask;
>  	spin_unlock(&ip->i_flags_lock);
> +
> +	/* Keep this inode around so we don't lose the sickness report. */
> +	spin_lock(&VFS_I(ip)->i_lock);
> +	VFS_I(ip)->i_state &= ~I_DONTCACHE;
> +	spin_unlock(&VFS_I(ip)->i_lock);
>  }

Dentries will still be reclaimed, but the VFS will at least hold on to
the inode in this case.

>  /* Mark parts of an inode healed. */
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index c3f912a9231b..0e2b6c05e604 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -23,6 +23,7 @@
>  #include "xfs_dquot.h"
>  #include "xfs_reflink.h"
>  #include "xfs_ialloc.h"
> +#include "xfs_health.h"
>  
>  #include <linux/iversion.h>
>  
> @@ -648,7 +649,7 @@ xfs_iget_cache_miss(
>  	 * time.
>  	 */
>  	iflags = XFS_INEW;
> -	if (flags & XFS_IGET_DONTCACHE)
> +	if ((flags & XFS_IGET_DONTCACHE) && xfs_inode_is_healthy(ip))

Hmmmm. xfs_inode_is_healthy() is kind of heavyweight for just checking
that ip->i_sick == 0. At this point, nobody else can be accessing the
inode, so we don't need masks nor a spinlock for checking the sick
field.

So why not:

	if ((flags & XFS_IGET_DONTCACHE) && !READ_ONCE(ip->i_sick))

Or maybe still use xfs_inode_is_healthy() but convert it to the
simpler, lockless sick check?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
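
For illustration, the second option above (keeping xfs_inode_is_healthy()
but converting it to the lockless check) might look roughly like the sketch
below. This is only a sketch, not code from the posted patch or from
upstream; it assumes a plain READ_ONCE() of i_sick is acceptable for the
helper's other callers too, which would still need checking.

	/*
	 * Sketch of a lockless health check.  In the iget cache-miss path
	 * nobody else can access the inode yet, so reading i_sick with
	 * READ_ONCE() suffices -- no mask and no i_flags_lock needed.
	 */
	bool
	xfs_inode_is_healthy(
		struct xfs_inode	*ip)
	{
		return READ_ONCE(ip->i_sick) == 0;
	}

With the helper reduced to that unlocked test, the hunk in
xfs_iget_cache_miss() could stay exactly as posted:
if ((flags & XFS_IGET_DONTCACHE) && xfs_inode_is_healthy(ip)).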