On Sat, Mar 20, 2021 at 09:40:07AM -0700, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
>
> While running some fuzz tests on inode metadata, I noticed that the
> filesystem health report (as provided by xfs_spaceman) failed to report
> the file corruption even when spaceman was run immediately after running
> xfs_scrub to detect the corruption.  That isn't the intended behavior;
> one ought to be able to run scrub to detect errors in the ondisk
> metadata and be able to access those reports for some time after the
> scrub.
>
> After running the same sequence through an instrumented kernel, I
> discovered the reason why -- scrub igets the file, scans it, marks it
> sick, and ireleases the inode.  When the VFS lets go of the incore
> inode, it moves to RECLAIMABLE state.  If spaceman igets the incore
> inode before it moves to RECLAIM state, iget reinitializes the VFS
> state, clears the sick and checked masks, and hands back the inode.  At
> this point, the caller has the exact same incore inode, but with all the
> health state erased.
>
> In other words, we're erasing the incore inode's health state flags when
> we've decided NOT to sever the link between the incore inode and the
> ondisk inode.  This is wrong, so we need to remove the lines that zero
> the fields from xfs_iget_cache_hit.
>
> As a precaution, we add the same lines into xfs_reclaim_inode just after
> we sever the link between the incore and ondisk inodes.  Strictly
> speaking this isn't necessary because once an inode has gone through
> reclaim it must go through xfs_inode_alloc (which also zeroes the state)
> and xfs_iget is careful to check for mismatches between the inode it
> pulls out of the radix tree and the one it wants.
>
> Fixes: 6772c1f11206 ("xfs: track metadata health status")
> Signed-off-by: Darrick J. Wong <djwong@xxxxxxxxxx>
> ---
>  fs/xfs/xfs_icache.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 595bda69b18d..5325fa28d099 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -587,8 +587,6 @@ xfs_iget_cache_hit(
>  	ip->i_flags |= XFS_INEW;
>  	xfs_inode_clear_reclaim_tag(pag, ip->i_ino);
>  	inode->i_state = I_NEW;
> -	ip->i_sick = 0;
> -	ip->i_checked = 0;
>
>  	spin_unlock(&ip->i_flags_lock);
>  	spin_unlock(&pag->pag_ici_lock);
> @@ -1205,6 +1203,8 @@ xfs_reclaim_inode(
>  	spin_lock(&ip->i_flags_lock);
>  	ip->i_flags = XFS_IRECLAIM;
>  	ip->i_ino = 0;
> +	ip->i_sick = 0;
> +	ip->i_checked = 0;
>  	spin_unlock(&ip->i_flags_lock);
>
>  	xfs_iunlock(ip, XFS_ILOCK_EXCL);

This is only going to keep the health information around on a DONTCACHE
inode for a few extra seconds. If the scrub takes longer to run than it
takes for the background inode reclaimer thread to run again (every 5s
by default), then the health information for that inode is still trashed
by this patch and the problem still exists.

I suspect that unhealthy inodes need to have IDONTCACHE cleared so that
they don't get reclaimed until there is memory pressure, hence giving
scrub/spaceman some time to set/report health issues. Perhaps we should
not reclaim the unhealthy inodes until they've been marked as "seen"....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx