Re: [PATCH 4/7] xfs: zap broken inode forks

On Sun, Dec 03, 2023 at 08:39:57PM -0800, Christoph Hellwig wrote:
> On Thu, Nov 30, 2023 at 01:08:58PM -0800, Darrick J. Wong wrote:
> > So I think we can grab the inode in the same transaction as the inode
> > core repairs.  Nobody else should even be able to see that inode, so it
> > should be safe to grab i_rwsem before committing the transaction.  Even
> > if I have to use trylock in a loop to make lockdep happy.
> 
> Hmm, I thought more of an inode flag that makes access to the inode
> outside of the scrubber return -EIO.  I can also warm up to the idea of
> having all inodes that are broken in some way in lost+found...

Moving things around in the directory tree might be worse, since we'd
now have to read the parent pointer(s) from the file to remove those
directory connections and add the new ones to lost+found.

I /think/ a directory access that scours around in a zapped data fork
will return EFSCORRUPTED anyway, though that might happen at a late
enough stage in the process that the fs shuts down, which isn't
desirable.

However, once xrep_inode massages the ondisk inode into good enough
shape that iget starts working again, I could set XFS_SICK_INO_BMBTD (and
XFS_SICK_INO_DIR as appropriate) after zapping the data fork so that the
directory accesses would return EFSCORRUPTED instead of scouring around
in the zapped fork.

Once we start persisting the sick flags, the prevention will last until
scrub or someone else comes along to fix the inode, instead of being a
purely incore flag.  But, baby steps for now.  I'll fix this patch to
set the XFS_SICK_INO_* flags after zapping things, and the predicates to
pick them up.

--D



