Re: [PATCH] xfs: pin inodes that would otherwise overflow link count

On Wed, Oct 11, 2023 at 02:41:05PM -0700, Darrick J. Wong wrote:
> On Thu, Oct 12, 2023 at 08:08:20AM +1100, Dave Chinner wrote:
> > On Wed, Oct 11, 2023 at 01:33:50PM -0700, Darrick J. Wong wrote:
> > > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> > > 
> > > The VFS inc_nlink function does not explicitly check for integer
> > > overflows in the i_nlink field.  Instead, it checks the link count
> > > against s_max_links in the vfs_{link,create,rename} functions.  XFS
> > > sets the maximum link count to 2.1 billion, so integer overflows should
> > > not be a problem.
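
For context, the VFS behaviour described above boils down to roughly
the following - a minimal userspace model of the shape of those
checks, not the literal fs/namei.c code (vfs_link_check() is an
invented name for illustration):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct inode { uint32_t i_nlink; };
struct super_block { uint32_t s_max_links; };

/* inc_nlink() itself just increments - no overflow check here. */
static void inc_nlink(struct inode *inode)
{
	inode->i_nlink++;
}

/* The only guard: refuse a new link once the count has already
 * reached s_max_links, before the increment ever happens. */
static int vfs_link_check(struct super_block *sb, struct inode *inode)
{
	if (sb->s_max_links && inode->i_nlink >= sb->s_max_links)
		return -EMLINK;
	inc_nlink(inode);
	return 0;
}

int main(void)
{
	struct super_block sb = { .s_max_links = 2147483647U }; /* ~2.1 billion */
	struct inode ino = { .i_nlink = 2147483647U };

	/* At the limit: -EMLINK, so i_nlink never wraps via this path. */
	printf("link at the limit: %d\n", vfs_link_check(&sb, &ino));
	return 0;
}
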
> > > 
> > > However.  It's possible that online repair could find that a file has
> > > more than four billion links, particularly if the link count got
> > 
> > I don't think we should be attempting to fix that online - if we've
> > really found an inode with 4 billion links then something else has
> > gone wrong during repair because we shouldn't get there in the first
> > place.
> 
> I don't agree -- if online repair really does find 3 billion dirents
> pointing to a (very) hardlinked file, then it should set the link count
> to 3 billion.  The VFS will not let userspace add more hardlinks, but it
> will let userspace remove hardlinks.

Yet that leaves the inode with a corrupt link count according to
the on-disk format definition. We know stuff is going to go wrong
when the user starts trying to remove hardlinks, which will result
in having repair run again to (eventually) remove the PINNED value.

The situation is no different to having 5 billion hard links -
that's as invalid as having 3 billion hard links - but we are using
language lawyering to split the fine hair between "corrupt but
technically still usable" and "corrupt and unrecoverable".

Both situations are less than ideal, and if we solve the 5 billion
hardlink case, there is no reason at all to let this whacky
"treat unlinked as negative until it pins" overflow case exist.
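
For clarity about which corner case I'd like to see go away: as I
read it, the pinning idea amounts to roughly this sketch - the names
(XFS_NLINK_PINNED, xfs_bump_nlink) are invented for illustration,
not taken from the patch:

#include <stdint.h>

#define XFS_NLINK_PINNED	UINT32_MAX	/* assumed sentinel value */

static void xfs_bump_nlink(uint32_t *nlink)
{
	/* Once the count reaches the sentinel it is "pinned": it never
	 * moves again, and only a repair run gets to decide what the
	 * real value should be. */
	if (*nlink >= XFS_NLINK_PINNED - 1) {
		*nlink = XFS_NLINK_PINNED;
		return;
	}
	(*nlink)++;
}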

> > Regardless, I don't think fixing nlink overflow cases should be done
> > online. A couple of billion links to a single inode takes a
> > *long* time to create and even longer to validate (and take a -lot-
> > of memory).
> 
> xfs_repair will burn a lot of memory and time doing that; xfs_scrub will
> take just as much time but only O(icount) memory.

Yup, I've seen repair take *3 weeks* and 250GB of RAM to validate
production filesystems with a couple of billion hard links and
individual inode link counts in the ~100 million range. We're talking
about an order of magnitude higher link counts here - the validation
runtime alone is massively problematic, let alone deciding what
should be done with overflows. Fixing this requires human
intervention to decide what to do...

> > Hence I think we should just punt "more than 2.1
> > billion links" to the offline repair, because online repair can't do
> > anything to actually find whatever caused the overflow in the
> > first place, nor can it actually fix it - it should never have
> > happened in the first place....
> 
> I don't think deleting dirents to reduce link count is a good idea,
> since the repair tool will have no idea which directory links are more
> deletable than others.

I never said we should delete directory links. I said we should punt
it to an admin to decide how to fix it (i.e. to an offline repair
context).

> If repair finds XFS_MAXLINKS < nr_dirents < -1U, then I think we should
> reset the link count and let userspace decide if they're going to unlink
> the file to reduce the link count.  That's already what xfs_repair does,
> and xfs_scrub follows that behavior.
> 
> For nr_dirents > -1U, online repair just skips the file and reports that
> repairs didn't succeed.  xfs_repair overflows the u32 and won't notice
> that it's now set the link count to something suspiciously low.
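
i.e. the decision being described is roughly this - a model of it,
not the actual scrub/repair code, and nlink_repair_action() is an
invented name:

#include <stdint.h>

#define XFS_MAXLINK	2147483647U	/* ~2.1 billion */

enum repair_action { SET_NLINK, SKIP_AND_REPORT };

/* nr_dirents is the number of directory entries repair counted for
 * the file. */
static enum repair_action nlink_repair_action(uint64_t nr_dirents,
					      uint32_t *nlink_out)
{
	if (nr_dirents > UINT32_MAX)
		return SKIP_AND_REPORT;	/* can't be represented on disk */
	/*
	 * Between XFS_MAXLINK and 2^32 - 1: record the real count and
	 * leave it to the admin to unlink files until it fits again.
	 */
	*nlink_out = (uint32_t)nr_dirents;
	return SET_NLINK;
}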

Fixing this requires human intervention to decide what to do. If
it's a hardlink backup farm (the cases I've seen with this scale of
link counts) then the trivial solution is to duplicate the inode
that everything is linked to and then move all the overflows to the
duplicate(s).

repair *could* do this automatically by duplicating the source inode
into lost+found and redirecting the overflowed dirents to the
duplicates. It
doesn't matter where the duplicated inodes live - there will be
directories pointing to them from all over the place. Doing this
will not result in any loss of data or directory entries - it just
means that the "shared" inode is not a single unique inode anymore.
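
In rough outline, something like the following - a self-contained
model of the idea, not repair code; every name in it is invented:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAXLINK	3U	/* stand-in for XFS_MAXLINK (~2.1 billion) */

struct inode { int ino; uint32_t nlink; };

/* "duplicate the source inode into lost+found": same data, new
 * inode number, link count starts at zero. */
static struct inode *clone_inode(int new_ino)
{
	struct inode *dup = calloc(1, sizeof(*dup));
	dup->ino = new_ino;
	return dup;
}

int main(void)
{
	struct inode src = { .ino = 100, .nlink = 0 };
	struct inode *target[8], *dup = NULL;
	int next_ino = 101;

	/* Eight dirents that all want to point at inode 100, which can
	 * only hold MAXLINK of them. */
	for (int i = 0; i < 8; i++) {
		if (src.nlink < MAXLINK) {
			target[i] = &src;
			src.nlink++;
			continue;
		}
		/* Overflowed dirent: redirect it to a duplicate inode,
		 * allocating a fresh duplicate whenever the current one
		 * fills up as well. */
		if (!dup || dup->nlink >= MAXLINK)
			dup = clone_inode(next_ino++);
		target[i] = dup;
		dup->nlink++;
	}

	for (int i = 0; i < 8; i++)
		printf("dirent %d -> inode %d\n", i, target[i]->ino);
	return 0;
}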

If we solve the larger overflow problem this way, then the
XFS_MAXLINKS < nr_dirents < -1U case can also use this solution and
we no longer need to support a whacky "corrupt but technically still
usable" corner case....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


