Re: [PATCH RFC 3/4] xfs: crude chunk allocation retry mechanism

On Mon, Feb 28, 2022 at 04:45:49PM -0500, Brian Foster wrote:
> On Wed, Feb 23, 2022 at 06:00:58PM +1100, Dave Chinner wrote:
> > i.e. as long as we track whether we've allocated a new inode chunk
> > or not, we can bound the finobt search to a single retry. If we
> > allocated a new chunk before entering the finobt search, then all we
> > need is a log force because the busy inodes, by definition, are
> > XFS_ISTALE at this point and waiting for a CIL push before they can
> > be reclaimed. At this point a retry of the finobt scan will find
> > those inodes that were busy now available for allocation.
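
(As a rough sketch of that bound - xfs_dialloc_scan_finobt() is a made-up
name standing in for the existing finobt search, and -EAGAIN stands in
for "every free inode found was busy":)

/*
 * Sketch only: bound the finobt search to a single retry.  If a new
 * chunk was allocated before the first scan, any busy inodes in it are
 * XFS_ISTALE and only waiting on a CIL push, so a log force is enough
 * to make them allocatable on the second pass.
 */
int
xfs_dialloc_scan_bounded(
	struct xfs_mount	*mp,
	struct xfs_perag	*pag,
	bool			allocated_chunk,
	xfs_ino_t		*inop)
{
	int			error;

	error = xfs_dialloc_scan_finobt(pag, inop);	/* hypothetical */
	if (error != -EAGAIN || !allocated_chunk)
		return error;

	xfs_log_force(mp, XFS_LOG_SYNC);
	return xfs_dialloc_scan_finobt(pag, inop);	/* hypothetical */
}
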
> 
> Remind me what aspect of the prospective VFS changes prevents inode
> allocation from seeing a free inode with not-yet-elapsed RCU grace
> period..? Will this delay freeing (or evicting) the inode until a grace
> period elapses from when the last reference was dropped?

It will delay it until the inode has been reclaimed, in which case
reallocating it will result in a new struct xfs_inode being
allocated from slab, and we're all good. Any lookup that finds the
old inode will see either I_WILL_FREE, I_FREEING, I_CLEAR,
XFS_NEED_INACTIVE, XFS_IRECLAIM or ip->i_ino == 0 depending on what
point the lookup occurs. Hence the lookup will be unable to take a
new reference to the unlinked inode, and the lookup retry should
then get the newly allocated xfs_inode once it has been inserted
into the radix tree...

IOWs, it gets rid of recycling altogether, and that's how we avoid
the RCU grace period issues with recycled inodes.
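
(Roughly, the cache hit check ends up rejecting the old inode along
these lines - a simplified sketch, not the actual xfs_iget_cache_hit()
code, and with the i_flags_lock handling omitted:)

/*
 * Sketch: a racing lookup that still finds the old struct xfs_inode in
 * the radix tree must not take a new reference to it.
 */
static bool
xfs_iget_hit_reusable(		/* made-up name */
	struct xfs_inode	*ip,
	xfs_ino_t		ino)
{
	struct inode		*inode = VFS_I(ip);

	if (ip->i_ino == 0 || ip->i_ino != ino)
		return false;	/* inode torn down or slab memory reused */
	if (inode->i_state & (I_WILL_FREE | I_FREEING | I_CLEAR))
		return false;	/* VFS eviction in progress */
	if (ip->i_flags & (XFS_NEED_INACTIVE | XFS_IRECLAIM))
		return false;	/* XFS inactivation/reclaim in progress */
	return true;		/* safe to take a new reference */
}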

> > > Perhaps a simple enough short term option is to use the existing block
> > > alloc min/max range mechanisms (as mentioned on IRC). For example:
> > > 
> > > - Use the existing min/max_agbno allocation arg input values to attempt
> > >   one or two chunk allocs outside of the known range of busy inodes for
> > >   the AG (i.e., allocate blocks higher than the max busy agino or lower
> > >   than the min busy agino).
> > > - If success, then we know we've got a chunk w/o busy inodes.
> > > - If failure, fall back to the existing chunk alloc calls, take whatever
> > >   we get and retry the finobt scan (perhaps more aggressively checking
> > >   each record) hoping we got a usable new record.
> > > - If that still fails, then fall back to synchronize_rcu() as a last
> > >   resort and grab one of the previously busy inodes.
> > > 
> > > I couldn't say for sure if that would be effective enough without
> > > playing with it a bit, but that would sort of emulate an algorithm that
> > > filtered chunk block allocations with at least one busy inode without
> > > having to modify block allocation code. If it avoids an RCU sync in the
> > > majority of cases it might be effective enough as a stopgap until
> > > background freeing exists. Thoughts?
> > 
> > It might work, but I'm not a fan of how many hoops we are considering
> > jumping through to avoid getting tangled up in the RCU requirements
> > for VFS inode life cycles. I'd much prefer just being able to say
> > "all inodes busy, log force, try again" like we do with busy extent
> > limited block allocation...
> > 
> 
> Well presumably that works fine for your implementation of busy inodes,
> but not so much for the variant intended to work around the RCU inode
> reuse problem. ISTM this all just depends on the goals and plan here,
> and I'm not seeing clear enough reasoning to grok what that is. A
> summary of the options discussed so far:
> 
> - deferred inode freeing - ideal but too involved, want something sooner
>   and more simple
> - hinted non-busy chunk allocation - simple but jumping through too many
>   RCU requirement hoops
> - sync RCU and retry - most analogous to longer term busy inode
>   allocation handling (i.e. just replace rcu sync w/ log force and
>   retry), but RCU sync is too performance costly as a direct fallback
> 
> ISTM the options here range from simple and slow, to moderately simple
> and effective (TBD), to complex and ideal. So what is the goal for this
> series?

To find out if we can do something fast and effective (and without
hacks that might bite us unexpectedly) with RCU grace periods that
is less intrusive than changing inode life cycles. If it turns out
that we can change the inode life cycle faster than we can come up
with an RCU grace period based solution, then we should just go with
inode life cycle changes and forget about the interim RCU grace
period detection changes...

> My understanding to this point is that VFS changes are a ways
> out, so the first step was busy inodes == inodes with pending grace
> periods and a rework of the busy inode definition comes later with the
> associated VFS changes. That essentially means this series gets fixed up
> and posted as a mergeable component in the meantime, albeit with a
> "general direction" that is compatible with longer term changes.

It seems to me that the RCU detection code is also a ways out,
so I'm trying to keep our options open and not have us duplicate
work unnecessarily.

> However, every RCU requirement/characteristic that this series has to
> deal with is never going to be 100% perfectly aligned with the longer
> term busy inode requirements because the implementations/definitions
> will differ, particularly if the RCU handling is pushed up into the VFS.
> So if we're concerned about that, the alternative approach is to skip
> incrementally addressing the RCU inode reuse problem and just fold
> whatever bits of useful logic from here into your inode lifecycle work
> as needed and drop this as a mergeable component.

*nod*

The signs are pointing somewhat that way, but I'm still finding
the occasional unexpected gotcha in the code that I have to work
around. I think I've got them all, but until I've got it working it
pays to keep our options open....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


