Re: [PATCH 2/2] xfs: run blockgc on freeze to avoid iget stalls after reclaim

On Wed, Feb 02, 2022 at 01:22:40PM +1100, Dave Chinner wrote:
> On Mon, Jan 24, 2022 at 11:57:12AM -0500, Brian Foster wrote:
> > On Wed, Jan 19, 2022 at 04:36:36PM -0800, Darrick J. Wong wrote:
> > > > Of course if you wanted to recycle inactive inodes or do something else
> > > > entirely, then it's probably not worth going down this path..
> > > 
> > > I'm a bit partial to /trying/ to recycle inactive inodes because (a)
> > > it's less tangling with -fsdevel for you and (b) inode scans in the
> > > online repair patchset got a little weird once xfs_iget lost the ability
> > > to recycle _NEEDS_INACTIVE inodes...
> > > 
> > > OTOH I tried to figure out how to deal with the lockless list that those
> > > inodes are put on, and I couldn't figure out how to get them off the
> > > list safely, so that might be a dead end.  If you have any ideas I'm all
> > > ears. :)
> > > 
> > 
> > So one of the things that I've been kind of unclear on about the current
> > deferred inactivation implementation is the primary goal of the percpu
> > optimization. I obviously can see the value of the percpu list in
> > general, but how much processing needs to occur in percpu context to
> > achieve the primary goal?
> > 
> > For example, I can see how a single or small multi threaded sustained
> > file removal might be batched efficiently, but what happens if said task
> > happens to bounce around many cpus?
> 
> In that case we have a scheduler problem, not a per-cpu
> infrastructure issue.
> 

Last I tested on my box, a single threaded rm -rf had executed on
something like 24 of the 80 cpus available after ~1m of runtime. Of
course the inactivation work for an inode occurs on the cpu that the
rm task was running on when the inode was destroyed, but I don't think
there's any guarantee that the kworker task will be the next task
selected on the cpu if the system is loaded with other runnable tasks
(or that whatever task is selected next won't pollute the cache). For
example, I also noticed rm-<pidX> -> rm-<pidY> context switching on
concurrent remove tests. That is obviously not a caching issue in this
case because both tasks are still doing remove work, but behavior is
less deterministic if the target task happens to be something
unrelated. Of course, that doesn't mean the approach might not
otherwise be effective for the majority of common workloads..

> > What if a system has hundreds of
> > cpus and enough removal tasks to populate most or all of the
> > queues?
> 
> It behaves identically to before the per-cpu inactivation queues
> were added. Essentially, everything serialises and burns CPU
> spinning on the CIL push lock regardless of where the work is coming
> from. The per-cpu queues do not impact this behaviour at all, nor do
> they change the distribution of the work that needs to be done.
> 
> Even Darrick's original proposal had this same behaviour:
> 
> https://lore.kernel.org/linux-xfs/20210801234709.GD2757197@xxxxxxxxxxxxxxxxxxx/
> 

Ok..

> > Is
> > it worth having 200 percpu workqueue tasks doing block truncations and
> > inode frees to a filesystem that might have something like 16-32 AGs?
> 
> If you have a workload with 200-way concurrency that hits a
> filesystem path with 16-32 way concurrency because of per-AG
> locking (e.g. unlink needs to lock the AGI twice - once to put the
> inode on the unlinked list, then again to remove and free it),
> then you're only going to get 16-32 way concurrency from your
> workload regardless of whether you have per-cpu algorithms for parts
> of the workload.
> 
> From a workload scalability POV, we've always used the rule that the
> filesystem AG count needs to be at least 2x the concurrency
> requirement of the workload. Generally speaking, that means if you
> want the FS to scale to operations on all CPUs at once, you need to
> configure the FS with agcount=(2 * ncpus). This was one of the first
> things I was taught about performance tuning large CPU count HPC
> machines when I first started working at SGI back in 2002, and it's
> still true today.
> 

Makes sense.

> > So I dunno, ISTM that the current implementation can be hyper efficient
> > for some workloads and perhaps unpredictable for others. As Dave already
> > alluded to, the tradeoff for such hyper efficient structures is often
> > functional inflexibility, which is kind of what we're seeing with both
> > the inability to recycle inodes wrt this topic and the potential
> > optimizations on the whole RCU recycling thing. The only real approach
> > that comes to mind for managing this kind of infrastructure short of
> > changing data structures is to preserve the ability to drain and quiesce
> > it regardless of filesystem state.
> > 
> > For example, I wonder if we could do something like have the percpu
> > queues amortize insertion into lock protected perag lists that can be
> > managed/processed accordingly rather than complete the entire
> > inactivation sequence in percpu context. From there, the perag lists
> > could be processed by an unbound/multithreaded workqueue that's maybe
> > more aligned with the AG count on the filesystem than cpu count on the
> > system.
> 
> That per-ag based back end processing is exactly what Darrick's
> original proposals did:
> 
> https://lore.kernel.org/linux-xfs/162758431072.332903.17159226037941080971.stgit@magnolia/
> 
> It used radix tree walks run in background kworker threads to find
> and batch all the inode inactivation in a given AG.
> 
> It performed poorly at scale because it destroyed the CPU affinity
> between the unlink() operation and the inactivation operation of the
> unlinked inode when the last reference to the inode is dropped and
> the inode evicted from task syscall exit context. Regardless of
> whether there is a per-cpu front end or not, the per-ag based
> processing will destroy the CPU affinity of the data set being
> processed because we cannot guarantee that the per-ag objects are
> all local to the CPU that is processing the per-ag objects....
> 

Ok. The role/significance of cpu caching was not as apparent to me when
I had last replied to this thread. The discussion around the rcu inode
lifecycle issue helped clear some of that up.

That said, why not conditionally tag and divert to a background worker
when inodegc is disabled? That would allow NEEDS_INACTIVE inodes to be
claimed/recycled from other contexts in scenarios like when the fs is
frozen, since they wouldn't be stuck in inaccessible and inactive
percpu queues, but would otherwise preserve current behavior under
normal runtime conditions. Darrick mentioned earlier that online
repair wants to do something similar, but it's not clear to me whether
scrub could or would want to disable the percpu inodegc workers in
favor of a temporary/background mode while repair is running. I'm just
guessing that performance is a small enough concern in that situation
that it wouldn't be a deciding factor. Hm?
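
Roughly the sort of thing I mean, as an untested sketch (not written
against any particular tree; the disabled-state predicate and the
per-AG list/lock/workqueue fields are all made-up names):

	/*
	 * Untested sketch of the "divert when disabled" idea. The only
	 * point is that the queueing side checks a disabled state and,
	 * if set, puts the inode on a lock-protected per-AG list that a
	 * background worker (or a freeze/scrub path) can drain from
	 * ordinary task context, instead of the per-cpu llist.
	 *
	 * pag_inodegc_lock, pag_inodegc_list, i_inodegc_list,
	 * m_inodegc_bg_wq, m_inodegc_bg_work and xfs_inodegc_disabled()
	 * are hypothetical names.
	 */
	static void
	xfs_inodegc_queue(
		struct xfs_inode	*ip)
	{
		struct xfs_mount	*mp = ip->i_mount;

		if (xfs_inodegc_disabled(mp)) {
			struct xfs_perag *pag = xfs_perag_get(mp,
					XFS_INO_TO_AGNO(mp, ip->i_ino));

			spin_lock(&pag->pag_inodegc_lock);
			list_add_tail(&ip->i_inodegc_list,
					&pag->pag_inodegc_list);
			spin_unlock(&pag->pag_inodegc_lock);
			xfs_perag_put(pag);

			/* unbound worker walks the per-AG lists */
			queue_work(mp->m_inodegc_bg_wq,
					&mp->m_inodegc_bg_work);
			return;
		}

		/* otherwise, the normal runtime path: per-cpu batching
		 * as today (elided) */
	}

The freeze/scrub side would then just flip the disabled state, flush
the percpu queues once, and claim or process inodes off the per-AG
lists as needed.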

Brian

> IOWs, all the per-cpu queues are trying to do is keep the inode
> inactivation processing local to the CPU where they are already hot
> in cache.  The per-cpu queues do not achieve this perfectly in all
> cases, but they perform far, far better than the original
> parallelised AG affinity inactivation processing that cannot
> maintain any cache affinity for the objects being processed at all.
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@xxxxxxxxxxxxx
> 



