Re: [PATCH 18/24] xfs: reduce kswapd blocking on inode locking.

On Tue, Aug 06, 2019 at 02:22:13PM -0400, Brian Foster wrote:
> On Thu, Aug 01, 2019 at 12:17:46PM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > When doing async node reclaiming, we grab a batch of inodes that we
> > are likely able to reclaim and ignore those that are already
> > flushing. However, when we actually go to reclaim them, the first
> > thing we do is lock the inode. If we are racing with something
> > else reclaiming the inode or flushing it because it is dirty,
> > we block on the inode lock. Hence we can still block kswapd here.
> > 
> > Further, if we flush an inode, we also cluster all the other dirty
> > inodes in that cluster into the same IO, flush locking them all.
> > However, if the workload is operating on sequential inodes (e.g.
> > created by a tarball extraction) most of these inodes will be
> > sequential in the cache and so in the same batch
> > we've already grabbed for reclaim scanning.
> > 
> > As a result, it is common for all the inodes in the batch to be
> > dirty and it is common for the first inode flushed to also flush all
> > the inodes in the reclaim batch. In which case, they are now all
> > going to be flush locked and we do not want to block on them.
> > 
> 
> Hmm... I think I'm missing something with this description. For dirty
> inodes that are flushed in a cluster via reclaim as described, aren't we
> already blocking on all of the flush locks by virtue of the synchronous
> I/O associated with the flush of the first dirty inode in that
> particular cluster?

Currently we end up issuing IO and waiting for it, so by the time we
get to the next inode in the cluster, it's already been cleaned and
unlocked.

However, as we go to non-blocking scanning, if we hit one
flush-locked inode in a batch, it's entirely likely that the rest of
the inodes in the batch are also flush locked, and so we should
always try to skip over them in non-blocking reclaim.
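As a rough illustration only (the names and structure here are just
a sketch, not the actual patch), the non-blocking batch walk ends up
doing something like:

	for (i = 0; i < nr_found; i++) {
		struct xfs_inode	*ip = batch[i];

		/* never block kswapd waiting for the ILOCK */
		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL)) {
			skipped++;
			continue;
		}

		/*
		 * Flush locked means it's already under IO (and most
		 * likely the rest of the batch is, too) - skip it
		 * rather than wait for the IO to complete.
		 */
		if (!xfs_iflock_nowait(ip)) {
			xfs_iunlock(ip, XFS_ILOCK_EXCL);
			skipped++;
			continue;
		}

		/* ... reclaim the now locked, clean inode ... */
	}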

This is really just a stepping stone towards the way the LRU
isolation function works - that context is entirely non-blocking and
full of lock order inversions, so everything has to run under
try-lock semantics. This patch essentially starts that
restructuring, based on the observation that sequential inodes are
flushed in batches...
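i.e. the end state is an isolate callback where every lock is a
trylock. Again, only a sketch built on the generic list_lru_walk_cb
contract - the i_lru field and the callback name are hypothetical,
not what the series actually implements:

	static enum lru_status
	xfs_inode_reclaim_isolate(
		struct list_head	*item,
		struct list_lru_one	*lru,
		spinlock_t		*lru_lock,
		void			*arg)
	{
		struct list_head	*dispose = arg;
		/* i_lru is hypothetical - assumes inodes sit on a list_lru */
		struct xfs_inode	*ip =
			container_of(item, struct xfs_inode, i_lru);

		/* lru lock is held here, so everything must be trylock */
		if (!xfs_ilock_nowait(ip, XFS_ILOCK_EXCL))
			return LRU_SKIP;
		if (!xfs_iflock_nowait(ip)) {
			xfs_iunlock(ip, XFS_ILOCK_EXCL);
			return LRU_SKIP;
		}

		/*
		 * Looks reclaimable - move it to the dispose list still
		 * locked and let the caller finish it off outside the
		 * lru lock.
		 */
		list_lru_isolate_move(lru, item, dispose);
		return LRU_REMOVED;
	}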

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


