Re: [PATCH 04/26] xfs: Improve metadata buffer reclaim accountability

On Wed, Oct 30, 2019 at 10:25:17AM -0700, Darrick J. Wong wrote:
> On Wed, Oct 09, 2019 at 02:21:02PM +1100, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > 
> > The buffer cache shrinker frees more than just the xfs_buf slab
> > objects - it also frees the pages attached to the buffers. Make sure
> > the memory reclaim code accounts for this memory being freed
> > correctly, similar to how the inode shrinker accounts for pages
> > freed from the page cache due to mapping invalidation.
> > 
> > We also need to make sure that the mm subsystem knows these are
> > reclaimable objects. We provide the memory reclaim subsystem with
> > a shrinker to reclaim xfs_bufs, so we should really mark the slab
> > that way.
> > 
> > We also have a lot of xfs_bufs in a busy system, so spread them
> > around like we do inodes.
> > 
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> > ---
> >  fs/xfs/xfs_buf.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> > index e484f6bead53..45b470f55ad7 100644
> > --- a/fs/xfs/xfs_buf.c
> > +++ b/fs/xfs/xfs_buf.c
> > @@ -324,6 +324,9 @@ xfs_buf_free(
> >  
> >  			__free_page(page);
> >  		}
> > +		if (current->reclaim_state)
> > +			current->reclaim_state->reclaimed_slab +=
> > +							bp->b_page_count;
> 
> Hmm, ok, I see how ZONE_RECLAIM and reclaimed_slab fit together.
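
The consumer side of that counter, FWIW, is the main per-node
reclaim loop. Roughly this hunk of shrink_node() in mm/vmscan.c
(a sketch from the kernels of this era, exact surroundings elided):

	struct reclaim_state *reclaim_state = current->reclaim_state;
	...
	/*
	 * Fold in everything the shrinkers freed while scanning this
	 * node, including the page counts xfs_buf_free() adds above.
	 */
	if (reclaim_state) {
		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
		reclaim_state->reclaimed_slab = 0;
	}

Without the hunk above, the pages backing the buffers get freed but
never show up in sc->nr_reclaimed.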
> 
> >  	} else if (bp->b_flags & _XBF_KMEM)
> >  		kmem_free(bp->b_addr);
> >  	_xfs_buf_free_pages(bp);
> > @@ -2064,7 +2067,8 @@ int __init
> >  xfs_buf_init(void)
> >  {
> >  	xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
> > -						KM_ZONE_HWALIGN, NULL);
> > +			KM_ZONE_HWALIGN | KM_ZONE_SPREAD | KM_ZONE_RECLAIM,
> 
> I guess I'm fine with ZONE_SPREAD too, insofar as it only seems to apply
> to a particular "use another node" memory policy when slab is in use.
> Was that your intent?

It's more documentation than anything - that we shouldn't be piling
these structures all onto one node, because that can cause severe
problems for the NUMA memory reclaim algorithms. I.e. the xfs-buf
shrinker sets SHRINKER_NUMA_AWARE, so memory pressure on a single
node can reclaim all the xfs-bufs on that node without touching any
other node.
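
That part is already wired up on the buftarg side - roughly, from
xfs_alloc_buftarg() (sketched from memory, so the error label is
illustrative):

	btp->bt_shrinker.count_objects = xfs_buftarg_shrink_count;
	btp->bt_shrinker.scan_objects = xfs_buftarg_shrink_scan;
	btp->bt_shrinker.seeks = DEFAULT_SEEKS;
	btp->bt_shrinker.flags = SHRINKER_NUMA_AWARE;
	if (register_shrinker(&btp->bt_shrinker))
		goto error;	/* the real label differs */

and the count/scan callbacks walk the per-node bt_lru list_lru,
which is what makes reclaim of this cache per-node in the first
place.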

That means, for example, if we instantiate all the AG header buffers
on a single node (e.g. like we do at mount time) then memory
pressure on that one node will generate IO stalls across the entire
filesystem, as work on other nodes has to repopulate the buffer
cache for any allocation or freeing of space/inodes.

IOWs, for large NUMA systems using cpusets, this cache should be
spread around all of memory, especially as it has NUMA-aware
reclaim. For everyone else, it's just documentation that an improper
cgroup or NUMA memory policy could cause all sorts of problems with
this cache.
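
For the archives: the KM_ZONE_* flags are thin wrappers over the
generic slab flags - from fs/xfs/kmem.h:

	#define KM_ZONE_HWALIGN	SLAB_HWCACHE_ALIGN
	#define KM_ZONE_RECLAIM	SLAB_RECLAIM_ACCOUNT
	#define KM_ZONE_SPREAD	SLAB_MEM_SPREAD

so the hunk above just passes SLAB_HWCACHE_ALIGN | SLAB_MEM_SPREAD |
SLAB_RECLAIM_ACCOUNT through to kmem_cache_create().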

It's worth noting that SLAB_MEM_SPREAD is used almost exclusively in
filesystems for inode caches, largely because, at the time (~2006),
the inode cache was the only reclaimable cache that could grow large
enough to cause problems. It's been cargo-culted ever since, whether
it is needed or not (e.g. ceph).

In the case of the xfs_bufs, I've been running workloads recently
that cache several million xfs_bufs and only a handful of inodes
rather than the other way around. If we spread inodes because
caching millions on a single node can cause problems on large NUMA
machines, then we also need to spread xfs_bufs...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx



