On Thu, Oct 15, 2020 at 06:21:52PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> Add a list_lru scanner that runs from the memory pressure detection
> to free an amount of the buffer cache that will keep the cache from
> growing when there is memory pressure.
> 
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
>  libxfs/buftarg.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
> 
> diff --git a/libxfs/buftarg.c b/libxfs/buftarg.c
> index 6c7142d41eb1..8332bf3341b6 100644
> --- a/libxfs/buftarg.c
> +++ b/libxfs/buftarg.c
> @@ -62,6 +62,19 @@ xfs_buftarg_setsize_early(
>  	return xfs_buftarg_setsize(btp, bsize);
>  }
>  
> +static void
> +dispose_list(
> +	struct list_head	*dispose)
> +{
> +	struct xfs_buf		*bp;
> +
> +	while (!list_empty(dispose)) {
> +		bp = list_first_entry(dispose, struct xfs_buf, b_lru);
> +		list_del_init(&bp->b_lru);
> +		xfs_buf_rele(bp);
> +	}
> +}
> +
>  /*
>   * Scan a chunk of the buffer cache and drop LRU reference counts. If the
>   * count goes to zero, dispose of the buffer.
> @@ -70,6 +83,13 @@ static void
>  xfs_buftarg_shrink(
>  	struct xfs_buftarg	*btc)
>  {
> +	struct list_lru		*lru = &btc->bt_lru;
> +	struct xfs_buf		*bp;
> +	int			count;
> +	int			progress = 16384;
> +	int			rotate = 0;
> +	LIST_HEAD(dispose);
> +
>  	/*
>  	 * Make the fact we are in memory reclaim externally visible. This
>  	 * allows buffer cache allocation throttling while we are trying to
> @@ -79,6 +99,37 @@ xfs_buftarg_shrink(
>  
>  	fprintf(stderr, "Got memory pressure event. Shrinking caches!\n");
>  
> +	spin_lock(&lru->l_lock);
> +	count = lru->l_count / 50;	/* 2% */

If I'm reading this correctly, we react to a memory pressure event by
trying to skim 2% of the oldest disposable buffers off the buftarg LRU?
And every 16384 loop iterations we'll dispose the list even if we
haven't gotten our 2% yet?  How did you arrive at 2%?
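For other readers following along: the loop below decrements b_lru_ref
and only disposes of a buffer whose count is already at 1, rotating
survivors to the LRU tail.  A minimal userspace sketch of that
skim-and-rotate logic (the `struct buf`, `make_lru` and `shrink_scan`
names are hypothetical stand-ins, not the xfsprogs types):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for xfs_buf: a singly linked LRU, head = oldest. */
struct buf {
	int		lru_ref;	/* plays the role of b_lru_ref */
	struct buf	*next;
};

/* Build an LRU from an array of reference counts, head first. */
static struct buf *make_lru(const int *refs, int n)
{
	struct buf *head = NULL, **tailp = &head;

	for (int i = 0; i < n; i++) {
		struct buf *bp = calloc(1, sizeof(*bp));

		assert(bp);
		bp->lru_ref = refs[i];
		*tailp = bp;
		tailp = &bp->next;
	}
	return head;
}

/* Like atomic_add_unless(&ref, -1, 1): returns 0 when the buffer is disposable. */
static int lru_ref_put(struct buf *bp)
{
	if (bp->lru_ref <= 1)
		return 0;
	bp->lru_ref--;
	return 1;
}

/* Scan up to @count buffers from the head; dispose or rotate each one. */
static int shrink_scan(struct buf **lru, int count)
{
	int freed = 0;

	while (count-- > 0 && *lru) {
		struct buf *bp = *lru;

		*lru = bp->next;		/* unlink from the head */
		if (!lru_ref_put(bp)) {
			free(bp);		/* dispose_list() equivalent */
			freed++;
			continue;
		}
		/* survivor: rotate to the tail, like list_move_tail() */
		bp->next = NULL;
		if (!*lru) {
			*lru = bp;
		} else {
			struct buf *tail = *lru;

			while (tail->next)
				tail = tail->next;
			tail->next = bp;
		}
	}
	return freed;
}
```

So a buffer with b_lru_ref > 1 survives one scan per extra reference
before it becomes disposable, which is what makes repeated pressure
events progressively trim the cache.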
(Also, I'm assuming that some of these stderr printfs will at some
point get turned into tracepoints or dbg_printf or the like?)

--D

> +	fprintf(stderr, "cache size before %ld/%d\n", lru->l_count, count);
> +	while (count-- > 0 && !list_empty(&lru->l_lru)) {
> +		bp = list_first_entry(&lru->l_lru, struct xfs_buf, b_lru);
> +		spin_lock(&bp->b_lock);
> +		if (!atomic_add_unless(&bp->b_lru_ref, -1, 1)) {
> +			atomic_set(&bp->b_lru_ref, 0);
> +			bp->b_state |= XFS_BSTATE_DISPOSE;
> +			list_move(&bp->b_lru, &dispose);
> +			lru->l_count--;
> +		} else {
> +			rotate++;
> +			list_move_tail(&bp->b_lru, &lru->l_lru);
> +		}
> +
> +		spin_unlock(&bp->b_lock);
> +		if (--progress == 0) {
> +			fprintf(stderr, "Disposing! rotated %d, lru %ld\n", rotate, lru->l_count);
> +			spin_unlock(&lru->l_lock);
> +			dispose_list(&dispose);
> +			spin_lock(&lru->l_lock);
> +			progress = 16384;
> +			rotate = 0;
> +		}
> +	}
> +	spin_unlock(&lru->l_lock);
> +
> +	dispose_list(&dispose);
> +	fprintf(stderr, "cache size after %ld, count remaining %d\n", lru->l_count, count);
> +
>  	/*
>  	 * Now we've free a bunch of memory, trim the heap down to release the
>  	 * freed memory back to the kernel and reduce the pressure we are
> -- 
> 2.28.0
> 
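One more aside on the `progress` counter being discussed: batching
disposal so the list lock is dropped every N items is a standard way to
bound lock hold times while still freeing memory as you go.  A
self-contained sketch of just that pattern, using a dummy lock and
hypothetical `item`/`shrink` names (none of this is the xfsprogs code):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical item on a shrinkable list. */
struct item {
	struct item	*next;
	int		dead;		/* marked disposable under the lock */
};

/* Dummy lock that just checks we never dispose while "holding" it. */
static int lock_held;
static void list_lock(void)	{ assert(!lock_held); lock_held = 1; }
static void list_unlock(void)	{ assert(lock_held); lock_held = 0; }

/* Free a private batch; the expensive part, so it must run unlocked. */
static int dispose(struct item **batch)
{
	int n = 0;

	assert(!lock_held);
	while (*batch) {
		struct item *it = *batch;

		*batch = it->next;
		free(it);
		n++;
	}
	return n;
}

/*
 * Walk @list moving dead items to a private batch; every @stride items
 * drop the lock and dispose of the batch.  (A real concurrent walker
 * would have to revalidate its cursor after relocking; this sketch is
 * single-threaded, so the cursor stays valid.)
 */
static int shrink(struct item **list, int stride)
{
	struct item *batch = NULL;
	int progress = stride, freed = 0;

	list_lock();
	while (*list) {
		struct item *it = *list;

		if (it->dead) {
			*list = it->next;	/* unlink */
			it->next = batch;	/* move to the private batch */
			batch = it;
		} else {
			list = &it->next;
		}
		if (--progress == 0) {
			list_unlock();
			freed += dispose(&batch);
			list_lock();
			progress = stride;
		}
	}
	list_unlock();
	freed += dispose(&batch);
	return freed;
}
```

The private batch is what makes the lock drop safe: everything on it is
already unlinked, so no other walker can find those items while we free
them, which is the same role `dispose` plays in the patch above.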