On Thu, 9 May 2019 16:07:49 +0800 Yafang Shao <laoar.shao@xxxxxxxxx> wrote:

> In node reclaim, may_shrinkslab is 0 by default,
> hence shrink_slab will never be performed in it.
> But shrink_slab should be performed if the reclaimable slab is over
> the min slab limit.
>
> This issue is very easy to reproduce: first, continuously cat a random
> nonexistent file to produce more and more dentries, then read a big file
> to produce page cache. Finally you will find that the dentries will
> never be shrunk.

It does sound like an oversight.

> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4141,6 +4141,8 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> 		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
> 		.may_swap = 1,
> 		.reclaim_idx = gfp_zone(gfp_mask),
> +		.may_shrinkslab = node_page_state(pgdat, NR_SLAB_RECLAIMABLE) >
> +				  pgdat->min_slab_pages,
> 	};
>
> 	trace_mm_vmscan_node_reclaim_begin(pgdat->node_id, order,
> @@ -4158,15 +4160,13 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
> 	reclaim_state.reclaimed_slab = 0;
> 	p->reclaim_state = &reclaim_state;
>
> -	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages) {

Would it be better to do

	if (node_pagecache_reclaimable(pgdat) > pgdat->min_unmapped_pages ||
	    sc.may_shrinkslab) {

> 		/*
> 		 * Free memory by calling shrink node with increasing
> 		 * priorities until we have enough memory freed.
> 		 */

The above will want re-indenting and re-right-justifying.

> -		do {
> -			shrink_node(pgdat, &sc);
> -		} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);
> -	}
> +	do {
> +		shrink_node(pgdat, &sc);
> +	} while (sc.nr_reclaimed < nr_pages && --sc.priority >= 0);

Won't this cause pagecache reclaim and compaction which previously did
not occur?  If yes, what are the effects of this and are they
desirable?  If no, perhaps call shrink_slab() directly in this case.
Or something like that.

It's unclear why min_unmapped_pages (min_unmapped_ratio) exists.
Is it a batch-things-up efficiency thing?