On Tue, Dec 15, 2020 at 01:22:33PM +1100, Dave Chinner wrote:
> On Mon, Dec 14, 2020 at 02:37:18PM -0800, Yang Shi wrote:
> > Currently the number of deferred objects is per shrinker, but some slabs,
> > for example the vfs inode/dentry caches, are per memcg. This results in
> > poor isolation among memcgs.
> >
> > Deferred objects are typically generated by __GFP_NOFS allocations. One
> > memcg with excessive __GFP_NOFS allocations may blow up its deferred
> > objects, and other innocent memcgs then suffer from over-shrinking,
> > excessive reclaim latency, etc.
> >
> > For example, two workloads run in memcgA and memcgB respectively, and the
> > workload in B is vfs heavy. The workload in A generates excessive deferred
> > objects, and B's vfs cache might then be hit heavily (half of the caches
> > dropped) by B's limit reclaim or global reclaim.
> >
> > We observed this in our production environment, which was running a vfs
> > heavy workload, as shown in the tracing log below:
> >
> > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > cache items 246404277 delta 31345 total_scan 123202138
> > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > last shrinker return val 123186855
> >
> > The vfs cache to page cache ratio was 10:1 on this machine, and half of
> > the caches were dropped. This also resulted in a significant amount of
> > page cache being dropped due to inode eviction.
> >
> > Making nr_deferred per memcg for memcg aware shrinkers would solve the
> > unfairness and bring better isolation.
> >
> > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > shrinker's nr_deferred is used. Non memcg aware shrinkers use the
> > shrinker's nr_deferred all the time.
> >
> > Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>
> > ---
> >  include/linux/memcontrol.h |   9 +++
> >  mm/memcontrol.c            | 110 ++++++++++++++++++++++++++++++++++++-
> >  mm/vmscan.c                |   4 ++
> >  3 files changed, 120 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 922a7f600465..1b343b268359 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -92,6 +92,13 @@ struct lruvec_stat {
> >  	long count[NR_VM_NODE_STAT_ITEMS];
> >  };
> >
> > +
> > +/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
> > +struct memcg_shrinker_deferred {
> > +	struct rcu_head rcu;
> > +	atomic_long_t nr_deferred[];
> > +};
>
> So you're effectively copying and pasting the memcg_shrinker_map
> infrastructure and doubling the number of allocations/frees required
> to set up/tear down a memcg? Why not add it to struct
> memcg_shrinker_map like this:
>
> struct memcg_shrinker_map {
> 	struct rcu_head rcu;
> 	unsigned long *map;
> 	atomic_long_t *nr_deferred;
> };
>
> And when you dynamically allocate the structure, set the map and
> nr_deferred pointers to the correct offsets in the allocated range.
>
> Then this patch really only changes the size of the chunk being
> allocated, sets up the pointers, and copies the relevant data from the
> old structure to the new one.

Fully agreed.

In the longer term it may be nice to further expand this and make it
the generalized intersection between cgroup, node and shrinkers. There
is a large overlap with list_lru, for example: data of identical scope
and lifetime, but duplicative callbacks and management. If we folded
list_lru_memcg into the above data structure, we could also generalize
and reuse the existing callbacks.
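Just to sketch the single-allocation layout Dave is describing (this is
illustrative only, not the actual patch; the alloc_shrinker_map() helper
name and signature are made up for the example), the allocation side
could look roughly like:

struct memcg_shrinker_map {
	struct rcu_head rcu;
	unsigned long *map;		/* bitmap of shrinkers with objects */
	atomic_long_t *nr_deferred;	/* per-shrinker deferred counts */
};

static struct memcg_shrinker_map *alloc_shrinker_map(int shrinker_nr_max,
						      int nid)
{
	size_t map_size = DIV_ROUND_UP(shrinker_nr_max, BITS_PER_LONG) *
			  sizeof(unsigned long);
	size_t defer_size = shrinker_nr_max * sizeof(atomic_long_t);
	struct memcg_shrinker_map *new;

	/* One chunk holds the header, the bitmap and the counters. */
	new = kvzalloc_node(sizeof(*new) + map_size + defer_size,
			    GFP_KERNEL, nid);
	if (!new)
		return NULL;

	/* Both pointers are just offsets into that single chunk. */
	new->map = (unsigned long *)(new + 1);
	new->nr_deferred = (atomic_long_t *)((char *)new->map + map_size);

	return new;
}

Growing the arrays when shrinker_nr_max increases would then presumably
stay a single allocation of the larger chunk, a copy of the old map and
nr_deferred regions, and an RCU pointer swap, much like the existing
shrinker map expansion path.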