On Thu, Dec 03, 2020 at 12:09:36PM +0100, Bruno Prémont wrote:
> Hello Roman,
> 
> Sorry for having taken so much time to reply; I only had the
> opportunity to deploy the patch on Tuesday morning for testing, and
> now, two days later, the thrashing occurred again.
> 
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 6cc323f1313a..ef02b841bcd8 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -290,7 +290,7 @@ static inline struct obj_cgroup *memcg_slab_pre_alloc_hook(struct kmem_cache *s,
> > 
> >  	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s))) {
> >  		obj_cgroup_put(objcg);
> > -		return NULL;
> > +		return (struct obj_cgroup *)-1UL;
> >  	}
> > 
> >  	return objcg;
> > @@ -501,9 +501,13 @@ static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> >  		return NULL;
> > 
> >  	if (memcg_kmem_enabled() &&
> > -	    ((flags & __GFP_ACCOUNT) || (s->flags & SLAB_ACCOUNT)))
> > +	    ((flags & __GFP_ACCOUNT) || (s->flags & SLAB_ACCOUNT))) {
> >  		*objcgp = memcg_slab_pre_alloc_hook(s, size, flags);
> > 
> > +		if (unlikely(*objcgp == (struct obj_cgroup *)-1UL))
> > +			return NULL;
> > +	}
> > +
> >  	return s;
> >  }
> 
> Seems your proposed patch didn't really help.

Anyway, thank you for testing! Your report actually helped me to reveal
and fix this problem, so thank you!

In the meantime Yang Shi discovered a problem related to slab shrinkers,
which is to some extent similar to what you describe: under certain
conditions large amounts of slab memory can be completely excluded from
the reclaim process. Can you please check whether his fix solves your
problem? Here is the final version:
https://www.spinics.net/lists/stable/msg430601.html

> Compared to the initial occurrence I now have some more details (all but
> /proc/slabinfo since boot), and according to /proc/slabinfo a good deal
> of the reclaimable slabs seem to be dentries (and probably the
> xfs_inode/xfs_ifork entries related to them) - not sure if those are
> assigned to cgroups or not-accounted and not seen as candidates for
> reclaim...
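In case it helps with the "are those assigned to cgroups" question: with
cgroup v2 the accounted slab memory shows up per cgroup in memory.stat.
A minimal sketch; the standard /sys/fs/cgroup mount point is assumed and
the websrv path is only an example taken from your setup description,
so adjust it to the real cgroup paths:

  # accounted slab memory of one cgroup, in bytes
  grep '^slab' /sys/fs/cgroup/websrv/memory.stat
  # prints the slab_reclaimable, slab_unreclaimable and slab counters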
>
> xfs_buf            444908 445068 448 36 4 : tunables 0 0 0 : slabdata 12363 12363 0
> xfs_bui_item       0 0 232 35 2 : tunables 0 0 0 : slabdata 0 0 0
> xfs_bud_item       0 0 200 40 2 : tunables 0 0 0 : slabdata 0 0 0
> xfs_cui_item       0 0 456 35 4 : tunables 0 0 0 : slabdata 0 0 0
> xfs_cud_item       0 0 200 40 2 : tunables 0 0 0 : slabdata 0 0 0
> xfs_rui_item       0 0 712 46 8 : tunables 0 0 0 : slabdata 0 0 0
> xfs_rud_item       0 0 200 40 2 : tunables 0 0 0 : slabdata 0 0 0
> xfs_icr            0 156 208 39 2 : tunables 0 0 0 : slabdata 4 4 0
> xfs_ili            1223169 1535904 224 36 2 : tunables 0 0 0 : slabdata 42664 42664 0
> xfs_inode          12851565 22081140 1088 30 8 : tunables 0 0 0 : slabdata 736038 736038 0
> xfs_efi_item       0 280 456 35 4 : tunables 0 0 0 : slabdata 8 8 0
> xfs_efd_item       0 280 464 35 4 : tunables 0 0 0 : slabdata 8 8 0
> xfs_buf_item       7 216 296 27 2 : tunables 0 0 0 : slabdata 8 8 0
> xf_trans           0 224 288 28 2 : tunables 0 0 0 : slabdata 8 8 0
> xfs_ifork          12834992 46309928 72 56 1 : tunables 0 0 0 : slabdata 826963 826963 0
> xfs_da_state       0 224 512 32 4 : tunables 0 0 0 : slabdata 7 7 0
> xfs_btree_cur      0 224 256 32 2 : tunables 0 0 0 : slabdata 7 7 0
> xfs_bmap_free_item 0 230 88 46 1 : tunables 0 0 0 : slabdata 5 5 0
> xfs_log_ticket     4 296 216 37 2 : tunables 0 0 0 : slabdata 8 8 0
> fat_inode_cache    0 0 744 44 8 : tunables 0 0 0 : slabdata 0 0 0
> fat_cache          0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0
> mnt_cache          114 180 448 36 4 : tunables 0 0 0 : slabdata 5 5 0
> filp               6228 15582 384 42 4 : tunables 0 0 0 : slabdata 371 371 0
> inode_cache        6669 16016 608 26 4 : tunables 0 0 0 : slabdata 616 616 0
> dentry             8092159 15642504 224 36 2 : tunables 0 0 0 : slabdata 434514 434514 0
> 
> The full collected details are available at
> https://faramir-fj.hosting-restena.lu/cgmon-20201203.txt
> (please take a copy as that file will not stay there forever)
> 
> A visual graph of the memory evolution is available at
> https://faramir-fj.hosting-restena.lu/system-memory-20201203.png
> with the reboot on Tuesday morning and a steady increase of slabs
> starting Wednesday evening, correlating with the start of the backup,
> until thrashing started at about 3:30; the large drop in memory is me
> doing
> echo 2 > /proc/sys/vm/drop_caches
> which stopped the thrashing as well.
> 
> Against what does memcg attempt reclaim when it tries to satisfy a CG's
> low limit? Only against siblings, or also against root or not-accounted
> memory? How does it take into account slabs where evictable entries
> will cause unevictable entries to be freed as well?

Low limits work by excluding some portion of memory from reclaim, not by
adding memory pressure to something else.

> > > My setup, server has 64G of RAM:
> > > root
> > >  + system { min=0, low=128M, high=8G, max=8G }
> > >    + base { no specific constraints }
> > >    + backup { min=0, low=32M, high=2G, max=2G }
> > >    + shell { no specific constraints }
> > >  + websrv { min=0, low=4G, high=32G, max=32G }
> > >    + website { min=0, low=16G, high=40T, max=40T }
> > >      + website1 { min=0, low=64M, high=2G, max=2G }
> > >      + website2 { min=0, low=64M, high=2G, max=2G }
> > >      ...
> > >  + remote { min=0, low=1G, high=14G, max=14G }
> > >    + webuser1 { min=0, low=64M, high=2G, max=2G }
> > >    + webuser2 { min=0, low=64M, high=2G, max=2G }
> > >    ...
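With a layout like the one above, the per-cgroup counters usually show
directly whether low protection is what keeps memory from being
reclaimed. Just a sketch; the websrv path is again an assumption based
on your description and needs to be adjusted to the real cgroup paths:

  # configured low protection vs. current usage
  cat /sys/fs/cgroup/websrv/memory.low /sys/fs/cgroup/websrv/memory.current
  # the "low" counter increments when the group gets reclaimed even
  # though its usage is below memory.low
  cat /sys/fs/cgroup/websrv/memory.events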
> 
> Also interesting is that the backup, which is forced into 2G
> (system/backup CG), causes the amount of slabs assigned to the websrv
> CG to increase until that CG has almost only slab entries assigned to
> it, filling 16G - like file cache being reclaimed but not slab entries,
> even though there is almost no file cache left and tons of slabs.
> What also surprises me is that so much memory remains completely unused
> (instead of being used for file caches).
> 
> According to the documentation, if I didn't get it wrong, any limits of
> child CGs (e.g. webuser1...) are applied only up to what their parent's
> limits allow. Thus, looking at e.g. remote -> webuser1..., even if I
> have 1000 webuserN they won't "reserve" 65G for themselves via the
> memory.low limit when their parent sets memory.low to 1G?
> Or does this depend on the CG mount options (memory_recursiveprot)?

It does. What you're describing is the old (!memory_recursiveprot)
behavior.

Thanks!
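P.S.: in case you want to check or flip it, memory_recursiveprot is a
cgroup2 mount option (available since roughly 5.7). The sketch below
assumes the usual /sys/fs/cgroup mount point; whether a plain remount is
accepted depends on the kernel, otherwise the option has to be passed
when cgroup2 is first mounted:

  # "memory_recursiveprot" should show up among the mount options when enabled
  grep cgroup2 /proc/mounts
  # attempt to enable it on a running system
  mount -o remount,memory_recursiveprot /sys/fs/cgroup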