> -----Original Message-----
> From: Michal Hocko [mailto:mhocko@xxxxxxxxxx]
> Sent: 2018-03-23 18:34
> To: Li,Rongqing <lirongqing@xxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> cgroups@xxxxxxxxxxxxxxx; hannes@xxxxxxxxxxx; Andrey Ryabinin
> <aryabinin@xxxxxxxxxxxxx>
> Subject: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory
> cgroup
>
> On Mon 19-03-18 16:29:30, Li RongQing wrote:
> > mem_cgroup_force_empty() tries to free only 32 (SWAP_CLUSTER_MAX)
> > pages on each iteration, so if a memory cgroup has lots of page cache,
> > it takes many iterations to empty it all; increase the number of pages
> > reclaimed per iteration to speed this up, the same as in
> > mem_cgroup_resize_limit().
> >
> > A simple test shows:
> >
> > $dd if=aaa of=bbb bs=1k count=3886080
> > $rm -f bbb
> > $time echo 100000000 >/cgroup/memory/test/memory.limit_in_bytes
> >
> > Before: 0m0.252s ===> After: 0m0.178s
>
> One more note. I have only now realized that increasing the batch size
> might have another negative side effect. Memcg reclaim bails out early
> once the required target has been reclaimed, so we might skip memcgs in
> the hierarchy and end up hammering one child in the hierarchy much harder
> than the others. Our current code is not ideal, and we work around this
> with a smaller target and by caching the last reclaimed memcg, so at
> least the imbalance is not so visible.
>
> This is not something that couldn't be fixed, and maybe a 1M chunk would
> be acceptable as well. I don't know. Let's focus on the main bottleneck
> first, though, before we start making these changes.

Could we select the chunk size based on the amount of memory that has to be reclaimed for the resize?

Chunksize = (memory.usage_in_bytes - memory.limit_in_bytes) / 1024
Chunksize = max(Chunksize, 1M)

-RongQing

> --
> Michal Hocko
> SUSE Labs