On Wed, Jun 2, 2021 at 2:11 AM yulei zhang <yulei.kernel@xxxxxxxxx> wrote:
>
> On Tue, Jun 1, 2021 at 10:45 PM Chris Down <chris@xxxxxxxxxxxxxx> wrote:
> >
> > yulei zhang writes:
> > > Yep, dynamically adjusting the memory.high limit can ease the memory
> > > pressure and postpone global reclaim, but it can easily trigger OOM
> > > in the cgroups.
> >
> > To go further on Shakeel's point, which I agree with, memory.high should
> > _never_ result in memcg OOM. Even if the limit is breached dramatically, we
> > don't OOM the cgroup. If you have a demonstration of memory.high resulting in
> > cgroup-level OOM kills in recent kernels, then that needs to be provided. :-)
>
> You are right, I mistook it for max. Shakeel means the throttling during
> context switch, which uses memory.high as the threshold to calculate the
> sleep time. Currently it only applies to cgroup v2. In this patchset we
> explore another idea for throttling memory usage, which relies on setting
> an average allocation speed in the memcg. We hope to suppress memory usage
> in low-priority cgroups when it reaches the system watermark while still
> keeping their activities alive.

I think you need to make the case: why should we add one more form of
throttling? Basically, why memory.high is not good for your use case and
why the proposed solution works better. Though IMO it would be a hard sell.