On Wed, Nov 30, 2022 at 03:01:58PM +0800, chengkaitao wrote:
> From: chengkaitao <pilgrimtao@xxxxxxxxx>
>
> We created a new interface <memory.oom.protect> for memory. If the OOM
> killer is invoked under a parent memory cgroup, and the memory usage of
> a child cgroup is within its effective oom.protect boundary, the
> cgroup's tasks won't be OOM killed unless there are no unprotected
> tasks in the other children cgroups. It draws on the logic of
> <memory.min/low> in the inheritance relationship.
>
> It has the following advantages:
> 1. We gain the ability to protect more important processes when a
> memcg OOM killer is invoked. oom.protect only takes effect in the
> local memcg and does not affect the OOM killer of the host.
> 2. Historically, we could often use oom_score_adj to control a group
> of processes, but it requires that all processes in the cgroup share a
> common parent process, and we have to set that parent's oom_score_adj
> before it forks all its children, which makes it very difficult to
> apply in other situations. oom.protect has no such restrictions, so we
> can protect a cgroup of processes more easily. The cgroup can keep
> some memory even if the OOM killer has to be called.

It reminds me of our attempts to provide a more sophisticated
cgroup-aware oom killer.

The problem is that the decision of which process(es) to kill or
preserve is individual to a specific workload (and can even be
time-dependent for a given workload). So it's really hard to come up
with an in-kernel mechanism that is at the same time flexible enough to
work for the majority of users and reliable enough to serve as the
last-resort oom measure (which is the basic goal of the kernel oom
killer).

Previously the consensus was to keep the in-kernel oom killer dumb and
reliable and implement complex policies in userspace (e.g. systemd-oomd
etc).

Is there a reason why such an approach can't work in your case?

Thanks!
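
For readers following along, the skipping rule the quoted patch
describes can be sketched as a toy model in a few lines of Python.
This is illustrative only: `pick_victim` and the dict fields are
made-up names, not kernel code, and the real patch operates on
per-cgroup effective protection values, not flat dicts.

```python
def pick_victim(cgroups):
    """Toy model of the proposed memory.oom.protect selection rule.

    cgroups: list of dicts with 'name', 'usage', 'protect' (bytes).
    A cgroup whose usage is within its protect boundary is skipped,
    unless every candidate is protected (the OOM killer must still
    find a victim as a last resort).
    """
    unprotected = [c for c in cgroups if c['usage'] > c['protect']]
    candidates = unprotected or cgroups  # fall back if all protected
    # Kill from the largest consumer among the remaining candidates.
    return max(candidates, key=lambda c: c['usage'])['name']


# The protected 'critical' group is skipped while 'batch' is available:
victim = pick_victim([
    {'name': 'critical', 'usage': 100, 'protect': 512},
    {'name': 'batch',    'usage': 300, 'protect': 0},
])
print(victim)  # -> batch
```

Note the fallback when everything is protected: like memory.min/low,
the protection is a preference, not an absolute guarantee, since the
kernel OOM killer must always be able to make progress.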