On 2018/08/02 9:32, Roman Gushchin wrote:
> For some workloads an intervention from the OOM killer
> can be painful. Killing a random task can bring
> the workload into an inconsistent state.
>
> Historically, there are two common solutions for this
> problem:
> 1) enabling panic_on_oom,
> 2) using a userspace daemon to monitor OOMs and kill
> all outstanding processes.
>
> Both approaches have their downsides:
> rebooting on each OOM is an obvious waste of capacity,
> and handling all in userspace is tricky and requires
> a userspace agent, which will monitor all cgroups
> for OOMs.

We could start a one-time userspace agent which handles a cgroup
OOM event and then terminates (a minimal sketch of such an agent
is at the end of this mail)...

> +/**
> + * mem_cgroup_get_oom_group - get a memory cgroup to clean up after OOM
> + * @victim: task to be killed by the OOM killer
> + * @oom_domain: memcg in case of memcg OOM, NULL in case of system-wide OOM
> + *
> + * Returns a pointer to a memory cgroup, which has to be cleaned up
> + * by killing all belonging OOM-killable tasks.
> + *
> + * Caller has to call mem_cgroup_put() on the returned non-NULL memcg.
> + */
> +struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
> +					    struct mem_cgroup *oom_domain)
> +{
> +	struct mem_cgroup *oom_group = NULL;
> +	struct mem_cgroup *memcg;
> +
> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> +		return NULL;
> +
> +	if (!oom_domain)
> +		oom_domain = root_mem_cgroup;
> +
> +	rcu_read_lock();
> +
> +	memcg = mem_cgroup_from_task(victim);

Isn't this racy? I guess that the memcg of this "victim" can change
to somewhere else from the one it belonged to when the final candidate
was determined. This "victim" might have already passed
exit_mm()/cgroup_exit() from do_exit(), or might be in the middle of
moving to a memcg different from the one used when determining the
final candidate.

> +	if (memcg == root_mem_cgroup)
> +		goto out;
> +
> +	/*
> +	 * Traverse the memory cgroup hierarchy from the victim task's
> +	 * cgroup up to the OOMing cgroup (or root) to find the
> +	 * highest-level memory cgroup with oom.group set.
> +	 */
> +	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
> +		if (memcg->oom_group)
> +			oom_group = memcg;
> +
> +		if (memcg == oom_domain)
> +			break;
> +	}
> +
> +	if (oom_group)
> +		css_get(&oom_group->css);
> +out:
> +	rcu_read_unlock();
> +
> +	return oom_group;
> +}

> @@ -974,7 +988,23 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
>  	}
>  	read_unlock(&tasklist_lock);
>  
> +	/*
> +	 * Do we need to kill the entire memory cgroup?
> +	 * Or even one of the ancestor memory cgroups?
> +	 * Check this out before killing the victim task.
> +	 */
> +	oom_group = mem_cgroup_get_oom_group(victim, oc->memcg);
> +
>  	__oom_kill_process(victim);
> +
> +	/*
> +	 * If necessary, kill all tasks in the selected memory cgroup.
> +	 */
> +	if (oom_group) {

Isn't the combination of "killing a child process of the biggest memory
hog" and "killing all processes which belong to the memcg that the
child of the biggest memory hog belongs to" strange? The intent of
selecting a child is to try to minimize lost work, while the intent of
oom_cgroup is to discard all work. If oom_cgroup is enabled, I feel
that we should print

  pr_err("%s: Kill all processes in ", message);
  pr_cont_cgroup_path(memcg->css.cgroup);
  pr_cont(" due to memory.oom.group set\n");

without printing

  pr_err("%s: Kill process %d (%s) score %u or sacrifice child\n",
         message, task_pid_nr(p), p->comm, points);

(I mean, don't try to select a child).
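Something along these lines is what I mean (a rough, untested sketch
against this hunk; it reuses mem_cgroup_get_oom_group(),
mem_cgroup_scan_tasks() and oom_kill_memcg_member() from this patch,
and the rest of oom_kill_process() is elided):

static void oom_kill_process(struct oom_control *oc, const char *message)
{
	struct task_struct *victim = oc->chosen;
	struct mem_cgroup *oom_group;

	/*
	 * Look the group up before the "sacrifice child" heuristic:
	 * if the whole group is going to die anyway, picking a child
	 * to minimize lost work buys nothing.
	 */
	oom_group = mem_cgroup_get_oom_group(victim, oc->memcg);
	if (oom_group) {
		pr_err("%s: Kill all processes in ", message);
		pr_cont_cgroup_path(oom_group->css.cgroup);
		pr_cont(" due to memory.oom.group set\n");
		/* The victim is a group member, so the scan kills it too. */
		mem_cgroup_scan_tasks(oom_group, oom_kill_memcg_member, NULL);
		mem_cgroup_put(oom_group);
		put_task_struct(victim);	/* ref taken when chosen */
		return;
	}

	/*
	 * ... the existing "Kill process %d (%s) score %u or sacrifice
	 * child" selection and the __oom_kill_process(victim) call,
	 * unchanged ...
	 */
}

That way the dying group is killed as a unit, and the child-selection
heuristic only runs when it can actually save some work.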
> +		mem_cgroup_print_oom_group(oom_group);
> +		mem_cgroup_scan_tasks(oom_group, oom_kill_memcg_member, NULL);
> +		mem_cgroup_put(oom_group);
> +	}
>  }
>  
>  /*
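For reference, this is the kind of one-time agent I mentioned at the
top of this mail (a minimal, untested userspace sketch; the cgroup
path and the blanket-SIGKILL policy are my assumptions, not part of
Roman's patch). It relies on cgroup2 control files raising POLLPRI
when their content changes, so no long-running daemon is needed:

/* oneshot-oom-agent.c: handle exactly one cgroup OOM event, then exit. */
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Hypothetical default path; pass the real cgroup as argv[1]. */
	const char *cg = argc > 1 ? argv[1] : "/sys/fs/cgroup/workload";
	char path[4096];
	char line[64];
	struct pollfd pfd = { .events = POLLPRI };
	FILE *procs;

	snprintf(path, sizeof(path), "%s/memory.events", cg);
	pfd.fd = open(path, O_RDONLY);
	if (pfd.fd < 0) {
		perror(path);
		return 1;
	}

	/*
	 * Block until some counter in memory.events changes.  A real
	 * agent would re-read the file here and verify that it was the
	 * "oom" counter (and not e.g. "high" or "max") that went up.
	 */
	if (poll(&pfd, 1, -1) < 0) {
		perror("poll");
		return 1;
	}

	/* Kill every task in the cgroup, then terminate ourselves. */
	snprintf(path, sizeof(path), "%s/cgroup.procs", cg);
	procs = fopen(path, "r");
	if (!procs) {
		perror(path);
		return 1;
	}
	while (fgets(line, sizeof(line), procs))
		kill((pid_t)atol(line), SIGKILL);
	fclose(procs);
	return 0;
}

A real agent would also loop until cgroup.procs reads back empty (to
catch races with fork()), and something like systemd would have to
re-arm it after each run; but it shows that the userspace side can be
a one-shot helper rather than a monitor watching all cgroups.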