On Wed 26-07-17 14:27:15, Roman Gushchin wrote:
[...]
> @@ -656,13 +658,24 @@ static void mark_oom_victim(struct task_struct *tsk)
>  	struct mm_struct *mm = tsk->mm;
>
>  	WARN_ON(oom_killer_disabled);
> -	/* OOM killer might race with memcg OOM */
> -	if (test_and_set_tsk_thread_flag(tsk, TIF_MEMDIE))
> +
> +	if (!cmpxchg(&tif_memdie_owner, NULL, current)) {
> +		struct task_struct *t;
> +
> +		rcu_read_lock();
> +		for_each_thread(current, t)
> +			set_tsk_thread_flag(t, TIF_MEMDIE);
> +		rcu_read_unlock();
> +	}

I would really much rather see us limit the amount of memory reserves
OOM victims can consume than build on top of the current hackish
approach of limiting the number of tasks, because the fundamental
problem is still there: a heavy multithreaded process can still deplete
the reserves completely.

Is there really any reason not to go with the existing patch I pointed
to last time around? You didn't seem to have any objections back then.

-- 
Michal Hocko
SUSE Labs
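
For readers unfamiliar with the alternative being referred to, the idea of
capping how far an OOM victim may dig into the memory reserves (rather than
capping how many threads get TIF_MEMDIE) could be sketched roughly as below.
This is only an illustration under assumptions: the ALLOC_OOM flag, its value,
and the min/2 reduction are placeholders here, not necessarily what the
referenced patch actually does.

/* hypothetical allocator flag set for tsk_is_oom_victim() callers */
#define ALLOC_OOM	0x100

static bool zone_watermark_ok_sketch(struct zone *z, unsigned int order,
				     unsigned long mark,
				     unsigned int alloc_flags,
				     long free_pages)
{
	long min = mark;

	/* __GFP_HIGH callers may use up to half of the reserves */
	if (alloc_flags & ALLOC_HIGH)
		min -= min / 2;

	/*
	 * An OOM victim gets a further, but still bounded, dip into the
	 * reserves instead of the unconditional access that
	 * ALLOC_NO_WATERMARKS would grant via TIF_MEMDIE.
	 */
	if (alloc_flags & ALLOC_OOM)
		min -= min / 2;

	/* keep the very last pages for truly critical contexts */
	return free_pages - (1UL << order) >= min;
}

With a bound like this, even a heavily multithreaded victim cannot empty the
reserves completely, which is the failure mode the email argues the
tif_memdie_owner approach does not address.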