On 08/30, David Rientjes wrote:
>
> This removes mm->oom_disable_count entirely since it's unnecessary and
> currently buggy.  The counter was intended to be per-process, but it's
> currently decremented in the exit path for each thread that exits,
> causing it to underflow.
>
> The count was originally intended to prevent oom killing threads that
> share memory with threads that cannot be killed, since that doesn't lead
> to future memory freeing.  The counter could be fixed to represent all
> threads sharing the same mm, but it's better to remove the count since:
>
>  - it is possible that the OOM_DISABLE thread sharing memory with the
>    victim is waiting on that thread to exit and will actually cause
>    future memory freeing, and
>
>  - there is no guarantee that a thread is disabled from oom killing just
>    because another thread sharing its mm is oom disabled.

Great, thanks.

Even _if_ (I hope not) we decide to re-introduce this counter later, I
think it will be much simpler to start from the very beginning and make
the correct patch.

> @@ -447,6 +431,9 @@ static int oom_kill_task(struct task_struct *p, struct mem_cgroup *mem)
> 	for_each_process(q)
> 		if (q->mm == mm && !same_thread_group(q, p) &&
> 		    !(q->flags & PF_KTHREAD)) {

(I guess this is on top of the -mm patch.)

> +			if (q->signal->oom_score_adj == OOM_SCORE_ADJ_MIN)
> +				continue;
> +

Afaics, this is the only change apart from removing mm->oom_disable_count
entirely, and it looks reasonable to me.

Oleg.
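For reference, a rough sketch of how the quoted hunk sits inside
oom_kill_task() once mm->oom_disable_count is gone. Only the loop
condition and the new OOM_SCORE_ADJ_MIN check come from the hunk above;
the surrounding comment and the force_sig() call are an approximation of
the -mm code of that era, not taken from this mail:

	/*
	 * Sketch only -- surrounding function body approximated.
	 *
	 * Kill all user processes sharing the victim's mm in other thread
	 * groups, but skip any process the admin has marked OOM-disabled
	 * (oom_score_adj == OOM_SCORE_ADJ_MIN): as discussed above, such a
	 * process may be the one that eventually frees the memory.
	 */
	for_each_process(q)
		if (q->mm == mm && !same_thread_group(q, p) &&
		    !(q->flags & PF_KTHREAD)) {
			/* the new check from the hunk above */
			if (q->signal->oom_score_adj == OOM_SCORE_ADJ_MIN)
				continue;

			/* assumed from the surrounding -mm code */
			force_sig(SIGKILL, q);
		}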