On Tue 19-07-16 08:40:48, Michal Hocko wrote:
> On Tue 19-07-16 06:30:42, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > I really do not think that this unlikely case has to be handled
> > > now. We are very likely going to move to a different model of oom
> > > victim detection soon, so let's not add new hacks. exit_oom_victim
> > > from oom_kill_process just looks like sand in the eyes.
> >
> > Then please revert "mm, oom: hide mm which is shared with kthread or global init"
> > ( http://lkml.kernel.org/r/1466426628-15074-11-git-send-email-mhocko@xxxxxxxxxx ).
> > I don't like that patch because it is doing a pointless find_lock_task_mm()
> > test and is telling a lie, because it does not guarantee that we won't
> > hit the OOM livelock.
>
> The above patch doesn't make the situation worse wrt the livelock. I
> consider it an improvement. It adds find_lock_task_mm into
> oom_scan_process_thread, but that can hardly be worse than just the
> task->signal->oom_victims check, because we can catch MMF_OOM_REAPED.
> If the mm is already gone, which is the less likely case, then we
> behave the same as with the previous implementation.
>
> So I do not really see a reason to revert that patch for now.

That being said, if you strongly disagree with the wording, then what
about the following:
"
In order to help forward progress of the OOM killer, make sure that
these really rare cases will not get in the way, and hide the mm from
the oom killer by setting the MMF_OOM_REAPED flag for it.
oom_scan_process_thread will ignore any TIF_MEMDIE task whose mm has
the MMF_OOM_REAPED flag set, in order to catch these oom victims.

After this patch we should guarantee forward progress for the OOM
killer even when the selected victim is sharing memory with a kernel
thread or global init, as long as the victim's mm is still alive.
"
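
Just to make the proposed wording less abstract, the check on the
oom_scan_process_thread side would be something along the following
lines (a sketch from memory rather than the exact hunk, so modulo the
details of the current tree):

enum oom_scan_t oom_scan_process_thread(struct oom_control *oc,
					struct task_struct *task)
{
	if (oom_unkillable_task(task, NULL, oc->nodemask))
		return OOM_SCAN_CONTINUE;

	/*
	 * This task already has access to memory reserves and is being
	 * killed. Do not allow any other task to access the reserves
	 * unless the victim's mm is flagged MMF_OOM_REAPED, because the
	 * chances that such an mm would still release memory are low.
	 */
	if (!is_sysrq_oom(oc) && atomic_read(&task->signal->oom_victims)) {
		struct task_struct *p = find_lock_task_mm(task);
		enum oom_scan_t ret = OOM_SCAN_ABORT;

		if (p) {
			/*
			 * MMF_OOM_REAPED means that waiting for this
			 * victim is pointless, so keep scanning for
			 * another candidate instead of livelocking
			 * on it.
			 */
			if (test_bit(MMF_OOM_REAPED, &p->mm->flags))
				ret = OOM_SCAN_CONTINUE;
			task_unlock(p);
		}

		return ret;
	}
	[...]
}

-- 
Michal Hocko
SUSE Labs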