On Fri 21-07-17 06:47:11, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Wed 19-07-17 05:51:03, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Tue 18-07-17 23:06:50, Tetsuo Handa wrote:
> > > > > Commit e2fe14564d3316d1 ("oom_reaper: close race with exiting task")
> > > > > guarded the whole OOM reaping operation with oom_lock. But there was
> > > > > no need to guard the whole operation. We needed to guard only the
> > > > > setting of the MMF_OOM_REAPED flag, because get_page_from_freelist()
> > > > > in __alloc_pages_may_oom() is called with oom_lock held.
> > > > >
> > > > > If we change to guarding only the setting of the MMF_OOM_SKIP flag,
> > > > > the OOM reaper can start reaping as soon as wake_oom_reaper() is
> > > > > called. But since the setting of MMF_OOM_SKIP at __mmput() is not
> > > > > guarded by oom_lock, guarding only the OOM reaper side is not
> > > > > sufficient.
> > > > >
> > > > > If we change the OOM killer side to ignore the MMF_OOM_SKIP flag
> > > > > once, there is no need to guard the setting of MMF_OOM_SKIP at all,
> > > > > and we can guarantee a chance to call get_page_from_freelist() in
> > > > > __alloc_pages_may_oom() without depending on oom_lock serialization.
> > > > >
> > > > > This patch makes MMF_OOM_SKIP act as MMF_OOM_REAPED does today, and
> > > > > adds a new flag which acts as MMF_OOM_SKIP does today, in order to
> > > > > close both race windows (the OOM reaper side and the __mmput() side)
> > > > > without using oom_lock.
> > > >
> > > > Why do we need this patch when
> > > > http://lkml.kernel.org/r/20170626130346.26314-1-mhocko@xxxxxxxxxx
> > > > already removes the lock and solves another problem at once?
> > >
> > > We haven't got an answer from Hugh and/or Andrea whether that patch is
> > > safe.
> >
> > So what? I haven't seen anybody disputing the correctness. And to be
> > honest I really dislike your patch. Yet-another-round kinds of solutions
> > are usually just very ugly hacks because they are highly timing
> > sensitive.
>
> Yes, the OOM killer is highly timing sensitive.
>
> > > Even if that patch is safe, this patch still helps with the
> > > CONFIG_MMU=n case.
> >
> > Could you explain how?
>
> Nothing prevents the sequence below.
>
>   Process-1 (allocating)                Process-2 (OOM victim)
>
>   Takes oom_lock.
>   Fails get_page_from_freelist().
>   Enters out_of_memory().
>                                         Gets SIGKILL.
>                                         Gets TIF_MEMDIE.
>   Leaves out_of_memory().
>   Releases oom_lock.
>                                         Enters do_exit().
>                                         Calls __mmput().
>   Takes oom_lock.
>   Fails get_page_from_freelist().
>                                         Releases some memory.
>                                         Sets MMF_OOM_SKIP.
>   Enters out_of_memory().
>   Selects next victim because there
>   is no !MMF_OOM_SKIP mm.
>   Sends SIGKILL needlessly.
>
> If we ignore MMF_OOM_SKIP once, we can avoid the sequence above.

But we set MMF_OOM_SKIP _after_ the process has lost its address space
(well, after the patch which allows the oom reaper to race with
exit_mmap).

>   Process-1 (allocating)                Process-2 (OOM victim)
>
>   Takes oom_lock.
>   Fails get_page_from_freelist().
>   Enters out_of_memory().
>                                         Gets SIGKILL.
>                                         Gets TIF_MEMDIE.
>   Leaves out_of_memory().
>   Releases oom_lock.
>                                         Enters do_exit().
>                                         Calls __mmput().
>   Takes oom_lock.
>   Fails get_page_from_freelist().
>                                         Releases some memory.
>                                         Sets MMF_OOM_SKIP.
>   Enters out_of_memory().
>   Ignores the MMF_OOM_SKIP mm once.
>   Leaves out_of_memory().
>   Releases oom_lock.
>   Succeeds get_page_from_freelist().

OK, so let's say you have another task just about to jump into
out_of_memory() and ... it ends up in the same situation. This race is
just unavoidable.
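For concreteness, the "Ignores the MMF_OOM_SKIP mm once" step above would
look roughly like this in the victim selection path. This is a sketch
only, not your actual patch: MMF_OOM_SKIP_TRIED is a made-up name for the
additional flag your patch description mentions, and the surrounding
labels are the ones already present in oom_evaluate_task().

static int oom_evaluate_task(struct task_struct *task, void *arg)
{
        struct oom_control *oc = arg;

        if (oom_unkillable_task(task, NULL, oc->nodemask))
                goto next;

        if (!is_sysrq_oom(oc) && tsk_is_oom_victim(task)) {
                struct mm_struct *mm = task->signal->oom_mm;

                /*
                 * First time MMF_OOM_SKIP is seen on this mm: remember
                 * that and abort the OOM kill, so that the caller gets
                 * one more get_page_from_freelist() attempt before a
                 * new victim is selected.
                 */
                if (test_bit(MMF_OOM_SKIP, &mm->flags) &&
                    !test_and_set_bit(MMF_OOM_SKIP_TRIED, &mm->flags))
                        goto abort;
                /* Seen before: this victim will not release anything. */
                if (test_bit(MMF_OOM_SKIP_TRIED, &mm->flags))
                        goto next;
                goto abort;
        }

        /* ... badness evaluation, next:/abort: labels as in current code ... */
}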
> Strictly speaking, this patch is independent of the OOM reaper.
> This patch increases the possibility of get_page_from_freelist()
> succeeding without sending SIGKILL. Your patch is trying to drop it
> silently.
>
> Serializing the setting of MMF_OOM_SKIP with oom_lock is one approach,
> and ignoring MMF_OOM_SKIP once without oom_lock is another approach.

Or simply making sure that we only set the flag _after_ the address
space is gone, which is what I am proposing.
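The ordering I mean looks roughly like the following. Again a sketch,
not the patch linked above: mm_is_oom_victim() here just stands for
"this mm belongs to an OOM victim" and the exact check may differ.

void exit_mmap(struct mm_struct *mm)
{
        /* ... unmap_vmas() / free_pgtables() teardown as today ... */

        /*
         * Only once the address space has actually been torn down do
         * we tell the OOM killer that nothing more can be reclaimed
         * from this mm. Setting the flag any earlier re-opens the race
         * where a new victim is selected while memory is still being
         * freed.
         */
        if (unlikely(mm_is_oom_victim(mm)))
                set_bit(MMF_OOM_SKIP, &mm->flags);

        /* ... remove_vma() loop, accounting, etc. ... */
}

-- 
Michal Hocko
SUSE Labs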