oom_lock isn't needed for __oom_reap_task_mm().  If MMF_UNSTABLE is
already set for the mm, we can simply back out immediately since oom
reaping is already in progress (or done).

Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
---
 mm/mmap.c     | 2 --
 mm/oom_kill.c | 6 ++++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index cd2431f46188..7f918eb725f6 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3072,9 +3072,7 @@ void exit_mmap(struct mm_struct *mm)
	 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
	 * __oom_reap_task_mm() will not block.
	 */
-	mutex_lock(&oom_lock);
	__oom_reap_task_mm(mm);
-	mutex_unlock(&oom_lock);

	/*
	 * Now, set MMF_UNSTABLE to avoid racing with the oom reaper.
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 0fe4087d5151..e6328cef090f 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -488,9 +488,11 @@ void __oom_reap_task_mm(struct mm_struct *mm)
	 * Tell all users of get_user/copy_from_user etc... that the content
	 * is no longer stable. No barriers really needed because unmapping
	 * should imply barriers already and the reader would hit a page fault
-	 * if it stumbled over a reaped memory.
+	 * if it stumbled over a reaped memory. If MMF_UNSTABLE is already set,
+	 * reaping has already occurred so there is nothing left to do.
	 */
-	set_bit(MMF_UNSTABLE, &mm->flags);
+	if (test_and_set_bit(MMF_UNSTABLE, &mm->flags))
+		return;

	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
		if (!can_madv_dontneed_vma(vma))