The patch titled
     Subject: mm/oom_kill: ensure MMU notifier range_end() is paired with range_start()
has been added to the -mm tree.  Its filename is
     mm-oom_kill-ensure-mmu-notifier-range_end-is-paired-with-range_start.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-oom_kill-ensure-mmu-notifier-range_end-is-paired-with-range_start.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-oom_kill-ensure-mmu-notifier-range_end-is-paired-with-range_start.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Subject: mm/oom_kill: ensure MMU notifier range_end() is paired with range_start()

Invoke the MMU notifier's .invalidate_range_end() callbacks even if one of
the .invalidate_range_start() callbacks failed.  If there are multiple
notifiers, the notifier that did not fail may have performed actions in
its ...start() that it expects to unwind via ...end().  Per the
mmu_notifier_ops documentation, ...start() and ...end() must be paired.

The only in-kernel usage that is fatally broken is the SGI UV GRU driver,
which effectively blocks and sleeps fault handlers during ...start(), and
unblocks/wakes the handlers during ...end().  But, the only users that can
fail ...start() are the i915 and Nouveau drivers, which are unlikely to
collide with the SGI driver.  KVM is the only other user of ...end(), and
while KVM also blocks fault handlers in ...start(), the fault handlers do
not sleep and originate in killable ioctl() calls.  So while it's possible
for the i915 and Nouveau drivers to collide with KVM, the bug is benign
for KVM since the process is dying and KVM's guest is about to be
terminated.

So, as of today, the bug is likely benign.  But, that may not always be
true, e.g. there is a potential use case for blocking memslot updates in
KVM while an invalidation is in-progress, and failure to unblock would
result in said updates being blocked indefinitely and hanging.

Found by inspection.  Verified by adding a second notifier in KVM that
periodically returns -EAGAIN on non-blockable ranges, triggering OOM, and
observing that KVM exits with an elevated notifier count.
Link: https://lkml.kernel.org/r/20210310213117.1444147-1-seanjc@xxxxxxxxxx
Fixes: 93065ac753e4 ("mm, oom: distinguish blockable mode for mmu notifiers")
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Reviewed-by: Ben Gardon <bgardon@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: "Jérôme Glisse" <jglisse@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Dimitri Sivanich <dimitri.sivanich@xxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/oom_kill.c |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

--- a/mm/oom_kill.c~mm-oom_kill-ensure-mmu-notifier-range_end-is-paired-with-range_start
+++ a/mm/oom_kill.c
@@ -546,12 +546,10 @@ bool __oom_reap_task_mm(struct mm_struct
 						vma, mm, vma->vm_start,
 						vma->vm_end);
 			tlb_gather_mmu(&tlb, mm);
-			if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
-				tlb_finish_mmu(&tlb);
+			if (!mmu_notifier_invalidate_range_start_nonblock(&range))
+				unmap_page_range(&tlb, vma, range.start, range.end, NULL);
+			else
 				ret = false;
-				continue;
-			}
-			unmap_page_range(&tlb, vma, range.start, range.end, NULL);
 			mmu_notifier_invalidate_range_end(&range);
 			tlb_finish_mmu(&tlb);
 		}
_

Patches currently in -mm which might be from seanjc@xxxxxxxxxx are

mm-oom_kill-ensure-mmu-notifier-range_end-is-paired-with-range_start.patch
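
As a rough sketch of the pairing contract the changelog relies on (not
taken from the patch or from any real driver): a notifier that blocks
fault handling in ...start() and unblocks it in ...end(), in the style the
changelog attributes to the GRU driver and KVM, could look like the
following.  The my_dev structure, its fields, and the my_* function names
are invented for illustration; only the mmu_notifier_ops callbacks and
their signatures are real kernel interfaces.

/* Hypothetical notifier sketch -- illustration only, not a real driver. */
#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

struct my_dev {
	struct mmu_notifier notifier;
	spinlock_t lock;
	int active_invalidations;	/* fault handlers wait while this is > 0 */
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_dev *dev = container_of(mn, struct my_dev, notifier);

	spin_lock(&dev->lock);
	dev->active_invalidations++;	/* block new faults on the range */
	spin_unlock(&dev->lock);
	return 0;			/* this notifier never fails ...start() */
}

static void my_invalidate_range_end(struct mmu_notifier *mn,
				    const struct mmu_notifier_range *range)
{
	struct my_dev *dev = container_of(mn, struct my_dev, notifier);

	spin_lock(&dev->lock);
	dev->active_invalidations--;	/* unblock faults again */
	spin_unlock(&dev->lock);
}

static const struct mmu_notifier_ops my_notifier_ops = {
	.invalidate_range_start = my_invalidate_range_start,
	.invalidate_range_end   = my_invalidate_range_end,
};

If a different notifier in the same chain fails its nonblocking ...start()
and the OOM reaper then skips ...end() entirely, as it did before this
patch, active_invalidations never returns to zero and faults stay blocked
indefinitely.  Calling mmu_notifier_invalidate_range_end() unconditionally
restores the pairing for the notifiers whose ...start() succeeded.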