Commit 93065ac753e44438 ("mm, oom: distinguish blockable mode for mmu
notifiers") added "continue;" without calling tlb_finish_mmu(). I don't
know whether the resulting tlb_flush_pending imbalance causes problems
beyond the extra cost, but at least it looks strange.

A more worrisome part of that patch is that I don't know whether using
trylock when blockable == false at entry is really sufficient. For
example, the call chain

  mn_invl_range_start()
    unmap_if_in_range()
      unmap_grant_pages()
        __unmap_grant_pages()
          gnttab_unmap_refs_sync()
            gnttab_unmap_refs_async()
              __gnttab_unmap_refs_async()

involves schedule_delayed_work(), which could be blocked on memory
allocation under an OOM situation, so wait_for_completion() from
gnttab_unmap_refs_sync() might deadlock? I don't know...

Signed-off-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
---
 mm/oom_kill.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index b5b25e4..4f431c1 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -522,6 +522,7 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 		tlb_gather_mmu(&tlb, mm, start, end);
 		if (mmu_notifier_invalidate_range_start_nonblock(mm, start, end)) {
+			tlb_finish_mmu(&tlb, start, end);
 			ret = false;
 			continue;
 		}
-- 
1.8.3.1