On Wed, 18 Apr 2018, Tetsuo Handa wrote:

> > Fix this by reusing MMF_UNSTABLE to specify that an mm should not be
> > reaped.  This prevents the concurrent munlock_vma_pages_range() and
> > unmap_page_range().  The oom reaper will simply not operate on an mm
> > that has the bit set and leave the unmapping to exit_mmap().
>
> This change assumes that
> munlock_vma_pages_all()/unmap_vmas()/free_pgtables() are never blocked
> for memory allocation.  Is that guaranteed?  For example, is
> i_mmap_lock_write() from unmap_single_vma() from unmap_vmas() never
> blocked for memory allocation?  Commit 97b1255cb27c551d ("mm,oom_reaper:
> check for MMF_OOM_SKIP before complaining") was waiting for
> i_mmap_lock_write() from unlink_file_vma() from free_pgtables().  Is it
> really guaranteed that somebody else who is holding that lock is never
> waiting for memory allocation?

Commit 97b1255cb27c refers to MMF_OOM_SKIP already being set by
exit_mmap().  The only thing this patch changes is where that is done:
before or after free_pgtables().  We could certainly move it to before
free_pgtables() at the risk of subsequent (and eventually unnecessary)
oom kills, but that is not the point of this patch.

I have thousands of real-world examples where additional processes were
oom killed while the original victim was still in free_pgtables().  That
is why we moved the setting of MMF_OOM_SKIP to after free_pgtables().

I'm not sure how likely your scenario is in the real world, but if it
poses a problem then I believe it should be fixed by eventually deferring
previous victims as a change to oom_evaluate_task(), not exit_mmap().  If
you'd like me to fix that, please send along a test case that triggers it
and I will send a patch.
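
For reference, the ordering in question looks roughly like this.  This is
a simplified sketch of exit_mmap(), not the exact mm/mmap.c code: the oom
reaper synchronization, mmu notifiers, munlock and accounting details are
all elided.

void exit_mmap(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma = mm->mmap;

	tlb_gather_mmu(&tlb, mm, 0, -1);
	unmap_vmas(&tlb, vma, 0, -1);
	/*
	 * Setting MMF_OOM_SKIP here, before free_pgtables(), would let
	 * the oom killer move on earlier, at the cost of premature and
	 * eventually unnecessary followup oom kills.
	 */
	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb, 0, -1);

	/*
	 * Setting it here, after free_pgtables(), is what this patch
	 * does: the oom killer cannot select additional victims while
	 * the original victim is still freeing its page tables above.
	 */
	set_bit(MMF_OOM_SKIP, &mm->flags);
}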
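
And to be concrete about the kind of deferral I mean in
oom_evaluate_task(), one possible shape is below.  This is purely
illustrative: oom_victim_stalled() is a hypothetical helper that does not
exist in the tree; it would check whether the victim has failed to free
memory for some grace period.  The rest of the selection logic is
unchanged.

	if (!is_sysrq_oom(oc) && tsk_is_oom_victim(task)) {
		if (test_bit(MMF_OOM_SKIP, &task->signal->oom_mm->flags))
			goto next;
		/*
		 * Keep aborting the scan (i.e. keep waiting on this
		 * victim) only while it is still making progress.  If
		 * it has stalled, e.g. blocked on i_mmap_lock_write(),
		 * skip it so another candidate can be selected rather
		 * than livelocking the oom killer forever.
		 *
		 * oom_victim_stalled() is hypothetical, see above.
		 */
		if (oom_victim_stalled(task))
			goto next;
		goto abort;
	}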