Theoretically it is possible that dup_mmap() of an mm_struct with 60000+
vmas loops while potentially allocating memory, with mm->mmap_sem held
for write by the current thread. Unless I overlooked that
fatal_signal_pending() is somewhere in the loop, this is bad if the
current thread was selected as an OOM victim, for the current thread
will continue allocations using memory reserves while the OOM reaper is
unable to reclaim memory.

But there is no point in continuing the loop from the beginning if the
current thread has been killed. If we had __GFP_KILLABLE (or something
like memalloc_nofs_save()/memalloc_nofs_restore()), we could apply it to
all allocations inside the loop. But since we don't have such a flag,
this patch adds a fatal_signal_pending() check inside the loop.

Signed-off-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
---
 kernel/fork.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/fork.c b/kernel/fork.c
index 1e8c9a7..38d5baa 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -440,6 +440,10 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 			continue;
 		}
 		charge = 0;
+		if (fatal_signal_pending(current)) {
+			retval = -EINTR;
+			goto out;
+		}
 		if (mpnt->vm_flags & VM_ACCOUNT) {
 			unsigned long len = vma_pages(mpnt);
-- 
1.8.3.1