Re: Multiple oom_reaper BUGs: unmap_page_range racing with exit_mmap

David Rientjes wrote:
> On Tue, 5 Dec 2017, David Rientjes wrote:
> 
> > One way to solve the issue is to have two mm flags: one to indicate the mm 
> > is entering unmap_vmas(): set the flag, do down_write(&mm->mmap_sem); 
> > up_write(&mm->mmap_sem), then unmap_vmas().  The oom reaper needs this 
> > flag clear, not MMF_OOM_SKIP, while holding down_read(&mm->mmap_sem) to be 
> > allowed to call unmap_page_range().  The oom killer will still defer 
> > selecting this victim for MMF_OOM_SKIP after unmap_vmas() returns.
> > 
> > The result of that change would be that we do not oom reap from any mm 
> > entering unmap_vmas(): we let unmap_vmas() do the work itself and avoid 
> > racing with it.
> > 
> 
> I think we need something like the following?
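If I understand the proposal correctly, the intended ordering is something
like the sketch below (only my reading, not the actual patch; MMF_REAPING
is the flag the patch appears to introduce):

	/* exit_mmap() side */
	set_bit(MMF_REAPING, &mm->flags);	/* we are about to call unmap_vmas() */
	down_write(&mm->mmap_sem);		/* wait for a reaper holding the read lock */
	up_write(&mm->mmap_sem);
	unmap_vmas(&tlb, vma, 0, -1);

	/* __oom_reap_task_mm() side, while holding down_read(&mm->mmap_sem) */
	if (test_bit(MMF_REAPING, &mm->flags))
		return true;			/* back off; the exiting task will unmap */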

This patch does not work. __oom_reap_task_mm() can find MMF_REAPING,
return true, and cause MMF_OOM_SKIP to be set before exit_mmap() calls
down_write().
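That is, nothing seems to prevent the following interleaving:

	exit_mmap()				__oom_reap_task_mm() / oom_reap_task()

	set_bit(MMF_REAPING, &mm->flags);
						down_read_trylock(&mm->mmap_sem) succeeds
						finds MMF_REAPING, returns true
						up_read(&mm->mmap_sem);
						set_bit(MMF_OOM_SKIP, &mm->flags);
	down_write(&mm->mmap_sem);
	up_write(&mm->mmap_sem);
	unmap_vmas(&tlb, vma, 0, -1);		/* memory is freed only here */

MMF_OOM_SKIP is already set before unmap_vmas() has freed anything, so the
OOM killer is no longer guaranteed to defer selecting the next victim until
after unmap_vmas() returns, which was the point of the patch.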

Also, I don't fully understand what exit_mmap() is doing, but I think there
is a possibility that the OOM reaper tries to reclaim mlocked pages as soon
as exit_mmap() clears the VM_LOCKED flag by calling munlock_vma_pages_all():

	if (mm->locked_vm) {
		vma = mm->mmap;
		while (vma) {
			if (vma->vm_flags & VM_LOCKED)
				munlock_vma_pages_all(vma);
			vma = vma->vm_next;
		}
	}

/*
 * munlock_vma_pages_range() - munlock all pages in the vma range.'
 * @vma - vma containing range to be munlock()ed.
 * @start - start address in @vma of the range
 * @end - end of range in @vma.
 *
 *  For mremap(), munmap() and exit().
 *
 * Called with @vma VM_LOCKED.
 *
 * Returns with VM_LOCKED cleared.  Callers must be prepared to
 * deal with this.
 *
 * We don't save and restore VM_LOCKED here because pages are
 * still on lru.  In unmap path, pages might be scanned by reclaim
 * and re-mlocked by try_to_{munlock|unmap} before we unmap and
 * free them.  This will result in freeing mlocked pages.
 */
void munlock_vma_pages_range(struct vm_area_struct *vma,
                             unsigned long start, unsigned long end)
{
	vma->vm_flags &= VM_LOCKED_CLEAR_MASK;

	while (start < end) {
		/*
		 * (the actual per-page munlock work is done here)
		 *
		 * But at this point, __oom_reap_task_mm() can already call
		 * unmap_page_range(): can_madv_dontneed_vma() returns true
		 * because VM_LOCKED has been cleared above, while
		 * MMF_OOM_SKIP is not yet set.
		 */
	}
}
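
For reference, the reaper side is doing roughly this (trimmed):

	static inline bool can_madv_dontneed_vma(struct vm_area_struct *vma)
	{
		return !(vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP));
	}

and in __oom_reap_task_mm():

	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
		if (!can_madv_dontneed_vma(vma))
			continue;
		/* ... anonymous/private check and mmu_gather setup ... */
		unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end, NULL);
		/* ... */
	}

So once munlock_vma_pages_range() clears VM_LOCKED from vma->vm_flags,
can_madv_dontneed_vma() no longer skips that vma, and the reaper can unmap
it in parallel with the munlock work.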
