On Wed, 3 Feb 2016, Michal Hocko wrote:

> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 9a0e4e5f50b4..840e03986497 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -443,13 +443,6 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
>  			continue;
>  
>  		/*
> -		 * mlocked VMAs require explicit munlocking before unmap.
> -		 * Let's keep it simple here and skip such VMAs.
> -		 */
> -		if (vma->vm_flags & VM_LOCKED)
> -			continue;
> -
> -		/*
>  		 * Only anonymous pages have a good chance to be dropped
>  		 * without additional steps which we cannot afford as we
>  		 * are OOM already.
> @@ -459,9 +452,12 @@ static bool __oom_reap_vmas(struct mm_struct *mm)
>  		 * we do not want to block exit_mmap by keeping mm ref
>  		 * count elevated without a good reason.
>  		 */
> -		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
> +		if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
> +			if (vma->vm_flags & VM_LOCKED)
> +				munlock_vma_pages_all(vma);
>  			unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
>  					 &details);
> +		}
>  	}
>  	tlb_finish_mmu(&tlb, 0, -1);
>  	up_read(&mm->mmap_sem);

Are we concerned about munlock_vma_pages_all() taking lock_page() and
perhaps stalling forever, the same way it would stall in exit_mmap() for
VM_LOCKED vmas, if another thread has locked the same page and is doing
an allocation?

I'm wondering if in that case it would be better to do a best-effort
munlock_vma_pages_all() with trylock_page() and just give up on releasing
memory from that particular vma.  In that case, there may be other memory
that can be freed with unmap_page_range() that would handle this
livelock.
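Roughly what I have in mind, as an untested sketch rather than a patch:
__oom_munlock_vma() is a made-up name, THP and pagevec batching are
ignored, and the caller is assumed to hold mm->mmap_sem for read, which
the reaper already does:

static bool __oom_munlock_vma(struct vm_area_struct *vma)
{
	bool released_all = true;
	unsigned long addr;

	for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
		struct page *page = follow_page(vma, addr, FOLL_GET);

		if (!page || IS_ERR(page))
			continue;
		/*
		 * Best effort only: if another thread holds the page
		 * lock, possibly while blocked in an allocation, skip
		 * the page instead of sleeping in lock_page() and
		 * stalling the reaper indefinitely.
		 */
		if (trylock_page(page)) {
			munlock_vma_page(page);
			unlock_page(page);
		} else {
			released_all = false;
		}
		put_page(page);
	}
	return released_all;
}

__oom_reap_vmas() would then only do the unmap_page_range() for a
VM_LOCKED vma when this returns true, i.e. give up on that vma when any
of its pages is still mlocked and move on to reap the others.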