From: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>

The i_mmap_mutex lock was added in unmap_single_vma by commit 502717f4e
("hugetlb: fix linked list corruption in unmap_hugepage_range()"), but we
don't use page->lru in unmap_hugepage_range any more. Also, in some code
paths the lock is already taken higher up the stack, which would result in
a deadlock:

unmap_mapping_range (i_mmap_mutex)
 -> unmap_mapping_range_tree
    -> unmap_mapping_range_vma
       -> zap_page_range_single
          -> unmap_single_vma
             -> unmap_hugepage_range (i_mmap_mutex)

For shared page table support for huge pages, since page table pages are
refcounted, we don't need any lock during huge_pmd_unshare. We do take
i_mmap_mutex in huge_pmd_share while walking the vma_prio_tree in the
mapping (commit 39dde65c9940c97f ("shared page table for hugetlb page")).

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
---
 mm/memory.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 545e18a..f6bc04f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1326,11 +1326,8 @@ static void unmap_single_vma(struct mmu_gather *tlb,
			 * Since no pte has actually been setup, it is
			 * safe to do nothing in this case.
			 */
-			if (vma->vm_file) {
-				mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
+			if (vma->vm_file)
				__unmap_hugepage_range(tlb, vma, start, end, NULL);
-				mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
-			}
		} else
			unmap_page_range(tlb, vma, start, end, details);
	}
--
1.7.10