This reverts the patch "hugetlb: avoid taking i_mmap_mutex in
unmap_single_vma() for hugetlb" from mmotm. That patch is possibly a
mistake and blocks the merging of a hugetlb fix where page tables can
get corrupted (https://lkml.org/lkml/2012/7/24/93).

The motivation for the patch appears to be two-fold. First, it assumes
that the i_mmap_mutex is only there to protect against list corruption
on page->lru, but that is not quite accurate. For shared page tables,
the i_mmap_mutex protects against races when sharing and unsharing the
page tables; for example, an important use of i_mmap_mutex is to
stabilise the page_count of the PMD page during huge_pmd_unshare.
Second, the patch guards against a potential deadlock when
unmap_single_vma() is called from unmap_mapping_range(). However,
hugetlbfs should never be in that path: it has its own setattr and
truncate handlers, which are the paths that use unmap_mapping_range().

Unless Aneesh has another reason for the patch, it should be reverted
to preserve the hugetlb page-sharing locking.

Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
---
 mm/memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8a989f1..22bc695 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1344,8 +1344,11 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 			 * Since no pte has actually been setup, it is
 			 * safe to do nothing in this case.
 			 */
-			if (vma->vm_file)
+			if (vma->vm_file) {
+				mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
 				__unmap_hugepage_range(tlb, vma, start, end, NULL);
+				mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
+			}
 		} else
 			unmap_page_range(tlb, vma, start, end, details);
 	}
-- 
1.7.9.2
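
For reference, the unshare side that this locking stabilises looks
roughly like the following. This is a simplified sketch along the lines
of the x86 huge_pmd_unshare() of this era (arch/x86/mm/hugetlbpage.c),
abbreviated for illustration rather than the exact upstream code:

	/*
	 * Sketch: unshare a hugetlb PMD page. page_count() of the PMD
	 * page acts as the sharing refcount, so the caller must hold
	 * i_mmap_mutex to keep the count stable between the check and
	 * the pud_clear()/put_page() below.
	 */
	int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
			     pte_t *ptep)
	{
		pgd_t *pgd = pgd_offset(mm, *addr);
		pud_t *pud = pud_offset(pgd, *addr);

		BUG_ON(page_count(virt_to_page(ptep)) == 0);
		if (page_count(virt_to_page(ptep)) == 1)
			return 0;	/* not shared, nothing to unshare */

		/* Drop this mm's reference to the shared PMD page */
		pud_clear(pud);
		put_page(virt_to_page(ptep));
		*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
		return 1;
	}

Without i_mmap_mutex held around __unmap_hugepage_range(), a concurrent
sharer could change that page_count() between the check and the
put_page(), which is the class of race this revert restores protection
against.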