The patch titled
     Subject: mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas
has been added to the -mm tree.  Its filename is
     mm-hugetlb-avoid-looping-to-the-same-hugepage-if-pages-and-vmas.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-hugetlb-avoid-looping-to-the-same-hugepage-if-pages-and-vmas.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-hugetlb-avoid-looping-to-the-same-hugepage-if-pages-and-vmas.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Zhigang Lu <tonnylu@xxxxxxxxxxx>
Subject: mm/hugetlb: avoid looping to the same hugepage if !pages and !vmas

When mmapping an existing hugetlbfs file with MAP_POPULATE, we find it is
very time consuming.  For example, mmapping a 128GB file takes about 50
milliseconds.  Sampling with perf shows that 99% of the time is spent in
the same_page loop in follow_hugetlb_page():

samples: 205  of event 'cycles', Event count (approx.): 136686374
-  99.04%  test_mmap_huget  [kernel.kallsyms]  [k] follow_hugetlb_page
        follow_hugetlb_page
        __get_user_pages
        __mlock_vma_pages_range
        __mm_populate
        vm_mmap_pgoff
        sys_mmap_pgoff
        sys_mmap
        system_call_fastpath
        __mmap64

follow_hugetlb_page() is called with pages=NULL and vmas=NULL, so for each
hugepage we iterate the same_page loop pages_per_huge_page() times while
doing nothing useful.  With this change, it takes less than 1 millisecond
to mmap a 128GB file in hugetlbfs.

Link: http://lkml.kernel.org/r/1567581712-5992-1-git-send-email-totty.lu@xxxxxxxxx
Signed-off-by: Zhigang Lu <tonnylu@xxxxxxxxxxx>
Reviewed-by: Haozhong Zhang <hzhongzhang@xxxxxxxxxxx>
Reviewed-by: Zongming Zhang <knightzhang@xxxxxxxxxxx>
Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Acked-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |   15 +++++++++++++++
 1 file changed, 15 insertions(+)

--- a/mm/hugetlb.c~mm-hugetlb-avoid-looping-to-the-same-hugepage-if-pages-and-vmas
+++ a/mm/hugetlb.c
@@ -4412,6 +4412,21 @@ long follow_hugetlb_page(struct mm_struc
 				break;
 			}
 		}
+
+		/*
+		 * If subpage information not requested, update counters
+		 * and skip the same_page loop below.
+		 */
+		if (!pages && !vmas && !pfn_offset &&
+		    (vaddr + huge_page_size(h) < vma->vm_end) &&
+		    (remainder >= pages_per_huge_page(h))) {
+			vaddr += huge_page_size(h);
+			remainder -= pages_per_huge_page(h);
+			i += pages_per_huge_page(h);
+			spin_unlock(ptl);
+			continue;
+		}
+
 same_page:
 		if (pages) {
 			pages[i] = mem_map_offset(page, pfn_offset);
_

Patches currently in -mm which might be from tonnylu@xxxxxxxxxxx are

mm-hugetlb-avoid-looping-to-the-same-hugepage-if-pages-and-vmas.patch
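
The slowdown described in the changelog can be observed from userspace with a
small test program.  The sketch below is only a minimal reproduction sketch,
not taken from the patch or the original report: the hugetlbfs mount point,
the default file name, and the assumption that the file is already created and
sized are all hypothetical, and the real benchmark behind the perf output
above ("test_mmap_huget") is not part of the submission.  It times an
mmap(MAP_POPULATE) of an existing hugetlbfs file, which reaches
follow_hugetlb_page() with pages==NULL and vmas==NULL via __mm_populate().

/*
 * Minimal reproduction sketch (not from the patch): time how long
 * mmap(MAP_POPULATE) of an existing hugetlbfs file takes.  The default
 * path below assumes a hugetlbfs mount at /dev/hugepages and an
 * already-created, already-sized file there.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/dev/hugepages/testfile";
	struct timespec t0, t1;
	struct stat st;
	void *addr;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(path);
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	/* MAP_POPULATE drives __mm_populate() -> follow_hugetlb_page() */
	addr = mmap(NULL, st.st_size, PROT_READ,
		    MAP_SHARED | MAP_POPULATE, fd, 0);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("mmap(MAP_POPULATE) of %lld bytes: %.3f ms\n",
	       (long long)st.st_size,
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);

	munmap(addr, st.st_size);
	close(fd);
	return 0;
}

Running this against a large pre-allocated hugetlbfs file before and after the
patch should show the difference reported in the changelog (tens of
milliseconds versus under a millisecond for a 128GB file), assuming enough
huge pages are reserved on the system.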