> From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Subject: mm, hugetlb: add VM_NORESERVE check in vma_has_reserves()
>
> If we map a region with MAP_NORESERVE and MAP_SHARED, the reserve
> counting check is skipped, so we are not guaranteed to be able to
> allocate a huge page at fault time.  The following example code easily
> reproduces the situation.
>
> Assume a 2MB huge page size and nr_hugepages = 100.
>
>	fd = hugetlbfs_unlinked_fd();
>	if (fd < 0)
>		return 1;
>
>	size = 200 * MB;
>	flag = MAP_SHARED;
>	p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, fd, 0);
>	if (p == MAP_FAILED) {
>		fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
>		return -1;
>	}
>
>	size = 2 * MB;
>	flag = MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE;
>	p = mmap(NULL, size, PROT_READ|PROT_WRITE, flag, -1, 0);
>	if (p == MAP_FAILED) {
>		fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
>		return -1;
>	}
>	p[0] = '0';
>	sleep(10);
>
> While the program is in sleep(10), run 'cat /proc/meminfo' from another
> process:
>
>	HugePages_Free:       99
>	HugePages_Rsvd:      100
>
> The number of free huge pages should be greater than or equal to the
> number of reserved huge pages, but here it is not.  This shows that the
> non-reserved shared mapping stole a reserved page; a non-reserved
> shared mapping should not eat into the reserve space.
>
> If we check VM_NORESERVE in vma_has_reserves() and return 0, meaning
> that the vma has no reserved pages, then dequeue_huge_page_vma() checks
> whether enough free pages remain before handing one out.  This prevents
> a reserved page from being stolen.
>
> With this change, the test above generates a SIGBUS, which is correct:
> all free pages are reserved, so a non-reserved shared mapping cannot
> get a free page.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> Reviewed-by: Wanpeng Li <liwanp@xxxxxxxxxxxxxxxxxx>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
> Acked-by: Hillf Danton <dhillf@xxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxx>

yes, this changelog is much better.  Thanks!

Acked-by: Michal Hocko <mhocko@xxxxxxx>
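As an aside for anyone reproducing this: with the fix applied, the test's
p[0] store on the non-reserved mapping is killed by SIGBUS once every free
huge page is reserved.  If you would rather have a reproducer that reports
the condition instead of dying, you can catch the signal and jump out of
the faulting store.  A minimal sketch of that, not part of the patch; it
assumes a 2MB huge page size and that all free huge pages have already
been reserved elsewhere (e.g. by the 200MB mapping from the test above):

	#define _GNU_SOURCE
	#include <errno.h>
	#include <setjmp.h>
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#define MB	(1024UL * 1024UL)

	static sigjmp_buf env;

	static void sigbus_handler(int sig)
	{
		siglongjmp(env, 1);	/* jump back out of the faulting store */
	}

	int main(void)
	{
		struct sigaction sa = { .sa_handler = sigbus_handler };
		char *p;

		sigemptyset(&sa.sa_mask);
		sigaction(SIGBUS, &sa, NULL);

		/* non-reserved shared hugetlb mapping, as in the test above */
		p = mmap(NULL, 2 * MB, PROT_READ | PROT_WRITE,
			 MAP_ANONYMOUS | MAP_SHARED | MAP_HUGETLB | MAP_NORESERVE,
			 -1, 0);
		if (p == MAP_FAILED) {
			fprintf(stderr, "mmap() failed: %s\n", strerror(errno));
			return 1;
		}

		if (sigsetjmp(env, 1)) {
			/* with the fix, we land here when all free pages are reserved */
			printf("SIGBUS: no unreserved huge page available\n");
			return 0;
		}

		p[0] = '0';	/* faults in a huge page, or raises SIGBUS */
		printf("huge page allocated at fault time\n");
		return 0;
	}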
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Davidlohr Bueso <davidlohr.bueso@xxxxxx>
> Cc: David Gibson <david@xxxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  mm/hugetlb.c |    2 ++
>  1 file changed, 2 insertions(+)
>
> diff -puN mm/hugetlb.c~mm-hugetlb-add-vm_noreserve-check-in-vma_has_reserves mm/hugetlb.c
> --- a/mm/hugetlb.c~mm-hugetlb-add-vm_noreserve-check-in-vma_has_reserves
> +++ a/mm/hugetlb.c
> @@ -464,6 +464,8 @@ void reset_vma_resv_huge_pages(struct vm
>  /* Returns true if the VMA has associated reserve pages */
>  static int vma_has_reserves(struct vm_area_struct *vma)
>  {
> +	if (vma->vm_flags & VM_NORESERVE)
> +		return 0;
>  	if (vma->vm_flags & VM_MAYSHARE)
>  		return 1;
>  	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER))
> _
>
> Patches currently in -mm which might be from iamjoonsoo.kim@xxxxxxx are
>
> mm-hugetlb-move-up-the-code-which-check-availability-of-free-huge-page.patch
> mm-hugetlb-trivial-commenting-fix.patch
> mm-hugetlb-clean-up-alloc_huge_page.patch
> mm-hugetlb-fix-and-clean-up-node-iteration-code-to-alloc-or-free.patch
> mm-hugetlb-remove-redundant-list_empty-check-in-gather_surplus_pages.patch
> mm-hugetlb-do-not-use-a-page-in-page-cache-for-cow-optimization.patch
> mm-hugetlb-add-vm_noreserve-check-in-vma_has_reserves.patch
> mm-hugetlb-remove-decrement_hugepage_resv_vma.patch
> mm-hugetlb-decrement-reserve-count-if-vm_noreserve-alloc-page-cache.patch
> mm-hugetlb-decrement-reserve-count-if-vm_noreserve-alloc-page-cache-fix.patch

--
Michal Hocko
SUSE Labs