[RFC PATCH v2 19/47] hugetlb: make hugetlb_follow_page_mask HGM-enabled

The change here is simple: instead of locking the hstate-level PTE directly, do a high-granularity walk down to the smallest mapped level, and retry the walk if someone splits the PTE out from under us before we take the lock.

Signed-off-by: James Houghton <jthoughton@xxxxxxxxxx>
---
 mm/hugetlb.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d76ab32fb6d3..5783a8307a77 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6408,6 +6408,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t *pte, entry;
+	struct hugetlb_pte hpte;
 
 	/*
 	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
@@ -6429,9 +6430,22 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		return NULL;
 	}
 
-	ptl = huge_pte_lock(h, mm, pte);
+retry_walk:
+	hugetlb_pte_populate(&hpte, pte, huge_page_shift(h),
+			hpage_size_to_level(huge_page_size(h)));
+	hugetlb_hgm_walk(mm, vma, &hpte, address,
+			PAGE_SIZE,
+			/*stop_at_none=*/true);
+
+	ptl = hugetlb_pte_lock(mm, &hpte);
 	entry = huge_ptep_get(pte);
 	if (pte_present(entry)) {
+		if (unlikely(!hugetlb_pte_present_leaf(&hpte, entry))) {
+			/* We raced with someone splitting from under us. */
+			spin_unlock(ptl);
+			goto retry_walk;
+		}
+
 		page = pte_page(entry) +
 				((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
 		/*
-- 
2.38.0.135.g90850a2211-goog
