Patch "mm/hugetlb: prepare hugetlb_follow_page_mask() for FOLL_PIN" has been added to the 6.5-stable tree

This is a note to let you know that I've just added the patch titled

    mm/hugetlb: prepare hugetlb_follow_page_mask() for FOLL_PIN

to the 6.5-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-hugetlb-prepare-hugetlb_follow_page_mask-for-foll.patch
and it can be found in the queue-6.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit e758d76b8656257c2b14cc6d93f3b7187f7041b9
Author: Peter Xu <peterx@xxxxxxxxxx>
Date:   Wed Jun 28 17:53:04 2023 -0400

    mm/hugetlb: prepare hugetlb_follow_page_mask() for FOLL_PIN
    
    [ Upstream commit 458568c92953dee3716234711f1a2830a35261f3 ]
    
    follow_page() doesn't use FOLL_PIN, and hugetlb doesn't seem to be a
    target of FOLL_WRITE either.  However, add the checks anyway.
    
    Namely, check for either the need to CoW due to a missing write bit, or
    the need to properly unshare a !AnonExclusive page over an R/O pin, and
    reject the follow-page attempt in both cases.  That brings this function
    closer to follow_hugetlb_page().
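
    For context, gup_must_unshare() is the mm-internal helper that decides
    whether a read-only pin must unshare the page first.  Roughly (a
    simplified sketch; the real helper in include/linux/mm.h also handles
    GUP-fast ordering and the shared zeropage):

        static inline bool gup_must_unshare(struct vm_area_struct *vma,
                                            unsigned int flags, struct page *page)
        {
                /* Writable accesses never need unsharing. */
                if (flags & FOLL_WRITE)
                        return false;
                /* Only FOLL_PIN is affected; plain FOLL_GET is not. */
                if (!(flags & FOLL_PIN))
                        return false;
                /* Only anonymous pages can be CoW-shared across fork(). */
                if (!PageAnon(page))
                        return false;
                /* Shared (!AnonExclusive) anon pages must be unshared. */
                return !PageAnonExclusive(page);
        }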
    
    So we didn't care before, and we still don't for now.  But we will care
    once slow-gup is switched over to use hugetlb_follow_page_mask().  We
    will also care about returning -EMLINK properly, as that is the
    GUP-internal API for "we should unshare".  It is not really needed for
    the follow-page path, though.
    
    While at it, switch try_grab_page() to use WARN_ON_ONCE(), to make it
    clear that it should simply never fail.  When an error does happen,
    capture the errno instead of setting page==NULL.
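
    The practical consequence for callers is the usual ERR_PTR() convention:
    the return value now distinguishes three cases instead of two.  A
    hypothetical caller sketch, assuming the three-argument
    hugetlb_follow_page_mask(vma, address, flags) signature in this tree:

        page = hugetlb_follow_page_mask(vma, address, flags);
        if (IS_ERR(page)) {
                /* Hard error, e.g. -EMLINK ("unshare first"). */
                return PTR_ERR(page);
        } else if (!page) {
                /* Nothing mapped, or a CoW/unshare fault is needed. */
        }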
    
    Link: https://lkml.kernel.org/r/20230628215310.73782-3-peterx@xxxxxxxxxx
    Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
    Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
    Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
    Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
    Cc: Hugh Dickins <hughd@xxxxxxxxxx>
    Cc: James Houghton <jthoughton@xxxxxxxxxx>
    Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
    Cc: John Hubbard <jhubbard@xxxxxxxxxx>
    Cc: Kirill A . Shutemov <kirill@xxxxxxxxxxxxx>
    Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
    Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
    Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
    Cc: Vlastimil Babka <vbabka@xxxxxxx>
    Cc: Yang Shi <shy828301@xxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Stable-dep-of: 426056efe835 ("mm/hugetlb: use nth_page() in place of direct struct page manipulation")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 097b81c37597e..d231f23088a77 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6521,13 +6521,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	struct page *page = NULL;
 	spinlock_t *ptl;
 	pte_t *pte, entry;
-
-	/*
-	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
-	 * follow_hugetlb_page().
-	 */
-	if (WARN_ON_ONCE(flags & FOLL_PIN))
-		return NULL;
+	int ret;
 
 	hugetlb_vma_lock_read(vma);
 	pte = hugetlb_walk(vma, haddr, huge_page_size(h));
@@ -6537,8 +6531,23 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 	ptl = huge_pte_lock(h, mm, pte);
 	entry = huge_ptep_get(pte);
 	if (pte_present(entry)) {
-		page = pte_page(entry) +
-				((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+		page = pte_page(entry);
+
+		if (!huge_pte_write(entry)) {
+			if (flags & FOLL_WRITE) {
+				page = NULL;
+				goto out;
+			}
+
+			if (gup_must_unshare(vma, flags, page)) {
+				/* Tell the caller to do unsharing */
+				page = ERR_PTR(-EMLINK);
+				goto out;
+			}
+		}
+
+		page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+
 		/*
 		 * Note that page may be a sub-page, and with vmemmap
 		 * optimizations the page struct may be read only.
@@ -6548,8 +6557,10 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		 * try_grab_page() should always be able to get the page here,
 		 * because we hold the ptl lock and have verified pte_present().
 		 */
-		if (try_grab_page(page, flags)) {
-			page = NULL;
+		ret = try_grab_page(page, flags);
+
+		if (WARN_ON_ONCE(ret)) {
+			page = ERR_PTR(ret);
 			goto out;
 		}
 	}


