The patch titled
     Subject: mm/gup: handle hugepd for follow_page()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-gup-handle-hugepd-for-follow_page.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-gup-handle-hugepd-for-follow_page.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Peter Xu <peterx@xxxxxxxxxx>
Subject: mm/gup: handle hugepd for follow_page()
Date: Thu, 21 Mar 2024 18:08:01 -0400

Hugepd is so far only used on PowerPC, on 4K page size kernels where the
hash MMU is in use.  follow_page_mask() used to rely on the hugetlb APIs
to access hugepd entries.  Teach follow_page_mask() to handle hugepd
directly.

With the previous refactoring of the fast-gup gup_huge_pd(), most of that
code can be reused.  Some of it is not needed for follow_page(): for
example, gup_hugepte() tries to detect pgtable entry changes, which can
never happen with slow gup (the pgtable lock is held), but performing the
check anyway is harmless.

Since follow_page() only ever fetches one page, setting the end to
"address + PAGE_SIZE" suffices.  We still walk the pgtable only once per
hugetlb page, by setting ctx->page_mask properly.

One thing worth mentioning is that on Power8 hash MMUs, some levels'
_bad() helpers report is_hugepd() entries as bad.  I think this applies
at least to the PUD level on Power8 with a 4K page size, meaning that
feeding a hugepd entry to pud_bad() yields a false positive.  Leave that
alone for now, since it is arch-specific and I am reluctant to touch it.
It is not a problem for this patch as long as hugepd entries are detected
before any bad pgtable entries.
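To make the page_mask contract concrete: follow_hugepd() below stores
(1U << huge_page_order(h)) - 1 into ctx->page_mask, and GUP-style callers
of follow_page_mask() can use that mask to work out which small page of
the huge page was returned and how many small pages remain before another
walk is needed.  The following is a minimal user-space sketch of that
arithmetic, not kernel code; the 16M huge page size, the 4K PAGE_SHIFT
and the sample address are assumptions chosen only for illustration.

/*
 * Sketch only; not kernel code.  Mirrors how a mask like the
 * ctx->page_mask set by follow_hugepd() can be consumed by a
 * GUP-style caller.
 */
#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed 4K base pages, as in the hash MMU case */

int main(void)
{
	/* assumed example: a 16M huge page of 4K base pages -> order 12 */
	unsigned int order = 12;
	/* what follow_hugepd() would store in ctx->page_mask */
	unsigned long page_mask = (1UL << order) - 1;
	/* arbitrary sample address inside the huge mapping */
	unsigned long addr = 0x10123000UL;

	/* which small page of the huge page does 'addr' land in? */
	unsigned long subpage_idx = (addr >> PAGE_SHIFT) & page_mask;
	/* how many small pages remain until the end of this huge page? */
	unsigned long remaining = page_mask - subpage_idx + 1;

	printf("subpage index: %lu, pages left in this huge page: %lu\n",
	       subpage_idx, remaining);
	return 0;
}

This is also why one walk per hugetlb page is enough even though
follow_page() itself only asks for one page: the mask tells the caller
how far the returned entry extends.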
Link: https://lkml.kernel.org/r/20240321220802.679544-12-peterx@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Andrew Jones <andrew.jones@xxxxxxxxx>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxx>
Cc: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: James Houghton <jthoughton@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/gup.c |   73 +++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 66 insertions(+), 7 deletions(-)

--- a/mm/gup.c~mm-gup-handle-hugepd-for-follow_page
+++ a/mm/gup.c
@@ -30,6 +30,11 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx);
+
 static inline void sanity_check_pinned_pages(struct page **pages,
 					     unsigned long npages)
 {
@@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(stru
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+				     address, PMD_SHIFT, flags, ctx);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +929,9 @@ static struct page *follow_pud_mask(stru
 	pud = READ_ONCE(*pudp);
 	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+		return follow_hugepd(vma, __hugepd(pud_val(pud)),
+				     address, PUD_SHIFT, flags, ctx);
 	if (pud_leaf(pud)) {
 		ptl = pud_lock(mm, pudp);
 		page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -944,10 +955,13 @@ static struct page *follow_p4d_mask(stru
 
 	p4dp = p4d_offset(pgdp, address);
 	p4d = READ_ONCE(*p4dp);
-	if (!p4d_present(p4d))
-		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_leaf(p4d));
-	if (unlikely(p4d_bad(p4d)))
+
+	if (unlikely(is_hugepd(__hugepd(p4d_val(p4d)))))
+		return follow_hugepd(vma, __hugepd(p4d_val(p4d)),
+				     address, P4D_SHIFT, flags, ctx);
+
+	if (!p4d_present(p4d) || p4d_bad(p4d))
 		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4dp, flags, ctx);
@@ -981,7 +995,7 @@ static struct page *follow_page_mask(str
 			      unsigned long address, unsigned int flags,
 			      struct follow_page_context *ctx)
 {
-	pgd_t *pgd;
+	pgd_t *pgd, pgdval;
 	struct mm_struct *mm = vma->vm_mm;
 
 	ctx->page_mask = 0;
@@ -996,11 +1010,17 @@ static struct page *follow_page_mask(str
 						&ctx->page_mask);
 
 	pgd = pgd_offset(mm, address);
+	pgdval = *pgd;
 
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval)))))
+		page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)),
+				     address, PGDIR_SHIFT, flags, ctx);
+	else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		page = no_page_table(vma, flags, address);
+	else
+		page = follow_p4d_mask(vma, address, pgd, flags, ctx);
 
-	return follow_p4d_mask(vma, address, pgd, flags, ctx);
+	return page;
 }
 
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -3037,6 +3057,37 @@ static int gup_huge_pd(hugepd_t hugepd,
 
 	return 1;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
 #else
 static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		unsigned int pdshift, unsigned long end, unsigned int flags,
@@ -3044,6 +3095,14 @@ static inline int gup_huge_pd(hugepd_t h
 {
 	return 0;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif /* CONFIG_ARCH_HAS_HUGEPD */
 
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
_

Patches currently in -mm which might be from peterx@xxxxxxxxxx are

mm-memory-fix-missing-pte-marker-for-page-on-pte-zaps.patch
mm-hmm-process-pud-swap-entry-without-pud_huge.patch
mm-gup-cache-p4d-in-follow_p4d_mask.patch
mm-gup-check-p4d-presence-before-going-on.patch
mm-x86-change-pxd_huge-behavior-to-exclude-swap-entries.patch
mm-sparc-change-pxd_huge-behavior-to-exclude-swap-entries.patch
mm-arm-use-macros-to-define-pmd-pud-helpers.patch
mm-arm-redefine-pmd_huge-with-pmd_leaf.patch
mm-arm64-merge-pxd_huge-and-pxd_leaf-definitions.patch
mm-powerpc-redefine-pxd_huge-with-pxd_leaf.patch
mm-gup-merge-pxd-huge-mapping-checks.patch
mm-treewide-replace-pxd_huge-with-pxd_leaf.patch
mm-treewide-remove-pxd_huge.patch
mm-arm-remove-pmd_thp_or_huge.patch
mm-document-pxd_leaf-api.patch
mm-kconfig-config_pgtable_has_huge_leaves.patch
mm-hugetlb-declare-hugetlbfs_pagecache_present-non-static.patch
mm-make-hpage_pxd_-macros-even-if-thp.patch
mm-introduce-vma_pgtable_walk_beginend.patch
mm-gup-drop-folio_fast_pin_allowed-in-hugepd-processing.patch
mm-gup-refactor-record_subpages-to-find-1st-small-page.patch
mm-gup-handle-hugetlb-for-no_page_table.patch
mm-gup-cache-pudp-in-follow_pud_mask.patch
mm-gup-handle-huge-pud-for-follow_pud_mask.patch
mm-gup-handle-huge-pmd-for-follow_pmd_mask.patch
mm-gup-handle-hugepd-for-follow_page.patch
mm-gup-handle-hugetlb-in-the-generic-follow_page_mask-code.patch