The patch titled
     Subject: mm/arch: provide pud_pfn() fallback
has been added to the -mm mm-unstable branch.  Its filename is
     mm-arch-provide-pud_pfn-fallback.patch

This patch will shortly appear at
  https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-arch-provide-pud_pfn-fallback.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Peter Xu <peterx@xxxxxxxxxx>
Subject: mm/arch: provide pud_pfn() fallback
Date: Wed, 27 Mar 2024 11:23:24 -0400

The comment in the code explains the reasons.  We took a different
approach compared to pmd_pfn() by providing a fallback function.

Another option is to provide some lower-level config options (compare to
HUGETLB_PAGE or THP) to identify which layer an arch can support for such
huge mappings.  However, that would be overkill.

Link: https://lkml.kernel.org/r/20240327152332.950956-6-peterx@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Andrew Jones <andrew.jones@xxxxxxxxx>
Cc: Aneesh Kumar K.V (IBM) <aneesh.kumar@xxxxxxxxxx>
Cc: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: James Houghton <jthoughton@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Yang Shi <shy828301@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/riscv/include/asm/pgtable.h    |    1 +
 arch/s390/include/asm/pgtable.h     |    1 +
 arch/sparc/include/asm/pgtable_64.h |    1 +
 arch/x86/include/asm/pgtable.h      |    1 +
 include/linux/pgtable.h             |   10 ++++++++++
 5 files changed, 14 insertions(+)

--- a/arch/riscv/include/asm/pgtable.h~mm-arch-provide-pud_pfn-fallback
+++ a/arch/riscv/include/asm/pgtable.h
@@ -642,6 +642,7 @@ static inline unsigned long pmd_pfn(pmd_
 
 #define __pud_to_phys(pud)  (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
--- a/arch/s390/include/asm/pgtable.h~mm-arch-provide-pud_pfn-fallback
+++ a/arch/s390/include/asm/pgtable.h
@@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pu
 	return (unsigned long)__va(pud_val(pud) & origin_mask);
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
--- a/arch/sparc/include/asm/pgtable_64.h~mm-arch-provide-pud_pfn-fallback
+++ a/arch/sparc/include/asm/pgtable_64.h
@@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
 	return pte_val(pte) & _PAGE_PMD_HUGE;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	pte_t pte = __pte(pud_val(pud));
--- a/arch/x86/include/asm/pgtable.h~mm-arch-provide-pud_pfn-fallback
+++ a/arch/x86/include/asm/pgtable.h
@@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_
 	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
 }
 
+#define pud_pfn pud_pfn
 static inline unsigned long pud_pfn(pud_t pud)
 {
 	phys_addr_t pfn = pud_val(pud);
--- a/include/linux/pgtable.h~mm-arch-provide-pud_pfn-fallback
+++ a/include/linux/pgtable.h
@@ -1820,6 +1820,16 @@ typedef unsigned int pgtbl_mod_mask;
 #endif
 
 /*
+ * We always define pmd_pfn for all archs as it's used in lots of generic
+ * code.  Now it happens too for pud_pfn (and can happen for larger
+ * mappings too in the future; we're not there yet).  Instead of defining
+ * it for all archs (like pmd_pfn), provide a fallback.
+ */
+#ifndef pud_pfn
+#define pud_pfn(x) ({ BUILD_BUG(); 0; })
+#endif
+
+/*
  * Some architectures have MMUs that are configurable or selectable at boot
  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
  * helps to have a static maximum value.
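
For readers unfamiliar with the override-detection trick used above: the only
reason each arch adds "#define pud_pfn pud_pfn" next to its inline function is
so that the generic header's "#ifndef pud_pfn" can tell whether an arch
implementation exists, and install the BUILD_BUG() fallback only when it does
not.  The fragment below is a minimal, standalone userspace sketch of that
pattern, not kernel code; fake_pud_t, FAKE_PAGE_SHIFT and the negative-size
array standing in for BUILD_BUG() are made-up names for illustration only.

	#include <stdio.h>

	typedef struct { unsigned long val; } fake_pud_t;
	#define FAKE_PAGE_SHIFT 12

	/* "arch" side: provide the helper and announce it with a
	 * self-named macro, as the per-arch hunks in the patch do. */
	#define pud_pfn pud_pfn
	static inline unsigned long pud_pfn(fake_pud_t pud)
	{
		return pud.val >> FAKE_PAGE_SHIFT;
	}

	/* "generic" side: only install a build-breaking fallback when no
	 * arch definition was seen.  A negative-size array approximates
	 * BUILD_BUG() here, since this sketch builds in userspace. */
	#ifndef pud_pfn
	#define pud_pfn(x) ({ extern char fallback_used[-1]; 0UL; })
	#endif

	int main(void)
	{
		fake_pud_t pud = { .val = 3UL << FAKE_PAGE_SHIFT };
		printf("pfn = %lu\n", pud_pfn(pud));	/* prints "pfn = 3" */
		return 0;
	}

Because pud_pfn is a self-referential object-like macro, the preprocessor
expands it to itself exactly once, so calls still resolve to the inline
function while "#ifndef pud_pfn" can detect that a definition exists.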
_

Patches currently in -mm which might be from peterx@xxxxxxxxxx are

mm-hmm-process-pud-swap-entry-without-pud_huge.patch
mm-gup-cache-p4d-in-follow_p4d_mask.patch
mm-gup-check-p4d-presence-before-going-on.patch
mm-x86-change-pxd_huge-behavior-to-exclude-swap-entries.patch
mm-sparc-change-pxd_huge-behavior-to-exclude-swap-entries.patch
mm-arm-use-macros-to-define-pmd-pud-helpers.patch
mm-arm-redefine-pmd_huge-with-pmd_leaf.patch
mm-arm64-merge-pxd_huge-and-pxd_leaf-definitions.patch
mm-powerpc-redefine-pxd_huge-with-pxd_leaf.patch
mm-gup-merge-pxd-huge-mapping-checks.patch
mm-treewide-replace-pxd_huge-with-pxd_leaf.patch
mm-treewide-remove-pxd_huge.patch
mm-arm-remove-pmd_thp_or_huge.patch
mm-document-pxd_leaf-api.patch
selftests-mm-run_vmtestssh-fix-hugetlb-mem-size-calculation.patch
mm-kconfig-config_pgtable_has_huge_leaves.patch
mm-hugetlb-declare-hugetlbfs_pagecache_present-non-static.patch
mm-make-hpage_pxd_-macros-even-if-thp.patch
mm-introduce-vma_pgtable_walk_beginend.patch
mm-arch-provide-pud_pfn-fallback.patch
mm-gup-drop-folio_fast_pin_allowed-in-hugepd-processing.patch
mm-gup-refactor-record_subpages-to-find-1st-small-page.patch
mm-gup-handle-hugetlb-for-no_page_table.patch
mm-gup-cache-pudp-in-follow_pud_mask.patch
mm-gup-handle-huge-pud-for-follow_pud_mask.patch
mm-gup-handle-huge-pmd-for-follow_pmd_mask.patch
mm-gup-handle-hugepd-for-follow_page.patch
mm-gup-handle-hugetlb-in-the-generic-follow_page_mask-code.patch