The patch titled
     Subject: mm/arm64: support large pfn mappings
has been added to the -mm mm-unstable branch.  Its filename is
     mm-arm64-support-large-pfn-mappings.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-arm64-support-large-pfn-mappings.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Peter Xu <peterx@xxxxxxxxxx>
Subject: mm/arm64: support large pfn mappings
Date: Mon, 26 Aug 2024 16:43:52 -0400

Support huge pfnmaps by using bit 56 (PTE_SPECIAL) for "special" on
pmds/puds.  Provide the pmd/pud helpers to set/get the special bit.

One more thing is missing for arm64: the pxx_pgprot() helpers for
pmd/pud.  Add them too; they are mostly the same as the pte version,
just dropping the pfn field.  These helpers are essential for the new
follow_pfnmap*() API to report valid pgprot_t results.

Note that arm64 doesn't support huge PUDs yet, but it's still
straightforward to provide the pud helpers that we need here as well.
Only the PMD helpers will bring an immediate benefit until arm64
supports huge PUDs in general (e.g. in THPs).

Link: https://lkml.kernel.org/r/20240826204353.2228736-19-peterx@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Gavin Shan <gshan@xxxxxxxxxx>
Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Niklas Schnelle <schnelle@xxxxxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
Cc: Sven Schnelle <svens@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm64/Kconfig               |    1 +
 arch/arm64/include/asm/pgtable.h |   29 +++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

--- a/arch/arm64/include/asm/pgtable.h~mm-arm64-support-large-pfn-mappings
+++ a/arch/arm64/include/asm/pgtable.h
@@ -578,6 +578,14 @@ static inline pmd_t pmd_mkdevmap(pmd_t p
 	return pte_pmd(set_pte_bit(pmd_pte(pmd), __pgprot(PTE_DEVMAP)));
 }
 
+#ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+#define pmd_special(pte)	(!!((pmd_val(pte) & PTE_SPECIAL)))
+static inline pmd_t pmd_mkspecial(pmd_t pmd)
+{
+	return set_pmd_bit(pmd, __pgprot(PTE_SPECIAL));
+}
+#endif
+
 #define __pmd_to_phys(pmd)	__pte_to_phys(pmd_pte(pmd))
 #define __phys_to_pmd_val(phys)	__phys_to_pte_val(phys)
 #define pmd_pfn(pmd)		((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT)
@@ -595,6 +603,27 @@ static inline pmd_t pmd_mkdevmap(pmd_t p
 #define pud_pfn(pud)		((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT)
 #define pfn_pud(pfn,prot)	__pud(__phys_to_pud_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
 
+#ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+#define pud_special(pte)	pte_special(pud_pte(pud))
+#define pud_mkspecial(pte)	pte_pud(pte_mkspecial(pud_pte(pud)))
+#endif
+
+#define pmd_pgprot pmd_pgprot
+static inline pgprot_t pmd_pgprot(pmd_t pmd)
+{
+	unsigned long pfn = pmd_pfn(pmd);
+
+	return __pgprot(pmd_val(pfn_pmd(pfn, __pgprot(0))) ^ pmd_val(pmd));
+}
+
+#define pud_pgprot pud_pgprot
+static inline pgprot_t pud_pgprot(pud_t pud)
+{
+	unsigned long pfn = pud_pfn(pud);
+
+	return __pgprot(pud_val(pfn_pud(pfn, __pgprot(0))) ^ pud_val(pud));
+}
+
 static inline void __set_pte_at(struct mm_struct *mm,
 				unsigned long __always_unused addr,
 				pte_t *ptep, pte_t pte, unsigned int nr)
--- a/arch/arm64/Kconfig~mm-arm64-support-large-pfn-mappings
+++ a/arch/arm64/Kconfig
@@ -99,6 +99,7 @@ config ARM64
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select ARCH_SUPPORTS_HUGE_PFNMAP if TRANSPARENT_HUGEPAGE
 	select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
_

Patches currently in -mm which might be from peterx@xxxxxxxxxx are

mm-dax-dump-start-address-in-fault-handler.patch
mm-mprotect-push-mmu-notifier-to-puds.patch
mm-powerpc-add-missing-pud-helpers.patch
mm-x86-make-pud_leaf-only-care-about-pse-bit.patch
mm-x86-implement-arch_check_zapped_pud.patch
mm-x86-add-missing-pud-helpers.patch
mm-mprotect-fix-dax-pud-handlings.patch
mm-introduce-arch_supports_huge_pfnmap-and-special-bits-to-pmd-pud.patch
mm-drop-is_huge_zero_pud.patch
mm-mark-special-bits-for-huge-pfn-mappings-when-inject.patch
mm-allow-thp-orders-for-pfnmaps.patch
mm-gup-detect-huge-pfnmap-entries-in-gup-fast.patch
mm-pagewalk-check-pfnmap-for-folio_walk_start.patch
mm-fork-accept-huge-pfnmap-entries.patch
mm-always-define-pxx_pgprot.patch
mm-new-follow_pfnmap-api.patch
kvm-use-follow_pfnmap-api.patch
s390-pci_mmio-use-follow_pfnmap-api.patch
mm-x86-pat-use-the-new-follow_pfnmap-api.patch
vfio-use-the-new-follow_pfnmap-api.patch
acrn-use-the-new-follow_pfnmap-api.patch
mm-access_process_vm-use-the-new-follow_pfnmap-api.patch
mm-remove-follow_pte.patch
mm-x86-support-large-pfn-mappings.patch
mm-arm64-support-large-pfn-mappings.patch
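
For readers who want to see the pxx_pgprot() idea from the patch in
isolation, here is a minimal, self-contained sketch (not part of the
patch): it builds a "pfn-only" entry and XORs it against the full entry,
which leaves only the attribute bits, the same trick pmd_pgprot() and
pud_pgprot() above rely on.  PAGE_SHIFT, PHYS_MASK and FAKE_SPECIAL below
are hypothetical stand-ins, not the real arm64 descriptor layout.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PHYS_MASK	0x0000fffffffff000ULL	/* hypothetical pfn field */
#define FAKE_SPECIAL	(1ULL << 56)		/* stand-in for a "special" bit */

/* Build an entry from a pfn plus attribute bits, like pfn_pmd(pfn, prot). */
static uint64_t mk_entry(uint64_t pfn, uint64_t prot)
{
	return ((pfn << PAGE_SHIFT) & PHYS_MASK) | prot;
}

int main(void)
{
	uint64_t prot = FAKE_SPECIAL | 0x3;	/* some attribute bits */
	uint64_t entry = mk_entry(0x12345, prot);
	uint64_t pfn = (entry & PHYS_MASK) >> PAGE_SHIFT;

	/* pmd_pgprot()-style recovery: "pfn-only" entry XOR full entry. */
	uint64_t recovered = mk_entry(pfn, 0) ^ entry;

	printf("recovered prot = 0x%016llx (%s)\n",
	       (unsigned long long)recovered,
	       recovered == prot ? "matches" : "mismatch");
	return recovered == prot ? 0 : 1;
}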