The patch titled
     Subject: mm/access_process_vm: use the new follow_pfnmap API
has been added to the -mm mm-unstable branch.  Its filename is
     mm-access_process_vm-use-the-new-follow_pfnmap-api.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-access_process_vm-use-the-new-follow_pfnmap-api.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Peter Xu <peterx@xxxxxxxxxx>
Subject: mm/access_process_vm: use the new follow_pfnmap API
Date: Mon, 26 Aug 2024 16:43:49 -0400

Use the new API that can understand huge pfn mappings.
Link: https://lkml.kernel.org/r/20240826204353.2228736-16-peterx@xxxxxxxxxx
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Gavin Shan <gshan@xxxxxxxxxx>
Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Niklas Schnelle <schnelle@xxxxxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
Cc: Sven Schnelle <svens@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

--- a/mm/memory.c~mm-access_process_vm-use-the-new-follow_pfnmap-api
+++ a/mm/memory.c
@@ -6538,34 +6538,34 @@ int generic_access_phys(struct vm_area_s
 	resource_size_t phys_addr;
 	unsigned long prot = 0;
 	void __iomem *maddr;
-	pte_t *ptep, pte;
-	spinlock_t *ptl;
 	int offset = offset_in_page(addr);
 	int ret = -EINVAL;
+	bool writable;
+	struct follow_pfnmap_args args = { .vma = vma, .address = addr };

retry:
-	if (follow_pte(vma, addr, &ptep, &ptl))
+	if (follow_pfnmap_start(&args))
 		return -EINVAL;
-	pte = ptep_get(ptep);
-	pte_unmap_unlock(ptep, ptl);
+	prot = pgprot_val(args.pgprot);
+	phys_addr = (resource_size_t)args.pfn << PAGE_SHIFT;
+	writable = args.writable;
+	follow_pfnmap_end(&args);

-	prot = pgprot_val(pte_pgprot(pte));
-	phys_addr = (resource_size_t)pte_pfn(pte) << PAGE_SHIFT;
-
-	if ((write & FOLL_WRITE) && !pte_write(pte))
+	if ((write & FOLL_WRITE) && !writable)
 		return -EINVAL;

 	maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
 	if (!maddr)
 		return -ENOMEM;

-	if (follow_pte(vma, addr, &ptep, &ptl))
+	if (follow_pfnmap_start(&args))
 		goto out_unmap;

-	if (!pte_same(pte, ptep_get(ptep))) {
-		pte_unmap_unlock(ptep, ptl);
+	if ((prot != pgprot_val(args.pgprot)) ||
+	    (phys_addr != (args.pfn << PAGE_SHIFT)) ||
+	    (writable != args.writable)) {
+		follow_pfnmap_end(&args);
 		iounmap(maddr);
-
 		goto retry;
 	}

@@ -6574,7 +6574,7 @@ retry:
 	else
 		memcpy_fromio(buf, maddr + offset, len);
 	ret = len;
-	pte_unmap_unlock(ptep, ptl);
+	follow_pfnmap_end(&args);
 out_unmap:
 	iounmap(maddr);
_

Patches currently in -mm which might be from peterx@xxxxxxxxxx are

mm-dax-dump-start-address-in-fault-handler.patch
mm-mprotect-push-mmu-notifier-to-puds.patch
mm-powerpc-add-missing-pud-helpers.patch
mm-x86-make-pud_leaf-only-care-about-pse-bit.patch
mm-x86-implement-arch_check_zapped_pud.patch
mm-x86-add-missing-pud-helpers.patch
mm-mprotect-fix-dax-pud-handlings.patch
mm-introduce-arch_supports_huge_pfnmap-and-special-bits-to-pmd-pud.patch
mm-drop-is_huge_zero_pud.patch
mm-mark-special-bits-for-huge-pfn-mappings-when-inject.patch
mm-allow-thp-orders-for-pfnmaps.patch
mm-gup-detect-huge-pfnmap-entries-in-gup-fast.patch
mm-pagewalk-check-pfnmap-for-folio_walk_start.patch
mm-fork-accept-huge-pfnmap-entries.patch
mm-always-define-pxx_pgprot.patch
mm-new-follow_pfnmap-api.patch
kvm-use-follow_pfnmap-api.patch
s390-pci_mmio-use-follow_pfnmap-api.patch
mm-x86-pat-use-the-new-follow_pfnmap-api.patch
vfio-use-the-new-follow_pfnmap-api.patch
acrn-use-the-new-follow_pfnmap-api.patch
mm-access_process_vm-use-the-new-follow_pfnmap-api.patch
mm-remove-follow_pte.patch
mm-x86-support-large-pfn-mappings.patch
mm-arm64-support-large-pfn-mappings.patch