The patch titled
     Subject: arm: allow pte_offset_map[_lock]() to fail
has been added to the -mm mm-unstable branch.  Its filename is
     arm-allow-pte_offset_map-to-fail.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/arm-allow-pte_offset_map-to-fail.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Hugh Dickins <hughd@xxxxxxxxxx>
Subject: arm: allow pte_offset_map[_lock]() to fail
Date: Thu, 8 Jun 2023 12:10:57 -0700 (PDT)

Patch series "arch: allow pte_offset_map[_lock]() to fail", v2.

What is it all about?  Some mmap_lock avoidance i.e. latency reduction.
Initially just for the case of collapsing shmem or file pages to THPs;
but likely to be relied upon later in other contexts e.g. freeing of
empty page tables (but that's not work I'm doing).  mmap_write_lock
avoidance when collapsing to anon THPs?  Perhaps, but again that's not
work I've done: a quick attempt was not as easy as the shmem/file case.

I would much prefer not to have to make these small but wide-ranging
changes for such a niche case; but failed to find another way, and have
heard that shmem MADV_COLLAPSE's usefulness is being limited by that
mmap_write_lock it currently requires.

These changes (though of course not these exact patches, and not all of
these architectures!) have been in Google's data centre kernel for three
years now: we do rely upon them.

What are the per-arch changes about?  Generally, two things.

One: the current mmap locking may not be enough to guard against that
tricky transition between pmd entry pointing to page table, and empty
pmd entry, and pmd entry pointing to huge page: pte_offset_map() will
have to validate the pmd entry for itself, returning NULL if no page
table is there.  What to do about that varies: often the nearby error
handling indicates just to skip it; but in some cases a "goto again"
looks appropriate (and if that risks an infinite loop, then there must
have been an oops, or pfn 0 mistaken for page table, before).

Deeper study of each site might show that 90% of them here in arch code
could only fail if there's corruption e.g. a transition to THP would be
surprising on an arch without HAVE_ARCH_TRANSPARENT_HUGEPAGE.  But given
the likely extension to freeing empty page tables, I have not limited
this set of changes to THP; and it has been easier, and sets a better
example, if each site is given appropriate handling.

Two: pte_offset_map() will need to do an rcu_read_lock(), with the
corresponding rcu_read_unlock() in pte_unmap().  But most architectures
never supported CONFIG_HIGHPTE, so some don't always call pte_unmap()
after pte_offset_map(), or have used userspace pte_offset_map() where
pte_offset_kernel() is more correct.  No problem in the current tree,
but a problem once an rcu_read_unlock() will be needed to keep balance.
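To make those two points concrete, here is a minimal sketch of the calling
pattern the per-arch sites are being moved towards.  It is an illustration
only, not code from this series, and example_walk_pte() is a hypothetical
caller: check for a NULL return, and balance every successful map with the
unmap that will carry the rcu_read_unlock().

/*
 * Hypothetical caller, for illustration only (not part of this patch):
 * pte_offset_map_lock() may now return NULL if the pmd entry no longer
 * points to a page table; every successful map must be balanced by
 * pte_unmap_unlock(), which will also drop the rcu read lock.
 */
#include <linux/mm.h>

static int example_walk_pte(struct mm_struct *mm, pmd_t *pmd,
			    unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte;

	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return 0;	/* no page table here: skip it */

	if (pte_present(*pte)) {
		/* ... inspect or modify the pte under its lock ... */
	}

	pte_unmap_unlock(pte, ptl);	/* balances the map and rcu read lock */
	return 1;
}

Where simply skipping would be wrong, the same NULL check feeds a
"goto again" retry instead, as described above.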
A common special case of that comes in arch/*/mm/hugetlbpage.c, if the
architecture supports hugetlb pages down at the lowest PTE level.
huge_pte_alloc() uses pte_alloc_map(), but generic hugetlb code does no
corresponding pte_unmap(); similarly for huge_pte_offset().  (A sketch of
the wrappers this implies appears after the patch list at the end of this
mail.)

In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.

Link: https://lkml.kernel.org/r/a4963be9-7aa6-350-66d0-2ba843e1af44@xxxxxxxxxx
Link: https://lkml.kernel.org/r/813429a1-204a-1844-eeae-7fd72826c28@xxxxxxxxxx
Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>
Cc: Chris Zankel <chris@xxxxxxxxxx>
Cc: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
Cc: Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>
Cc: Greg Ungerer <gerg@xxxxxxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Helge Deller <deller@xxxxxx>
Cc: John David Anglin <dave.anglin@xxxxxxxx>
Cc: John Paul Adrian Glaubitz <glaubitz@xxxxxxxxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Max Filippov <jcmvbkbc@xxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Michal Simek <monstr@xxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Cc: Russell King <linux@xxxxxxxxxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/arm/lib/uaccess_with_memcpy.c |    3 +++
 arch/arm/mm/fault-armv.c           |    5 ++++-
 arch/arm/mm/fault.c                |    3 +++
 3 files changed, 10 insertions(+), 1 deletion(-)

--- a/arch/arm/lib/uaccess_with_memcpy.c~arm-allow-pte_offset_map-to-fail
+++ a/arch/arm/lib/uaccess_with_memcpy.c
@@ -74,6 +74,9 @@ pin_page_for_write(const void __user *_a
 		return 0;
 
 	pte = pte_offset_map_lock(current->mm, pmd, addr, &ptl);
+	if (unlikely(!pte))
+		return 0;
+
 	if (unlikely(!pte_present(*pte) || !pte_young(*pte) ||
 	    !pte_write(*pte) || !pte_dirty(*pte))) {
 		pte_unmap_unlock(pte, ptl);
--- a/arch/arm/mm/fault-armv.c~arm-allow-pte_offset_map-to-fail
+++ a/arch/arm/mm/fault-armv.c
@@ -117,8 +117,11 @@ static int adjust_pte(struct vm_area_str
 	 * must use the nested version.  This also means we need to
 	 * open-code the spin-locking.
 	 */
-	ptl = pte_lockptr(vma->vm_mm, pmd);
 	pte = pte_offset_map(pmd, address);
+	if (!pte)
+		return 0;
+
+	ptl = pte_lockptr(vma->vm_mm, pmd);
 	do_pte_lock(ptl);
 
 	ret = do_adjust_pte(vma, address, pfn, pte);
--- a/arch/arm/mm/fault.c~arm-allow-pte_offset_map-to-fail
+++ a/arch/arm/mm/fault.c
@@ -85,6 +85,9 @@ void show_pte(const char *lvl, struct mm
 			break;
 
 		pte = pte_offset_map(pmd, addr);
+		if (!pte)
+			break;
+
 		pr_cont(", *pte=%08llx", (long long)pte_val(*pte));
 #ifndef CONFIG_ARM_LPAE
 		pr_cont(", *ppte=%08llx",
_

Patches currently in -mm which might be from hughd@xxxxxxxxxx are

arm-allow-pte_offset_map-to-fail.patch
arm64-allow-pte_offset_map-to-fail.patch
arm64-hugetlb-pte_alloc_huge-pte_offset_huge.patch
ia64-hugetlb-pte_alloc_huge-pte_offset_huge.patch
m68k-allow-pte_offset_map-to-fail.patch
microblaze-allow-pte_offset_map-to-fail.patch
mips-update_mmu_cache-can-replace-__update_tlb.patch
parisc-add-pte_unmap-to-balance-get_ptep.patch
parisc-unmap_uncached_pte-use-pte_offset_kernel.patch
parisc-hugetlb-pte_alloc_huge-pte_offset_huge.patch
powerpc-kvmppc_unmap_free_pmd-pte_offset_kernel.patch
powerpc-allow-pte_offset_map-to-fail.patch
powerpc-hugetlb-pte_alloc_huge.patch
riscv-hugetlb-pte_alloc_huge-pte_offset_huge.patch
s390-allow-pte_offset_map_lock-to-fail.patch
s390-gmap-use-pte_unmap_unlock-not-spin_unlock.patch
sh-hugetlb-pte_alloc_huge-pte_offset_huge.patch
sparc-hugetlb-pte_alloc_huge-pte_offset_huge.patch
sparc-allow-pte_offset_map-to-fail.patch
sparc-iounit-and-iommu-use-pte_offset_kernel.patch
x86-allow-get_locked_pte-to-fail.patch
x86-sme_populate_pgd-use-pte_offset_kernel.patch
xtensa-add-pte_unmap-to-balance-pte_offset_map.patch
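The *-hugetlb-pte_alloc_huge-pte_offset_huge.patch entries above address the
hugetlb special case described in the series description: rather than
teaching generic hugetlb code to pte_unmap(), the arch hugetlbpage.c code
moves to accessors that need no unmap.  The following is a sketch of what
such wrappers could look like, inferred from the patch titles rather than
copied from the series, so the actual helpers may differ:

#include <linux/mm.h>

/*
 * Sketch only, inferred from the patch titles above: hugetlb paths can use
 * the pte_offset_kernel() flavour, which needs no pte_unmap() afterwards to
 * keep the rcu read lock balanced.
 */
static inline pte_t *pte_offset_huge(pmd_t *pmd, unsigned long address)
{
	return pte_offset_kernel(pmd, address);
}

static inline pte_t *pte_alloc_huge(struct mm_struct *mm, pmd_t *pmd,
				    unsigned long address)
{
	/* pte_alloc() is nonzero on allocation failure, as in pte_alloc_map() */
	return pte_alloc(mm, pmd) ? NULL : pte_offset_huge(pmd, address);
}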