The patch titled
     Subject: sparc/mm: disable preemption in lazy mmu mode
has been added to the -mm mm-unstable branch.  Its filename is
     sparc-mm-disable-preemption-in-lazy-mmu-mode.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/sparc-mm-disable-preemption-in-lazy-mmu-mode.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Ryan Roberts <ryan.roberts@xxxxxxx>
Subject: sparc/mm: disable preemption in lazy mmu mode
Date: Mon, 3 Mar 2025 14:15:37 +0000

Since commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with
lazy updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode()
to be called without holding a page table lock (for the kernel mappings
case), and therefore it is possible that preemption may occur while in
the lazy mmu mode.  The Sparc lazy mmu implementation is not robust to
preemption since it stores the lazy mode state in a per-cpu structure
and does not attempt to manage that state on task switch.

Powerpc had the same issue and fixed it by explicitly disabling
preemption in arch_enter_lazy_mmu_mode() and re-enabling in
arch_leave_lazy_mmu_mode().  See commit b9ef323ea168 ("powerpc/64s:
Disable preemption in hash lazy mmu mode").

Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
same way here.
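
For reference, a minimal sketch of how the two hooks read with this
change applied (reconstructed from the hunks in the diff below plus the
existing function bodies they touch; the rest of arch/sparc/mm/tlb.c is
elided):

	void arch_enter_lazy_mmu_mode(void)
	{
		struct tlb_batch *tb;

		/* Pin the task to this CPU so the per-cpu batch stays ours. */
		preempt_disable();
		tb = this_cpu_ptr(&tlb_batch);
		tb->active = 1;
	}

	void arch_leave_lazy_mmu_mode(void)
	{
		struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);

		if (tb->tlb_nr)
			flush_tlb_pending();
		tb->active = 0;
		/* Batch flushed and marked inactive; migration is safe again. */
		preempt_enable();
	}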

Link: https://lkml.kernel.org/r/20250303141542.3371656-4-ryan.roberts@xxxxxxx
Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Andreas Larsson <andreas@xxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/sparc/mm/tlb.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/sparc/mm/tlb.c~sparc-mm-disable-preemption-in-lazy-mmu-mode
+++ a/arch/sparc/mm/tlb.c
@@ -52,8 +52,10 @@ out:
 
 void arch_enter_lazy_mmu_mode(void)
 {
-	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
+	struct tlb_batch *tb;
 
+	preempt_disable();
+	tb = this_cpu_ptr(&tlb_batch);
 	tb->active = 1;
 }
 
@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
 	if (tb->tlb_nr)
 		flush_tlb_pending();
 	tb->active = 0;
+	preempt_enable();
 }
 
 static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
_

Patches currently in -mm which might be from ryan.roberts@xxxxxxx are

mm-dont-skip-arch_sync_kernel_mappings-in-error-paths.patch
mm-ioremap-pass-pgprot_t-to-ioremap_prot-instead-of-unsigned-long.patch
mm-fix-lazy-mmu-docs-and-usage.patch
fs-proc-task_mmu-reduce-scope-of-lazy-mmu-region.patch
sparc-mm-disable-preemption-in-lazy-mmu-mode.patch
sparc-mm-avoid-calling-arch_enter-leave_lazy_mmu-in-set_ptes.patch
revert-x86-xen-allow-nesting-of-same-lazy-mode.patch