The patch titled
     Subject: lazy tlb: fix hotplug exit race with MMU_LAZY_TLB_SHOOTDOWN
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     lazy-tlb-fix-hotplug-exit-race-with-mmu_lazy_tlb_shootdown.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lazy-tlb-fix-hotplug-exit-race-with-mmu_lazy_tlb_shootdown.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Nicholas Piggin <npiggin@xxxxxxxxx>
Subject: lazy tlb: fix hotplug exit race with MMU_LAZY_TLB_SHOOTDOWN
Date: Wed, 24 May 2023 16:04:54 +1000

CPU unplug first calls __cpu_disable(), and that's where powerpc calls
cleanup_cpu_mmu_context(), which clears this CPU from mm_cpumask() of
all mms in the system.  However, this CPU may still be using a lazy tlb
mm, and its mm_cpumask bit will be cleared from that mm.  The CPU does
not switch away from the lazy tlb mm until arch_cpu_idle_dead() calls
idle_task_exit().

If that user mm exits in this window, it will not be subject to the
lazy tlb mm shootdown and may be freed while it is still in use as a
lazy mm by the CPU that is being unplugged (see the sketch at the end
of this changelog).

cleanup_cpu_mmu_context() could be moved later, but it looks better to
move the lazy tlb mm switching earlier.  The problem with doing the
lazy mm switching in idle_task_exit() is explained in commit
bf2c59fce4074 ("sched/core: Fix illegal RCU from offline CPUs"), which
added a wart to switch away from the mm but leave it set in active_mm
to be cleaned up later.

So instead, switch away from the lazy tlb mm on the stopper kthread
before the CPU is taken down.  This CPU will never switch to a user
thread from this point, so it has no chance to pick up a new lazy tlb
mm.  This removes the lazy tlb mm handling wart in CPU unplug.

idle_task_exit() remains to reduce churn in the patch.  It could be
removed entirely after this, because finish_cpu() makes a similar
check.  finish_cpu() itself is not strictly needed either, because
init_mm will never have its refcount drop to zero.  But it is
conceptually nicer to keep it than to have the idle thread drop the
reference on the mm it is using.
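To illustrate, here is a rough sketch of the window on powerpc (an
illustrative interleaving pieced together from the description above,
not a captured trace):

    CPU being unplugged                 CPU where the user mm exits
    -------------------                 ---------------------------
    __cpu_disable()
      cleanup_cpu_mmu_context()
        clears this CPU from
        mm_cpumask() of all mms
                                        user mm exits; the lazy tlb mm
                                        shootdown consults mm_cpumask()
                                        and skips this CPU (its bit is
                                        already clear)
                                        mm is freed
    ...this CPU keeps using the freed
    mm as its lazy tlb mm...
    arch_cpu_idle_dead()
      idle_task_exit()    <- too late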
Link: https://lkml.kernel.org/r/20230524060455.147699-1-npiggin@xxxxxxxxx
Fixes: 2655421ae69fa ("lazy tlb: shoot lazies, non-refcounting lazy tlb mm reference handling scheme")
Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/sched/hotplug.h |    2 ++
 kernel/cpu.c                  |   11 +++++++----
 kernel/sched/core.c           |   24 +++++++++++++++++++-----
 3 files changed, 28 insertions(+), 9 deletions(-)

--- a/include/linux/sched/hotplug.h~lazy-tlb-fix-hotplug-exit-race-with-mmu_lazy_tlb_shootdown
+++ a/include/linux/sched/hotplug.h
@@ -19,8 +19,10 @@ extern int sched_cpu_dying(unsigned int
 #endif
 
 #ifdef CONFIG_HOTPLUG_CPU
+extern void idle_task_prepare_exit(void);
 extern void idle_task_exit(void);
 #else
+static inline void idle_task_prepare_exit(void) {}
 static inline void idle_task_exit(void) {}
 #endif
 
--- a/kernel/cpu.c~lazy-tlb-fix-hotplug-exit-race-with-mmu_lazy_tlb_shootdown
+++ a/kernel/cpu.c
@@ -618,12 +618,13 @@ static int finish_cpu(unsigned int cpu)
 	struct mm_struct *mm = idle->active_mm;
 
 	/*
-	 * idle_task_exit() will have switched to &init_mm, now
-	 * clean up any remaining active_mm state.
+	 * idle_task_prepare_exit() ensured the idle task was using
+	 * &init_mm. Now that the CPU has stopped, drop that refcount.
 	 */
-	if (mm != &init_mm)
-		idle->active_mm = &init_mm;
+	WARN_ON(mm != &init_mm);
+	idle->active_mm = NULL;
 	mmdrop_lazy_tlb(mm);
+
 	return 0;
 }
 
@@ -1030,6 +1031,8 @@ static int take_cpu_down(void *_param)
 	enum cpuhp_state target = max((int)st->target, CPUHP_AP_OFFLINE);
 	int err, cpu = smp_processor_id();
 
+	idle_task_prepare_exit();
+
 	/* Ensure this CPU doesn't handle any more interrupts. */
 	err = __cpu_disable();
 	if (err < 0)
--- a/kernel/sched/core.c~lazy-tlb-fix-hotplug-exit-race-with-mmu_lazy_tlb_shootdown
+++ a/kernel/sched/core.c
@@ -9373,19 +9373,33 @@ void sched_setnuma(struct task_struct *p
  * Ensure that the idle task is using init_mm right before its CPU goes
  * offline.
  */
-void idle_task_exit(void)
+void idle_task_prepare_exit(void)
 {
 	struct mm_struct *mm = current->active_mm;
 
-	BUG_ON(cpu_online(smp_processor_id()));
-	BUG_ON(current != this_rq()->idle);
+	WARN_ON(!irqs_disabled());
 
 	if (mm != &init_mm) {
-		switch_mm(mm, &init_mm, current);
+		mmgrab_lazy_tlb(&init_mm);
+		current->active_mm = &init_mm;
+		switch_mm_irqs_off(mm, &init_mm, current);
 		finish_arch_post_lock_switch();
+		mmdrop_lazy_tlb(mm);
 	}
+	/* finish_cpu() will mmdrop the init_mm ref after this CPU stops */
+}
+
+/*
+ * After the CPU is offline, double check that it was previously switched to
+ * init_mm. This call can be removed because the condition is caught in
+ * finish_cpu() as well.
+ */
+void idle_task_exit(void)
+{
+	BUG_ON(cpu_online(smp_processor_id()));
+	BUG_ON(current != this_rq()->idle);
 
-	/* finish_cpu(), as ran on the BP, will clean up the active_mm state */
+	WARN_ON_ONCE(current->active_mm != &init_mm);
 }
 
 static int __balance_push_cpu_stop(void *arg)
_

Patches currently in -mm which might be from npiggin@xxxxxxxxx are

lazy-tlb-fix-hotplug-exit-race-with-mmu_lazy_tlb_shootdown.patch
lazy-tlb-consolidate-lazy-tlb-mm-switching.patch