Patch "powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm" has been added to the 5.4-stable tree

This is a note to let you know that I've just added the patch titled

    powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm

to the 5.4-stable tree, which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-64s-radix-fix-mm_cpumask-trimming-race-vs-kt.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 1930f7ff98bac2c0fb05b788451c2469b7f46df5
Author: Nicholas Piggin <npiggin@xxxxxxxxx>
Date:   Mon Sep 14 14:52:19 2020 +1000

    powerpc/64s/radix: Fix mm_cpumask trimming race vs kthread_use_mm
    
    [ Upstream commit a665eec0a22e11cdde708c1c256a465ebe768047 ]
    
    Commit 0cef77c7798a7 ("powerpc/64s/radix: flush remote CPUs out of
    single-threaded mm_cpumask") added a mechanism to trim the mm_cpumask of
    a process under certain conditions. One of its assumptions is that
    mm_users would not be incremented from outside the process context via
    mmget_not_zero() and the resulting reference then used to call
    kthread_use_mm().
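
    For illustration, the pattern that breaks this assumption looks roughly
    like the sketch below. grab_and_use_mm() is a made-up helper, not a real
    kernel function, and on older kernels such as 5.4 the
    kthread_use_mm()/kthread_unuse_mm() pair is spelled use_mm()/unuse_mm():

    #include <linux/kthread.h>
    #include <linux/mm_types.h>
    #include <linux/sched/mm.h>

    /* Made-up helper: do some work on behalf of another process's mm. */
    static void grab_and_use_mm(struct mm_struct *mm)
    {
            /* Take an mm_users reference from outside the owning process... */
            if (!mmget_not_zero(mm))
                    return;

            /*
             * ...and start running with it from a kernel thread. The mm is
             * now current->mm on this CPU, even though the flushing side may
             * have just decided the owning process was single-threaded.
             */
            kthread_use_mm(mm);

            /* ... touch user memory with copy_from_user()/copy_to_user() ... */

            kthread_unuse_mm(mm);
            mmput(mm);
    }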
    
    That invariant was broken by io_uring code (see previous sparc64 fix),
    but I'll point Fixes: to the original powerpc commit because we are
    changing that assumption going forward, so this will make backports
    match up.
    
    Fix this by no longer relying on that assumption: have each CPU check
    that the mm is not in use and clear its own bit from the mask only if
    the mm has not been switched to by the time the IPI is processed.
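
    The radix_tlb.c hunk below implements this; condensed and annotated
    (with the WARN and the long explanatory comment trimmed), the reworked
    per-CPU IPI handler reads roughly as follows:

    static void do_exit_flush_lazy_tlb(void *arg)
    {
            struct mm_struct *mm = arg;
            unsigned long pid = mm->context.id;

            /* The mm has been (or is being) switched to here: keep our bit. */
            if (current->mm == mm)
                    goto out_flush;

            /* A kernel thread holding mm only as the lazy tlb mm: drop it. */
            if (current->active_mm == mm) {
                    mmgrab(&init_mm);
                    current->active_mm = &init_mm;
                    switch_mm_irqs_off(mm, &init_mm, current);
                    mmdrop(mm);
            }

            /*
             * Each CPU clears only its own bit, rather than the sender
             * resetting the whole mask with mm_reset_thread_local().
             */
            atomic_dec(&mm->context.active_cpus);
            cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));

    out_flush:
            _tlbiel_pid(pid, RIC_FLUSH_ALL);
    }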
    
    This relies on commit 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB
    invalidate") and ARCH_WANT_IRQS_OFF_ACTIVATE_MM to disable irqs over mm
    switch sequences.
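
    In other words, the handler above can trust current->mm because a
    kernel thread adopts the mm with interrupts disabled. A rough sketch of
    the relevant part of kthread_use_mm() after 38cf307c1f20 follows (not a
    verbatim copy of the upstream function):

    void kthread_use_mm(struct mm_struct *mm)
    {
            struct task_struct *tsk = current;
            struct mm_struct *active_mm;

            task_lock(tsk);
            /*
             * Hold off the exit-flush IPI while ->mm is set and the MMU
             * context is switched, so do_exit_flush_lazy_tlb() never sees a
             * half-switched state: either current->mm == mm already, or the
             * switch has not started and the kthread picks up the flushed
             * state afterwards.
             */
            local_irq_disable();
            active_mm = tsk->active_mm;
            if (active_mm != mm) {
                    mmgrab(mm);
                    tsk->active_mm = mm;
            }
            tsk->mm = mm;
            switch_mm_irqs_off(active_mm, mm, tsk);
            local_irq_enable();
            task_unlock(tsk);

            if (active_mm != mm)
                    mmdrop(active_mm);
    }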
    
    Fixes: 0cef77c7798a7 ("powerpc/64s/radix: flush remote CPUs out of single-threaded mm_cpumask")
    Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
    Reviewed-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Depends-on: 38cf307c1f20 ("mm: fix kthread_use_mm() vs TLB invalidate")
    Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/20200914045219.3736466-5-npiggin@xxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index 7f3a8b9023254..02a1c18cdba3d 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -67,19 +67,6 @@ static inline int mm_is_thread_local(struct mm_struct *mm)
 		return false;
 	return cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm));
 }
-static inline void mm_reset_thread_local(struct mm_struct *mm)
-{
-	WARN_ON(atomic_read(&mm->context.copros) > 0);
-	/*
-	 * It's possible for mm_access to take a reference on mm_users to
-	 * access the remote mm from another thread, but it's not allowed
-	 * to set mm_cpumask, so mm_users may be > 1 here.
-	 */
-	WARN_ON(current->mm != mm);
-	atomic_set(&mm->context.active_cpus, 1);
-	cpumask_clear(mm_cpumask(mm));
-	cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
-}
 #else /* CONFIG_PPC_BOOK3S_64 */
 static inline int mm_is_thread_local(struct mm_struct *mm)
 {
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 67af871190c6d..b0f240afffa22 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -639,19 +639,29 @@ static void do_exit_flush_lazy_tlb(void *arg)
 	struct mm_struct *mm = arg;
 	unsigned long pid = mm->context.id;
 
+	/*
+	 * A kthread could have done a mmget_not_zero() after the flushing CPU
+	 * checked mm_is_singlethreaded, and be in the process of
+	 * kthread_use_mm when interrupted here. In that case, current->mm will
+	 * be set to mm, because kthread_use_mm() setting ->mm and switching to
+	 * the mm is done with interrupts off.
+	 */
 	if (current->mm == mm)
-		return; /* Local CPU */
+		goto out_flush;
 
 	if (current->active_mm == mm) {
-		/*
-		 * Must be a kernel thread because sender is single-threaded.
-		 */
-		BUG_ON(current->mm);
+		WARN_ON_ONCE(current->mm != NULL);
+		/* Is a kernel thread and is using mm as the lazy tlb */
 		mmgrab(&init_mm);
-		switch_mm(mm, &init_mm, current);
 		current->active_mm = &init_mm;
+		switch_mm_irqs_off(mm, &init_mm, current);
 		mmdrop(mm);
 	}
+
+	atomic_dec(&mm->context.active_cpus);
+	cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));
+
+out_flush:
 	_tlbiel_pid(pid, RIC_FLUSH_ALL);
 }
 
@@ -666,7 +676,6 @@ static void exit_flush_lazy_tlbs(struct mm_struct *mm)
 	 */
 	smp_call_function_many(mm_cpumask(mm), do_exit_flush_lazy_tlb,
 				(void *)mm, 1);
-	mm_reset_thread_local(mm);
 }
 
 void radix__flush_tlb_mm(struct mm_struct *mm)


