Patch "rcu: Mark additional concurrent load from ->cpu_no_qs.b.exp" has been added to the 6.4-stable tree

This is a note to let you know that I've just added the patch titled

    rcu: Mark additional concurrent load from ->cpu_no_qs.b.exp

to the 6.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     rcu-mark-additional-concurrent-load-from-cpu_no_qs.b.patch
and it can be found in the queue-6.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit c2695efafc87a2ebcdaa8213853f069251cdf6dc
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date:   Fri Apr 7 16:05:38 2023 -0700

    rcu: Mark additional concurrent load from ->cpu_no_qs.b.exp
    
    [ Upstream commit 9146eb25495ea8bfb5010192e61e3ed5805ce9ef ]
    
    The per-CPU rcu_data structure's ->cpu_no_qs.b.exp field is updated
    only on the instance corresponding to the current CPU, but can be read
    more widely.  Unmarked accesses are OK from the corresponding CPU, but
    only if interrupts are disabled, given that interrupt handlers can and
    do modify this field.
    
    Unfortunately, although the load from rcu_preempt_deferred_qs() is always
    carried out from the corresponding CPU, interrupts are not necessarily
    disabled.  This commit therefore upgrades this load to READ_ONCE.
    
    Similarly, the diagnostic access from synchronize_rcu_expedited_wait()
    might run with interrupts disabled and from some other CPU.  This commit
    therefore marks this load with data_race().
    
    Finally, the C-language access in rcu_preempt_ctxt_queue() is OK as
    is because interrupts are disabled and this load is always from the
    corresponding CPU.  This commit adds a comment giving the rationale for
    this access being safe.
    
    This data race was reported by KCSAN.  Not appropriate for backporting
    due to failure being unlikely.
    
    Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 3b7abb58157df..8239b39d945bd 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -643,7 +643,7 @@ static void synchronize_rcu_expedited_wait(void)
 					"O."[!!cpu_online(cpu)],
 					"o."[!!(rdp->grpmask & rnp->expmaskinit)],
 					"N."[!!(rdp->grpmask & rnp->expmaskinitnext)],
-					"D."[!!(rdp->cpu_no_qs.b.exp)]);
+					"D."[!!data_race(rdp->cpu_no_qs.b.exp)]);
 			}
 		}
 		pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 7b0fe741a0886..41021080ad258 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -257,6 +257,8 @@ static void rcu_preempt_ctxt_queue(struct rcu_node *rnp, struct rcu_data *rdp)
 	 * GP should not be able to end until we report, so there should be
 	 * no need to check for a subsequent expedited GP.  (Though we are
 	 * still in a quiescent state in any case.)
+	 *
+	 * Interrupts are disabled, so ->cpu_no_qs.b.exp cannot change.
 	 */
 	if (blkd_state & RCU_EXP_BLKD && rdp->cpu_no_qs.b.exp)
 		rcu_report_exp_rdp(rdp);
@@ -941,7 +943,7 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
 {
 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
 
-	if (rdp->cpu_no_qs.b.exp)
+	if (READ_ONCE(rdp->cpu_no_qs.b.exp))
 		rcu_report_exp_rdp(rdp);
 }
 


