On Fri, Feb 14, 2020 at 04:25:18PM -0800, paulmck@xxxxxxxxxx wrote:
> From: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
>
> The RCU tasks list of callbacks, rcu_tasks_cbs_head, is sampled locklessly
> by rcu_tasks_kthread() when waiting for work to do. This commit therefore
> applies READ_ONCE() to that lockless sampling and WRITE_ONCE() to the
> single potential store outside of rcu_tasks_kthread.
>
> This data race was reported by KCSAN. Not appropriate for backporting
> due to failure being unlikely.

What failure is possible here? AFAICT this is (again) one of them
load-compare-against-constant-discard patterns that are impossible to
mess up.

> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> ---
>  kernel/rcu/update.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 6c4b862..a27df76 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -528,7 +528,7 @@ void call_rcu_tasks(struct rcu_head *rhp, rcu_callback_t func)
>  	rhp->func = func;
>  	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
>  	needwake = !rcu_tasks_cbs_head;
> -	*rcu_tasks_cbs_tail = rhp;
> +	WRITE_ONCE(*rcu_tasks_cbs_tail, rhp);
>  	rcu_tasks_cbs_tail = &rhp->next;
>  	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
>  	/* We can't create the thread unless interrupts are enabled. */
> @@ -658,7 +658,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
>  		/* If there were none, wait a bit and start over. */
>  		if (!list) {
>  			wait_event_interruptible(rcu_tasks_cbs_wq,
> -						 rcu_tasks_cbs_head);
> +						 READ_ONCE(rcu_tasks_cbs_head));
>  			if (!rcu_tasks_cbs_head) {
>  				WARN_ON(signal_pending(current));
>  				schedule_timeout_interruptible(HZ/10);
> --
> 2.9.5
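
For anyone reading along without kernel/rcu/update.c open, below is a
minimal user-space sketch of the shape of code being debated: one side
appends to a singly linked callback list under a lock, the other side
polls the list head locklessly and only compares it against NULL. The
enqueue()/work_pending() names, the pthread mutex, and the cut-down
READ_ONCE()/WRITE_ONCE() macros are illustrative stand-ins, not the
actual kernel code or the kernel's real macros.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

/* Cut-down volatile-cast approximations of the kernel macros. */
#define READ_ONCE(x)      (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))

struct cb { struct cb *next; };

static pthread_mutex_t cbs_lock = PTHREAD_MUTEX_INITIALIZER;
static struct cb *cbs_head;               /* stored to only under cbs_lock */
static struct cb **cbs_tail = &cbs_head;

/* Producer: append under the lock, marking the one store that the
 * lockless reader can observe (the analogue of the WRITE_ONCE()
 * added in call_rcu_tasks() above). */
static void enqueue(struct cb *rhp)
{
	rhp->next = NULL;
	pthread_mutex_lock(&cbs_lock);
	WRITE_ONCE(*cbs_tail, rhp);
	cbs_tail = &rhp->next;
	pthread_mutex_unlock(&cbs_lock);
}

/* Consumer: the lockless "is there work?" poll.  The loaded pointer is
 * only compared against NULL and then thrown away, which is the
 * load-compare-against-constant-discard shape referred to above. */
static int work_pending(void)
{
	return READ_ONCE(cbs_head) != NULL;
}

int main(void)
{
	struct cb cb1;

	printf("pending before enqueue: %d\n", work_pending());
	enqueue(&cb1);
	printf("pending after enqueue:  %d\n", work_pending());
	return 0;
}

(Build with -pthread.)  With plain accesses instead of the marked ones,
the kernel's KCSAN instrumentation reports the concurrent accesses as a
data race; whether such a plain access can actually misbehave in this
pattern is exactly the question above.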