On Mon, Feb 17, 2020 at 03:06:57PM -0800, Paul E. McKenney wrote:
> commit 52324a7b8a025f47a1a1a9fbd23ffe59fa764764
> Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Date:   Fri Jan 3 11:42:05 2020 -0800
>
>     srcu: Hold srcu_struct ->lock when updating ->srcu_gp_seq
>
>     A read of the srcu_struct structure's ->srcu_gp_seq field should not
>     need READ_ONCE() when that structure's ->lock is held.  Except that this
>     lock is not always held when updating this field.  This commit therefore
>     acquires the lock around updates and removes a now-unneeded READ_ONCE().
>
>     This data race was reported by KCSAN.
>
>     Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

>
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index 119a373..c19c1df 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -450,7 +450,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
>  	spin_unlock_rcu_node(sdp);  /* Interrupts remain disabled. */
>  	smp_mb(); /* Order prior store to ->srcu_gp_seq_needed vs. GP start. */
>  	rcu_seq_start(&ssp->srcu_gp_seq);
> -	state = rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq));
> +	state = rcu_seq_state(ssp->srcu_gp_seq);
>  	WARN_ON_ONCE(state != SRCU_STATE_SCAN1);
>  }
>
> @@ -1130,7 +1130,9 @@ static void srcu_advance_state(struct srcu_struct *ssp)
>  			return; /* readers present, retry later. */
>  		}
>  		srcu_flip(ssp);
> +		spin_lock_irq_rcu_node(ssp);
>  		rcu_seq_set_state(&ssp->srcu_gp_seq, SRCU_STATE_SCAN2);
> +		spin_unlock_irq_rcu_node(ssp);
>  	}
>
>  	if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) == SRCU_STATE_SCAN2) {
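
For anyone following along, the rule the commit message applies is: once every
updater of a field runs with the same lock held, a reader that also holds that
lock may use a plain C access, while any remaining lockless readers still need
READ_ONCE() to avoid a KCSAN-reported data race.  Below is a minimal
user-space sketch of that pattern, not the SRCU code itself: the pthread mutex
stands in for the srcu_struct's ->lock, and all names (gp_state, gp_seq,
gp_set_state, ...) are hypothetical.

	/*
	 * Minimal sketch of the locking/marked-access pattern described
	 * above.  Illustrative only; names are not from srcutree.c.
	 */
	#include <pthread.h>

	/* Emulate the kernel's marked read for this user-space sketch. */
	#define READ_ONCE(x)	(*(volatile typeof(x) *)&(x))

	struct gp_state {
		pthread_mutex_t lock;
		unsigned long gp_seq;	/* updated only with ->lock held */
	};

	/* Writer: every update to ->gp_seq happens under ->lock. */
	static void gp_set_state(struct gp_state *gs, unsigned long newval)
	{
		pthread_mutex_lock(&gs->lock);
		gs->gp_seq = newval;
		pthread_mutex_unlock(&gs->lock);
	}

	/* Reader holding ->lock: a plain access suffices. */
	static unsigned long gp_read_locked(struct gp_state *gs)
	{
		unsigned long v;

		pthread_mutex_lock(&gs->lock);
		v = gs->gp_seq;		/* no READ_ONCE() needed here */
		pthread_mutex_unlock(&gs->lock);
		return v;
	}

	/* Lockless reader: still needs READ_ONCE() to avoid a data race. */
	static unsigned long gp_read_lockless(struct gp_state *gs)
	{
		return READ_ONCE(gs->gp_seq);
	}

In the patch, srcu_advance_state() is the updater that previously ran without
the lock, and srcu_gp_start() (called with ->lock held) is the reader whose
READ_ONCE() becomes unnecessary once that hole is closed.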