On Thu, Jun 27, 2024 at 3:42 AM Frederic Weisbecker <frederic@xxxxxxxxxx> wrote:
>
> Le Wed, Jun 26, 2024 at 10:49:58PM +0530, Neeraj upadhyay a écrit :
> > On Wed, Jun 26, 2024 at 7:58 PM Frederic Weisbecker <frederic@xxxxxxxxxx> wrote:
> > >
> > > Le Wed, Jun 12, 2024 at 02:14:14PM +0530, Neeraj upadhyay a écrit :
> > > > On Wed, Jun 5, 2024 at 3:58 AM Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
> > > > >
> > > > > From: Frederic Weisbecker <frederic@xxxxxxxxxx>
> > > > >
> > > > > When the grace period kthread checks the extended quiescent state
> > > > > counter of a CPU, full ordering is necessary to ensure that either:
> > > > >
> > > > > * If the GP kthread observes the remote target in an extended quiescent
> > > > >   state, then that target must observe all accesses prior to the current
> > > > >   grace period, including the current grace period sequence number, once
> > > > >   it exits that extended quiescent state.
> > > > >
> > > > > or:
> > > > >
> > > > > * If the GP kthread observes the remote target NOT in an extended
> > > > >   quiescent state, then the target further entering in an extended
> > > > >   quiescent state must observe all accesses prior to the current
> > > > >   grace period, including the current grace period sequence number, once
> > > > >   it enters that extended quiescent state.
> > > > >
> > > > > This ordering is enforced through a full memory barrier placed right
> > > > > before taking the first EQS snapshot. However this is superfluous
> > > > > because the snapshot is taken while holding the target's rnp lock which
> > > > > provides the necessary ordering through its chain of
> > > > > smp_mb__after_unlock_lock().
> > > > >
> > > > > Remove the needless explicit barrier before the snapshot and put a
> > > > > comment about the implicit barrier newly relied upon here.
> > > > >
> > > > > Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
> > > > > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > > > > ---
> > > > >  kernel/rcu/tree_exp.h | 8 +++++++-
> > > > >  1 file changed, 7 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > > > > index 8a1d9c8bd9f74..bec24ea6777e8 100644
> > > > > --- a/kernel/rcu/tree_exp.h
> > > > > +++ b/kernel/rcu/tree_exp.h
> > > > > @@ -357,7 +357,13 @@ static void __sync_rcu_exp_select_node_cpus(struct rcu_exp_work *rewp)
> > > > >  		    !(rnp->qsmaskinitnext & mask)) {
> > > > >  			mask_ofl_test |= mask;
> > > > >  		} else {
> > > > > -			snap = rcu_dynticks_snap(cpu);
> > > > > +			/*
> > > > > +			 * Full ordering against accesses prior current GP and
> > > > > +			 * also against current GP sequence number is enforced
> > > > > +			 * by current rnp locking with chained
> > > > > +			 * smp_mb__after_unlock_lock().
> > > >
> > > > Again, worth mentioning the chaining sites sync_exp_reset_tree() and
> > > > this function?
> > >
> > > How about this?
> >
> > Looks good to me, thanks!
>
> And similar to the previous one, a last minute edition:
>
> 	/*
> 	 * Full ordering between remote CPU's post idle accesses
> 	 * and updater's accesses prior to current GP (and also
> 	 * the started GP sequence number) is enforced by
> 	 * rcu_seq_start() implicit barrier, relayed by kworkers
> 	 * locking and even further by smp_mb__after_unlock_lock()
> 	 * barriers chained all the way throughout the rnp locking
> 	 * tree since sync_exp_reset_tree() and up to the current
> 	 * leaf rnp locking.
> 	 *
> 	 * Ordering between remote CPU's pre idle accesses and
> 	 * post grace period updater's accesses is enforced by the
> 	 * below acquire semantic.
> 	 */
>
> Still ok?
>

Yes, looks good, thanks.


Thanks
Neeraj

> Thanks.