On Tue, Sep 20, 2022 at 5:39 PM Frederic Weisbecker <frederic@xxxxxxxxxx> wrote:
>
> On Thu, Sep 15, 2022 at 01:58:24PM +0800, Pingfan Liu wrote:
> > During offlining, concurrent rcutree_offline_cpu() calls cannot be aware
> > of each other through ->qsmaskinitnext. But cpu_dying_mask carries that
> > information at that point and can be utilized.
> >
> > Besides, this includes a trivial change which removes the redundant call
> > to rcu_boost_kthread_setaffinity() in rcutree_dead_cpu(), since
> > rcutree_offline_cpu() can fully serve that purpose.
> >
> > Signed-off-by: Pingfan Liu <kernelfans@xxxxxxxxx>
> > Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> > Cc: David Woodhouse <dwmw@xxxxxxxxxxxx>
> > Cc: Frederic Weisbecker <frederic@xxxxxxxxxx>
> > Cc: Neeraj Upadhyay <quic_neeraju@xxxxxxxxxxx>
> > Cc: Josh Triplett <josh@xxxxxxxxxxxxxxxx>
> > Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> > Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
> > Cc: Lai Jiangshan <jiangshanlai@xxxxxxxxx>
> > Cc: Joel Fernandes <joel@xxxxxxxxxxxxxxxxx>
> > Cc: "Jason A. Donenfeld" <Jason@xxxxxxxxx>
> > To: rcu@xxxxxxxxxxxxxxx
> > ---
> >  kernel/rcu/tree.c        | 2 --
> >  kernel/rcu/tree_plugin.h | 6 ++++++
> >  2 files changed, 6 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 79aea7df4345..8a829b64f5b2 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -2169,8 +2169,6 @@ int rcutree_dead_cpu(unsigned int cpu)
> >  		return 0;
> >
> >  	WRITE_ONCE(rcu_state.n_online_cpus, rcu_state.n_online_cpus - 1);
> > -	/* Adjust any no-longer-needed kthreads. */
> > -	rcu_boost_kthread_setaffinity(rnp, -1);
> >  	// Stop-machine done, so allow nohz_full to disable tick.
> >  	tick_dep_clear(TICK_DEP_BIT_RCU);
> >  	return 0;
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index ef6d3ae239b9..e5afc63bd97f 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -1243,6 +1243,12 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
> >  		    cpu != outgoingcpu)
> >  			cpumask_set_cpu(cpu, cm);
> >  	cpumask_and(cm, cm, housekeeping_cpumask(HK_TYPE_RCU));
> > +	/*
> > +	 * For concurrent offlining, bit of qsmaskinitnext is not cleared yet.
>
> For clarification, the comment could be:
>
>      While concurrently offlining, rcu_report_dead() can race, making
>      ->qsmaskinitnext unstable. So rely on cpu_dying_mask which is stable
>      and already contains all the currently offlining CPUs.
>
It is a neat description. Thanks,

	Pingfan
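
[Editor's note: a minimal sketch of the idea under discussion. The actual
patch hunk is trimmed in the quote above, so the helper name and exact
placement below are illustrative, not taken from the patch itself. The
point is that after the boost kthread's affinity has been narrowed to
housekeeping CPUs, CPUs present in cpu_dying_mask can additionally be
masked out, since that mask is stable across the whole offlining sequence,
unlike ->qsmaskinitnext which concurrent rcu_report_dead() calls may
change underneath us.]

	#include <linux/cpumask.h>
	#include <linux/sched/isolation.h>

	/* Illustrative helper; not the function modified by the patch. */
	static void rcu_boost_affinity_exclude_dying(struct cpumask *cm)
	{
		/* Keep only housekeeping CPUs allowed to run RCU work... */
		cpumask_and(cm, cm, housekeeping_cpumask(HK_TYPE_RCU));
		/* ...and drop every CPU already marked as going down. */
		cpumask_andnot(cm, cm, cpu_dying_mask);
	}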