On Wed, Mar 22, 2023 at 08:44:56PM +0100, Frederic Weisbecker wrote:
> Callbacks can only be queued as lazy on NOCB CPUs, therefore iterating
> over the NOCB mask is enough for both counting and scanning. Just lock
> the mostly uncontended barrier mutex on counting as well in order to
> keep rcu_nocb_mask stable.
> 

Reviewed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>

thanks,

 - Joel

> Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
> ---
>  kernel/rcu/tree_nocb.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index a3dc7465b0b2..185c0c9a60d4 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1319,13 +1319,21 @@ lazy_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
>  	int cpu;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
> +
> +	/* Protect rcu_nocb_mask against concurrent (de-)offloading. */
> +	mutex_lock(&rcu_state.barrier_mutex);
> +
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  
>  		count += READ_ONCE(rdp->lazy_len);
>  	}
>  
> +	mutex_unlock(&rcu_state.barrier_mutex);
> +
>  	return count ? count : SHRINK_EMPTY;
>  }
>  
> @@ -1336,6 +1344,8 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	unsigned long flags;
>  	unsigned long count = 0;
>  
> +	if (WARN_ON_ONCE(!cpumask_available(rcu_nocb_mask)))
> +		return 0;
>  	/*
>  	 * Protect against concurrent (de-)offloading. Otherwise nocb locking
>  	 * may be ignored or imbalanced.
> @@ -1343,7 +1353,7 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	mutex_lock(&rcu_state.barrier_mutex);
>  
>  	/* Snapshot count of all CPUs */
> -	for_each_possible_cpu(cpu) {
> +	for_each_cpu(cpu, rcu_nocb_mask) {
>  		struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
>  		int _count;
>  
> -- 
> 2.34.1
> 
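
For readers outside the kernel tree, the pattern the patch relies on can be seen in a minimal, self-contained userspace sketch: walk only the CPUs present in a mask, and hold the mutex that guards updates to that mask so it cannot change mid-walk. The pthread mutex, fixed-size bitmask, and array names below are illustrative stand-ins chosen for this sketch, not the kernel's actual rcu_nocb_mask machinery.

	/* Minimal userspace sketch (not kernel code) of the pattern in the patch. */
	#include <pthread.h>
	#include <stdint.h>
	#include <stdio.h>

	#define NR_CPUS 8

	static uint64_t nocb_mask;               /* bit N set => CPU N is "NOCB" */
	static unsigned long lazy_len[NR_CPUS];  /* per-CPU lazy callback counts */
	static pthread_mutex_t barrier_mutex = PTHREAD_MUTEX_INITIALIZER;

	/* Sum lazy counts, visiting only CPUs in the mask, with the mask held stable. */
	static unsigned long lazy_count(void)
	{
		unsigned long count = 0;

		pthread_mutex_lock(&barrier_mutex);
		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			if (nocb_mask & (1ULL << cpu))
				count += lazy_len[cpu];
		}
		pthread_mutex_unlock(&barrier_mutex);

		return count;
	}

	int main(void)
	{
		nocb_mask = 0x0f;  /* pretend CPUs 0-3 are NOCB */
		lazy_len[1] = 3;
		lazy_len[2] = 5;
		printf("lazy callbacks: %lu\n", lazy_count());
		return 0;
	}

As in the patch, skipping CPUs outside the mask is correct only because the counts being summed can be nonzero solely on CPUs that are in the mask, and taking the same mutex that serializes mask changes keeps the iteration and the mask consistent.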