On Wed, Mar 22, 2023 at 08:44:55PM +0100, Frederic Weisbecker wrote:
> The ->lazy_len is only checked locklessly. Recheck again under the
> ->nocb_lock to avoid spending more time on flushing/waking if not
> necessary. The ->lazy_len can still increment concurrently (from 1 to
> infinity) but under the ->nocb_lock we at least know for sure if there
> are lazy callbacks at all (->lazy_len > 0).
>
> Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>

This one looks plausible, and might hold the answer to earlier questions.

							Thanx, Paul

> ---
>  kernel/rcu/tree_nocb.h | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index cb57e8312231..a3dc7465b0b2 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -1350,12 +1350,20 @@ lazy_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> 		if (!rcu_rdp_is_offloaded(rdp))
> 			continue;
>
> +		if (!READ_ONCE(rdp->lazy_len))
> +			continue;
> +
> +		rcu_nocb_lock_irqsave(rdp, flags);
> +		/*
> +		 * Recheck under the nocb lock. Since we are not holding the bypass
> +		 * lock we may still race with increments from the enqueuer but still
> +		 * we know for sure if there is at least one lazy callback.
> +		 */
> 		_count = READ_ONCE(rdp->lazy_len);
> -
> -		if (_count == 0)
> +		if (!_count) {
> +			rcu_nocb_unlock_irqrestore(rdp, flags);
> 			continue;
> -
> -		rcu_nocb_lock_irqsave(rdp, flags);
> +	        }
> 		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, jiffies, false));
> 		rcu_nocb_unlock_irqrestore(rdp, flags);
> 		wake_nocb_gp(rdp, false);
> --
> 2.34.1
>