On 30/09/21 00:10, Frederic Weisbecker wrote:
> Instead of hardcoding IRQ save and nocb lock, use the consolidated
> API.
>
> Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
> Cc: Valentin Schneider <valentin.schneider@xxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> Cc: Josh Triplett <josh@xxxxxxxxxxxxxxxx>
> Cc: Joel Fernandes <joel@xxxxxxxxxxxxxxxxx>
> Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
> Cc: Neeraj Upadhyay <neeraju@xxxxxxxxxxxxxx>
> Cc: Uladzislau Rezki <urezki@xxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>

Just one comment nit below.

Reviewed-by: Valentin Schneider <valentin.schneider@xxxxxxx>

> ---
>  kernel/rcu/tree.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index b1fc6e498d90..1971a4e15e96 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2492,8 +2492,7 @@ static void rcu_do_batch(struct rcu_data *rdp)

While at it:

- *  Extract the list of ready callbacks, disabling to prevent
+ *  Extract the list of ready callbacks, disabling IRQs to prevent

>  	 * races with call_rcu() from interrupt handlers.  Leave the
>  	 * callback counts, as rcu_barrier() needs to be conservative.
>  	 */
> -	local_irq_save(flags);
> -	rcu_nocb_lock(rdp);
> +	rcu_nocb_lock_irqsave(rdp, flags);
>  	WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
>  	pending = rcu_segcblist_n_cbs(&rdp->cblist);
>  	div = READ_ONCE(rcu_divisor);