On Wed, Jul 21, 2021 at 12:51:17PM +0100, Valentin Schneider wrote:
> Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
> ---
>  kernel/rcu/tree_plugin.h | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index ad0156b86937..6c3c4100da83 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -70,8 +70,7 @@ static bool rcu_rdp_is_offloaded(struct rcu_data *rdp)
>  		  !(lockdep_is_held(&rcu_state.barrier_mutex) ||
>  		    (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
>  		    rcu_lockdep_is_held_nocb(rdp) ||
> -		    (rdp == this_cpu_ptr(&rcu_data) &&
> -		     !(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible())) ||
> +		    (rdp == this_cpu_ptr(&rcu_data) && is_pcpu_safe()) ||

I fear that won't work. We really need any caller of rcu_rdp_is_offloaded()
on the local rdp to have preemption disabled, and not just migration
disabled, because we must protect against concurrent offloaded state
changes. The offloaded state is changed by a workqueue that executes on the
target rdp.

Here is a practical example where it matters:

           CPU 0
           -----
    // =======> task rcuc running
    rcu_core {
        rcu_nocb_lock_irqsave(rdp, flags) {
            if (!rcu_segcblist_is_offloaded(&rdp->cblist)) {
                // is not offloaded right now, so it's going
                // to just disable IRQs. Oh no wait:
                // preemption
                // ========> workqueue running
                rcu_nocb_rdp_offload();
                // ========> task rcuc resumes
                local_irq_disable();
            }
        }
        ....
        rcu_nocb_unlock_irqrestore(rdp, flags) {
            if (rcu_segcblist_is_offloaded(&rdp->cblist)) {
                // is offloaded right now, so:
                raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);

And that will explode, because that's an unbalanced unlock of nocb_lock.
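
For completeness, here is roughly the shape of the two helpers involved, so
it's obvious why the mismatch blows up. This is a simplified sketch written
from memory (the "_sketch" names are mine and the real helpers live in the
tree RCU nocb code), so treat it as an illustration rather than the exact
upstream code:

    /*
     * Simplified sketch of the nocb locking helpers. Both the lock side
     * and the unlock side test the offloaded state independently.
     */
    #define rcu_nocb_lock_irqsave_sketch(rdp, flags)			\
    do {								\
    	if (!rcu_segcblist_is_offloaded(&(rdp)->cblist))		\
    		/* Not offloaded: no nocb_lock, just disable IRQs */	\
    		local_irq_save(flags);					\
    	else								\
    		raw_spin_lock_irqsave(&(rdp)->nocb_lock, flags);	\
    } while (0)

    static void rcu_nocb_unlock_irqrestore_sketch(struct rcu_data *rdp,
    					      unsigned long flags)
    {
    	if (rcu_segcblist_is_offloaded(&rdp->cblist))
    		/* Offloaded *now*: releases nocb_lock even if the lock
    		 * side only did local_irq_save() */
    		raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
    	else
    		local_irq_restore(flags);
    }

The point being that each side re-evaluates the offloaded state, so if that
state flips in between (as in the trace above), the unlock side releases a
nocb_lock that was never acquired.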