On Tue, Jul 02, 2024 at 04:19:36PM -0700, Boqun Feng wrote:
> On Thu, May 30, 2024 at 03:45:52PM +0200, Frederic Weisbecker wrote:
> > Now that the (de-)offloading process can only apply to offline CPUs,
> > there is no more concurrency between rcu_core and nocb kthreads. Also
> > the mutation now happens on empty queues.
> >
> > Therefore the state machine can be reduced to a single bit called
> > SEGCBLIST_OFFLOADED. Simplify the transition as follows:
> >
> > * Upon offloading: queue the rdp to be added to the rcuog list and
> >   wait for the rcuog kthread to set the SEGCBLIST_OFFLOADED bit. Unpark
> >   the rcuo kthread.
> >
> > * Upon de-offloading: park the rcuo kthread. Queue the rdp to be removed
> >   from the rcuog list and wait for the rcuog kthread to clear the
> >   SEGCBLIST_OFFLOADED bit.
> >
> > Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
> > ---
> [...]
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index 24daf606de0c..72a2990d2087 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> [...]
> > @@ -1079,29 +1080,14 @@ static int rcu_nocb_rdp_deoffload(struct rcu_data *rdp)
> >  	 * but we stick to paranoia in this rare path.
> >  	 */
> >  	rcu_nocb_lock_irqsave(rdp, flags);
> > -	rcu_segcblist_clear_flags(&rdp->cblist, SEGCBLIST_KTHREAD_GP);
> > -	rcu_nocb_unlock_irqrestore(rdp, flags);
> > +	rcu_segcblist_clear_flags(&rdp->cblist, SEGCBLIST_OFFLOADED);
> > +	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);
> >
>
> Dropping rdp->nocb_lock unconditionally means we are the holder of it,
> right? If so, I think we'd better replace the above
> rcu_nocb_lock_irqsave() with raw_spin_lock_irqsave().

Good point, I'll do that.
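I.e., something along these lines (an untested sketch; the surrounding
context is as in the quoted hunk):

	/*
	 * Acquire unconditionally to match the unconditional
	 * raw_spin_unlock_irqrestore() below.
	 */
	raw_spin_lock_irqsave(&rdp->nocb_lock, flags);
	rcu_segcblist_clear_flags(&rdp->cblist, SEGCBLIST_OFFLOADED);
	raw_spin_unlock_irqrestore(&rdp->nocb_lock, flags);

Thanks.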