On Wed, Feb 19, 2025 at 06:58:36AM -0800, Paul E. McKenney wrote:
> On Sat, Feb 15, 2025 at 11:23:45PM +0100, Frederic Weisbecker wrote:
> > > Before. There was also some buggy debug code in play. Also, to get the
> > > failure, it was necessary to make TREE03 disable preemption, as stock
> > > TREE03 has an empty sync_sched_exp_online_cleanup() function.
> > >
> > > I am rerunning the test with a WARN_ON_ONCE() after the early exit from
> > > sync_sched_exp_online_cleanup(). Of course, lack of a failure does
> > > not necessarily indicate
> >
> > Cool, thanks!
>
> No failures. But might it be wise to put this WARN_ON_ONCE() in,
> let things go for a year or two, and complete the removal if it never
> triggers? Or is the lack-of-forward-progress warning enough?

Hmm, what prevents a WARN_ON_ONCE() placed after the early exit of
sync_sched_exp_online_cleanup() from triggering? All it takes is for
sync_sched_exp_online_cleanup() to execute between sync_exp_reset_tree()
and the point where __sync_rcu_exp_select_node_cpus() manages to send
an IPI.

But we can warn about the lack of forward progress after a few
iterations of the retry_ipi label in __sync_rcu_exp_select_node_cpus().

>
> > > > And if after, do we know why?
> > >
> > > Here are some (possibly bogus) possibilities that came to mind:
> > >
> > > 1.  There is some coming-online race that deprives the incoming
> > >     CPU of an IPI, but nevertheless marks that CPU as blocking the
> > >     current grace period.
> >
> > Arguably there is a tiny window between rcutree_report_cpu_starting()
> > and set_cpu_online() that could make ->qsmaskinitnext visible before
> > cpu_online() and therefore delay the IPI a bit. But I don't expect
> > it to take more than a jiffy to fill the gap. And if that is relevant,
> > note that only !PREEMPT_RCU is then "fixed" by
> > sync_sched_exp_online_cleanup() here.
>
> Agreed. And I vaguely recall that there was some difference due to
> preemptible RCU's ability to clean up at the next rcu_read_unlock(),
> though more recently, possibly deferred.

Perhaps at the time, but today at least I can't find any.

Thanks.
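
For concreteness, a minimal sketch of the forward-progress warning idea
above. This is not the in-tree __sync_rcu_exp_select_node_cpus(): the
retry threshold, the retry counter, and the try_send_exp_ipi() helper are
illustrative assumptions standing in for the real
smp_call_function_single() path in kernel/rcu/tree_exp.h.

#include <linux/kernel.h>
#include <linux/sched.h>

/* Assumed threshold: how many IPI retries before complaining. */
#define EXP_IPI_RETRIES_WARN	10

/* Hypothetical stand-in for the smp_call_function_single() path. */
extern bool try_send_exp_ipi(int cpu);

static void exp_ipi_with_progress_warning(int cpu)
{
	int nretries = 0;

retry_ipi:
	if (try_send_exp_ipi(cpu))
		return;		/* IPI accepted; nothing more to do. */
	/*
	 * No forward progress yet: warn (once) after spinning through
	 * the retry_ipi loop too many times for this CPU.
	 */
	WARN_ON_ONCE(++nretries >= EXP_IPI_RETRIES_WARN);
	schedule_timeout_idle(1);	/* Wait a jiffy before retrying. */
	goto retry_ipi;
}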
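
Similarly, a paraphrased sketch of where the WARN_ON_ONCE() after the
early exit would sit, so that it fires only when
sync_sched_exp_online_cleanup() actually finds work to do. The field
names follow the RCU tree data structures, but the early-exit condition
is simplified rather than copied from tree_exp.h.

static void sketch_online_cleanup(int cpu)
{
	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
	struct rcu_node *rnp = rdp->mynode;

	/* Early exit: no expedited QS is currently expected of this CPU. */
	if (!(READ_ONCE(rnp->expmask) & rdp->grpmask))
		return;

	/*
	 * Getting here means the cleanup still has real work to do. If
	 * this never fires across a long soak period, the function is a
	 * candidate for removal.
	 */
	WARN_ON_ONCE(1);

	/* ... the original cleanup work would follow here ... */
}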
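
And an assumed ordering sketch of the incoming-CPU window described
above (an interpretation of the bring-up sequence, not actual hotplug
code):

static void sketch_cpu_bringup(int cpu)
{
	rcutree_report_cpu_starting(cpu);
	/*
	 * ->qsmaskinitnext now shows this CPU, so a concurrent expedited
	 * grace period may already expect a quiescent state from it.
	 * But cpu_online(cpu) is still false here, so the IPI path in
	 * __sync_rcu_exp_select_node_cpus() can skip this CPU until
	 * set_cpu_online() below -- a window expected to close within
	 * about a jiffy.
	 */
	set_cpu_online(cpu, true);
}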