On Wed, Aug 24, 2022 at 09:20:50AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 24, 2022 at 09:53:11PM +0800, Pingfan Liu wrote:
> > On Tue, Aug 23, 2022 at 11:01 AM Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
> > > On Tue, Aug 23, 2022 at 09:50:56AM +0800, Pingfan Liu wrote:
> > > > On Sun, Aug 21, 2022 at 07:45:28PM -0700, Paul E. McKenney wrote:
> > > > > On Mon, Aug 22, 2022 at 10:15:16AM +0800, Pingfan Liu wrote:
> > > > > > In order to support parallel CPU offlining, the decrement of
> > > > > > rcu_state.n_online_cpus should use atomic_dec().
> > > > > >
> > > > > > Signed-off-by: Pingfan Liu <kernelfans@xxxxxxxxx>
> > > > >
> > > > > I have to ask...  What testing have you subjected this patch to?
> > > >
> > > > This patch is part of [1].  The series aims to enable kexec-reboot
> > > > in parallel on all CPUs.  As a result, the involved RCU code is
> > > > expected to support parallelism.
> > >
> > > I understand (and even sympathize with) the expectation.  But results
> > > sometimes diverge from expectations.  There have been implicit assumptions
> > > in RCU about only one CPU going offline at a time, and I am not sure
> > > that all of them have been addressed.  Concurrent CPU onlining has
> > > been looked at recently here:
> > >
> > > https://docs.google.com/document/d/1jymsaCPQ1PUDcfjIKm0UIbVdrJAaGX-6cXrmcfm0PRU/edit?usp=sharing
> > >
> > > You did use atomic_dec() to make the rcu_state.n_online_cpus decrement
> > > atomic, which is good.  Did you look through the rest of RCU's CPU-offline
> > > code paths and related code paths?
> >
> > I went through that code at a shallow level, especially each
> > cpuhp_step hook in the RCU system.
>
> And that is fine, at least as a first step.
>
> > But as you pointed out, there are implicit assumptions about only one
> > CPU going offline at a time.  I will chew on the Google doc which you
> > shared, and then I can come to a final result.
>
> Boqun Feng, Neeraj Upadhyay, Uladzislau Rezki, and I took a quick look,
> and rcu_boost_kthread_setaffinity() seems to need some help.  As it
> stands, it appears that concurrent invocations of this function from the
> CPU-offline path will cause all but the last outgoing CPU's bit to be
> (incorrectly) set in the cpumask_var_t passed to set_cpus_allowed_ptr().
>
> This should not be difficult to fix, for example, by maintaining a
> separate per-leaf-rcu_node-structure bitmask of the concurrently outgoing
> CPUs for that rcu_node structure.  (Similar in structure to the
> ->qsmask field.)
>
> There are probably more where that one came from.  ;-)

And here is one more from this week's session.  The calls to
tick_dep_set() and tick_dep_clear() use atomic operations, but they
operate on a global variable.  This means that the first call to
rcutree_offline_cpu() would enable the tick and the first call to
rcutree_dead_cpu() would disable the tick.  This might be OK, but it
is at the very least bad practice.  There needs to be a counter
mediating these calls.
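Something like the following might do the trick, though please treat
it as a sketch rather than actual kernel code (the
rcu_hotplug_tick_count name and both helpers are made up, and a lock
is used instead of bare atomics so that a late tick_dep_clear() cannot
race with a new tick_dep_set()):

	static int rcu_hotplug_tick_count;
	static DEFINE_RAW_SPINLOCK(rcu_hotplug_tick_lock);

	/* Invoked from rcutree_offline_cpu() for each outgoing CPU. */
	static void rcu_hotplug_tick_get(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&rcu_hotplug_tick_lock, flags);
		/* Only the first outgoing CPU enables the tick dependency. */
		if (rcu_hotplug_tick_count++ == 0)
			tick_dep_set(TICK_DEP_BIT_RCU);
		raw_spin_unlock_irqrestore(&rcu_hotplug_tick_lock, flags);
	}

	/* Invoked from rcutree_dead_cpu() once each outgoing CPU is dead. */
	static void rcu_hotplug_tick_put(void)
	{
		unsigned long flags;

		raw_spin_lock_irqsave(&rcu_hotplug_tick_lock, flags);
		/* Only the last of the outgoing CPUs disables it again. */
		if (--rcu_hotplug_tick_count == 0)
			tick_dep_clear(TICK_DEP_BIT_RCU);
		raw_spin_unlock_irqrestore(&rcu_hotplug_tick_lock, flags);
	}

That way the tick dependency stays in force until the last of the
concurrently outgoing CPUs has passed through rcutree_dead_cpu().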
For more detail, please see the Google document:

https://docs.google.com/document/d/1jymsaCPQ1PUDcfjIKm0UIbVdrJAaGX-6cXrmcfm0PRU/edit?usp=sharing

							Thanx, Paul

> > > > [1]: https://lore.kernel.org/linux-arm-kernel/20220822021520.6996-3-kernelfans@xxxxxxxxx/T/#mf62352138d7b040fdb583ba66f8cd0ed1e145feb
> > >
> > > Perhaps I am more blind than usual today, but I am not seeing anything
> > > in this patch describing the testing.  At this point, I am thinking in
> > > terms of making rcutorture test concurrent CPU offlining in parallel.
> >
> > Yes, testing results are more convincing in this area.
> >
> > After making clear the implicit assumptions, I will write some code to
> > bridge my code and the rcutorture test, since the series is a little
> > different from parallel CPU offlining: it happens after all devices
> > are torn down, and there is no way to roll back.
>
> Very good, looking forward to seeing what you come up with!
>
> > > Thoughts?
> >
> > Need a deeper dive into this field.  Hope to bring out something soon.
>
> Again, looking forward to seeing what you find!
>
>							Thanx, Paul
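P.S.  For concreteness, here is one possible shape of the
per-leaf-rcu_node bitmask mentioned above.  Please treat it as a
sketch only: the ->offlmask field and the helper name are made up,
not actual kernel code.

	/*
	 * Hypothetical ->offlmask field in struct rcu_node, protected by
	 * that structure's ->lock and laid out like ->qsmask: one bit per
	 * CPU covered by this leaf, set while that CPU is going offline.
	 */
	static void rcu_note_cpu_outgoing(struct rcu_node *rnp, int cpu, bool outgoing)
	{
		unsigned long flags;
		unsigned long bit = leaf_node_cpu_bit(rnp, cpu);

		raw_spin_lock_irqsave_rcu_node(rnp, flags);
		if (outgoing)
			rnp->offlmask |= bit;	/* CPU has started going offline. */
		else
			rnp->offlmask &= ~bit;	/* CPU is dead or back online. */
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	}

rcu_boost_kthread_setaffinity() would then build its cpumask from the
leaf's online CPUs with everything in ->offlmask excluded, so that
each concurrent caller masks out every outgoing CPU rather than only
its own.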