On Wed, Nov 17, 2021 at 11:27:10AM -0800, Paul E. McKenney wrote:
> On Wed, Nov 17, 2021 at 04:56:35PM +0100, Frederic Weisbecker wrote:
> > In order to be able to (de-)offload any CPU using cpuset in the future,
> > create a NOCB kthread for all possible CPUs. For now this is done only
> > as long as the "rcu_nocb=" kernel parameter is passed to avoid
> > the unnecessary overhead for most users.
>
> The "nohz_full=" kernel parameter would also cause these kthreads to
> be created, correct? (Yeah, a nit, but...)
>
> And some fallout of my forgetfulness below. :-/

Ah right, that too!

> > @@ -1268,7 +1265,7 @@ static void __init rcu_spawn_nocb_kthreads(void)
> >  	int cpu;
> >
> >  	if (rcu_nocb_is_setup) {
> > -		for_each_online_cpu(cpu)
> > +		for_each_possible_cpu(cpu)
>
> Gah... I had forgotten. :-/
>
> Some firmware lies about the OS instance's age. Other firmware lies
> about the number of CPUs, sometimes claiming large multiples of the
> actual number of CPUs.
>
> So this needs to stay "for_each_online_cpu". Otherwise, Paul Gortmaker
> will once again be afflicted with hundreds of unnecessary rcuo kthreads.
>
> The later calls to rcu_spawn_cpu_nocb_kthread() from rcutree_prepare_cpu()
> will take care of any CPUs that first come online later.

Ok that makes sense.

> >  			rcu_spawn_cpu_nocb_kthread(cpu);
> >  	}
> >  }
> > @@ -1303,7 +1300,7 @@ static void __init rcu_organize_nocb_kthreads(void)
> >  	 * Should the corresponding CPU come online in the future, then
> >  	 * we will spawn the needed set of rcu_nocb_kthread() kthreads.
> >  	 */
> > -	for_each_cpu(cpu, rcu_nocb_mask) {
> > +	for_each_possible_cpu(cpu) {
>
> This needs to change, but to for_each_online_cpu() instead of
> for_each_possible_cpu(). That handles the case where the boot CPU is
> not initially offloaded, but where the sysadm later needs to offload it.

I'm less sure about that one. This is called early from rcu_init_nohz().
Only the boot CPU should be online at that point. We really need to
integrate all possible CPUs within the nocb groups.

Thanks.

> >  		rdp = per_cpu_ptr(&rcu_data, cpu);
> >  		if (rdp->cpu >= nl) {
> >  			/* New GP kthread, set up for CBs & next GP. */
> > @@ -1327,7 +1324,8 @@ static void __init rcu_organize_nocb_kthreads(void)
> >  			pr_cont(" %d", cpu);
> >  		}
> >  		rdp->nocb_gp_rdp = rdp_gp;
> > -		list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> > +		if (cpumask_test_cpu(cpu, rcu_nocb_mask))
> > +			list_add_tail(&rdp->nocb_entry_rdp, &rdp_gp->nocb_head_rdp);
> >  	}
> >  	if (gotnocbs && dump_tree)
> >  		pr_cont("%s\n", gotnocbscbs ? "" : " (self only)");
> > --
> > 2.25.1
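
For readers following along, here is a minimal userspace sketch (not the
kernel's actual code) of the spawn policy Paul describes: create an rcuo
helper only for CPUs online at boot, and let a hotplug "prepare" hook
(standing in for rcutree_prepare_cpu() calling
rcu_spawn_cpu_nocb_kthread()) cover CPUs that come online later. The mask
sizes, arrays, and function names below are invented stand-ins; the point
is that firmware overstating the possible-CPU count only bloats the
for_each_possible_cpu() variant.

#include <stdbool.h>
#include <stdio.h>

#define NR_POSSIBLE_CPUS 128	/* firmware claim: large multiple of reality */
#define NR_BOOT_ONLINE	   8	/* CPUs actually online at boot */

static bool cpu_online_mask[NR_POSSIBLE_CPUS];
static bool kthread_spawned[NR_POSSIBLE_CPUS];

/* Stand-in for rcu_spawn_cpu_nocb_kthread(): idempotent per-CPU spawn. */
static void spawn_cpu_nocb_kthread(int cpu)
{
	if (!kthread_spawned[cpu]) {
		kthread_spawned[cpu] = true;
		printf("spawned rcuo kthread for CPU %d\n", cpu);
	}
}

/* Boot-time pass: online CPUs only, as argued above. */
static void spawn_nocb_kthreads(void)
{
	for (int cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		if (cpu_online_mask[cpu])
			spawn_cpu_nocb_kthread(cpu);
}

/* Stand-in for the rcutree_prepare_cpu() hotplug path. */
static void prepare_cpu(int cpu)
{
	cpu_online_mask[cpu] = true;
	spawn_cpu_nocb_kthread(cpu);	/* covers CPUs onlined after boot */
}

int main(void)
{
	for (int cpu = 0; cpu < NR_BOOT_ONLINE; cpu++)
		cpu_online_mask[cpu] = true;

	spawn_nocb_kthreads();	/* 8 kthreads, not 128 */
	prepare_cpu(9);		/* a late-onlined CPU still gets its kthread */
	return 0;
}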
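
Similarly, a sketch of the grouping split in the last hunk: every possible
CPU is assigned a nocb GP group leader up front (so any CPU can be
offloaded later via cpuset), but only CPUs currently in rcu_nocb_mask are
linked onto the leader's head list. The struct, list linkage, and
group-size constant are simplified stand-ins for the kernel's rcu_data
machinery, not its real layout.

#include <stdbool.h>
#include <stdio.h>

#define NR_POSSIBLE_CPUS 8
#define GROUP_SIZE	 4	/* CPUs per nocb GP kthread group */

struct rdp {
	int cpu;
	struct rdp *gp_rdp;	/* group leader (nocb_gp_rdp stand-in) */
	struct rdp *next;	/* linkage on the leader's head list */
};

static struct rdp rdp_array[NR_POSSIBLE_CPUS];
static bool nocb_mask[NR_POSSIBLE_CPUS];	/* rcu_nocb_mask stand-in */

static void organize_nocb_kthreads(void)
{
	struct rdp *rdp_gp = NULL;

	/* Every possible CPU gets a group, so it can be offloaded later... */
	for (int cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++) {
		struct rdp *rdp = &rdp_array[cpu];

		rdp->cpu = cpu;
		if (cpu % GROUP_SIZE == 0)
			rdp_gp = rdp;	/* new group leader */
		rdp->gp_rdp = rdp_gp;

		/* ...but only currently offloaded CPUs join the head list. */
		if (nocb_mask[cpu]) {
			rdp->next = rdp_gp->next;
			rdp_gp->next = rdp;
		}
	}
}

int main(void)
{
	nocb_mask[2] = nocb_mask[3] = true;	/* e.g. boot with rcu_nocbs=2-3 */
	organize_nocb_kthreads();

	for (int cpu = 0; cpu < NR_POSSIBLE_CPUS; cpu++)
		printf("CPU %d -> group leader CPU %d%s\n", cpu,
		       rdp_array[cpu].gp_rdp->cpu,
		       nocb_mask[cpu] ? " (offloaded)" : "");
	return 0;
}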