On Fri, Jul 05, 2024 at 12:18:22AM +0200, Frederic Weisbecker wrote:
> Le Wed, Jul 03, 2024 at 10:25:57AM -0700, Paul E. McKenney a écrit :
> > On Wed, Jul 03, 2024 at 06:21:01PM +0200, Frederic Weisbecker wrote:
> > > Le Tue, Jun 04, 2024 at 03:23:52PM -0700, Paul E. McKenney a écrit :
> > > > If a CPU is running either a userspace application or a guest OS in
> > > > nohz_full mode, it is possible for a system call to occur just as an
> > > > RCU grace period is starting.  If that CPU also has the scheduling-clock
> > > > tick enabled for any reason (such as a second runnable task), and if the
> > > > system was booted with rcutree.use_softirq=0, then RCU can add insult to
> > > > injury by awakening that CPU's rcuc kthread, resulting in yet another
> > > > task and yet more OS jitter due to switching to that task, running it,
> > > > and switching back.
> > > >
> > > > In addition, in the common case where that system call is not of
> > > > excessively long duration, awakening the rcuc task is pointless.
> > > > This pointlessness is due to the fact that the CPU will enter an extended
> > > > quiescent state upon returning to the userspace application or guest OS.
> > > > In this case, the rcuc kthread cannot do anything that the main RCU
> > > > grace-period kthread cannot do on its behalf, at least if it is given
> > > > a few additional milliseconds (for example, given the time duration
> > > > specified by rcutree.jiffies_till_first_fqs, give or take scheduling
> > > > delays).
> > > >
> > > > This commit therefore adds a rcutree.nocb_patience_delay kernel boot
> > > > parameter that specifies the grace period age (in milliseconds)
> > > > before which RCU will refrain from awakening the rcuc kthread.
> > > > Preliminary experimentation suggests a value of 1000, that is,
> > > > one second.  Increasing rcutree.nocb_patience_delay will increase
> > > > grace-period latency and in turn increase memory footprint, so systems
> > > > with constrained memory might choose a smaller value.  Systems with
> > > > less-aggressive OS-jitter requirements might choose the default value
> > > > of zero, which keeps the traditional immediate-wakeup behavior, thus
> > > > avoiding increases in grace-period latency.
> > > >
> > > > [ paulmck: Apply Leonardo Bras feedback. ]
> > > >
> > > > Link: https://lore.kernel.org/all/20240328171949.743211-1-leobras@xxxxxxxxxx/
> > > >
> > > > Reported-by: Leonardo Bras <leobras@xxxxxxxxxx>
> > > > Suggested-by: Leonardo Bras <leobras@xxxxxxxxxx>
> > > > Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > > > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
> > > > Reviewed-by: Leonardo Bras <leobras@xxxxxxxxxx>
> > > > ---
> > > >  Documentation/admin-guide/kernel-parameters.txt |  8 ++++++++
> > > >  kernel/rcu/tree.c                               | 10 ++++++++--
> > > >  kernel/rcu/tree_plugin.h                        | 10 ++++++++++
> > > >  3 files changed, 26 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > > > index 500cfa7762257..2d4a512cf1fc6 100644
> > > > --- a/Documentation/admin-guide/kernel-parameters.txt
> > > > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > > > @@ -5018,6 +5018,14 @@
> > > >  			the ->nocb_bypass queue.  The definition of "too
> > > >  			many" is supplied by this kernel boot parameter.
> > > >
> > > > +	rcutree.nocb_patience_delay= [KNL]
> > > > +			On callback-offloaded (rcu_nocbs) CPUs, avoid
> > > > +			disturbing RCU unless the grace period has
> > > > +			reached the specified age in milliseconds.
> > > > +			Defaults to zero.  Large values will be capped
> > > > +			at five seconds.  All values will be rounded down
> > > > +			to the nearest value representable by jiffies.
> > > > +
> > > >  	rcutree.qhimark= [KNL]
> > > >  			Set threshold of queued RCU callbacks beyond which
> > > >  			batch limiting is disabled.
> > > >
> > > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > > index 35bf4a3736765..408b020c9501f 100644
> > > > --- a/kernel/rcu/tree.c
> > > > +++ b/kernel/rcu/tree.c
> > > > @@ -176,6 +176,9 @@ static int gp_init_delay;
> > > >  module_param(gp_init_delay, int, 0444);
> > > >  static int gp_cleanup_delay;
> > > >  module_param(gp_cleanup_delay, int, 0444);
> > > > +static int nocb_patience_delay;
> > > > +module_param(nocb_patience_delay, int, 0444);
> > > > +static int nocb_patience_delay_jiffies;
> > > >
> > > >  // Add delay to rcu_read_unlock() for strict grace periods.
> > > >  static int rcu_unlock_delay;
> > > > @@ -4344,11 +4347,14 @@ static int rcu_pending(int user)
> > > >  		return 1;
> > > >
> > > >  	/* Is this a nohz_full CPU in userspace or idle? (Ignore RCU if so.) */
> > > > -	if ((user || rcu_is_cpu_rrupt_from_idle()) && rcu_nohz_full_cpu())
> > > > +	gp_in_progress = rcu_gp_in_progress();
> > > > +	if ((user || rcu_is_cpu_rrupt_from_idle() ||
> > > > +	     (gp_in_progress &&
> > > > +	      time_before(jiffies, READ_ONCE(rcu_state.gp_start) + nocb_patience_delay_jiffies))) &&
> > > > +	    rcu_nohz_full_cpu())
> > >
> > > The rcu_nohz_full_cpu() test should go before anything in order to benefit from
> > > the static key in tick_nohz_full_cpu().
> >
> > That has had the wrong order since forever.  ;-)
> >
> > But good to fix.  I will queue a separate patch for Neeraj to consider
> > for the v6.12 merge window.
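For illustration, that reordering might look roughly like the sketch
below.  This is just the idea, not the actual patch that will be queued:

	/*
	 * Sketch: test rcu_nohz_full_cpu() first so that the static key
	 * in tick_nohz_full_cpu() can short-circuit the entire check on
	 * kernels not running any nohz_full CPUs.
	 */
	if (rcu_nohz_full_cpu() &&
	    (user || rcu_is_cpu_rrupt_from_idle() ||
	     (gp_in_progress &&
	      time_before(jiffies, READ_ONCE(rcu_state.gp_start) +
				   nocb_patience_delay_jiffies))))
		return 0;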
> > > And since it only applies to nohz_full, should it be called
> > > nohz_full_patience_delay ?
> > >
> > > Or do we want to generalize it to all nocb uses
> > > (which means only rely on rcu_is_cpu_rrupt_from_idle() if not nohz_full).  Not
> > > sure if that would make sense...
> >
> > I don't believe that this makes sense except for nohz_full guest OSes.
> >
> > I am good with nohz_full_patience_delay_jiffies.  (Or did you really
> > want to drop "_jiffies", and if so, did you also want some other units?)

And this was me being confused.  The internal variable ends in _jiffies,
but the kernel boot parameter does not, just as before.

> > Last chance to object to the name.  ;-)
>
> A bit long but I don't have a better proposal :-)

We could make a longer one so that this one would look good by comparison?

> > And next time we go through the patches a bit longer before the merge
> > window!
>
> My bad, I overlooked that one when it was posted.

Only fair, I should have gotten to your second series sooner as well.

							Thanx, Paul
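PS:  The tree_plugin.h hunk got trimmed in the quoting above.  For anyone
reading along, the five-second cap and the millisecond-to-jiffies
conversion that the documentation describes might be implemented along
these lines.  This is only an illustrative sketch, not necessarily the
hunk as committed:

	/*
	 * Sketch: clamp the millisecond boot parameter and convert it
	 * to jiffies once at boot so that rcu_pending() can compare
	 * directly against jiffies on its fast path.
	 */
	if (nocb_patience_delay < 0) {
		pr_info("Invalid nocb_patience_delay %d, resetting to zero.\n",
			nocb_patience_delay);
		nocb_patience_delay = 0;
	} else if (nocb_patience_delay > 5 * MSEC_PER_SEC) {
		pr_info("Capping nocb_patience_delay at %ld milliseconds.\n",
			5 * MSEC_PER_SEC);
		nocb_patience_delay = 5 * MSEC_PER_SEC;
	}
	nocb_patience_delay_jiffies = msecs_to_jiffies(nocb_patience_delay);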