On Wed, May 01, 2024 at 05:09:45AM -1000, Tejun Heo wrote:
> RT, DL, thermal and irq load and utilization metrics need to be decayed and
> updated periodically and before consumption to keep the numbers reasonable.
> This is currently done from __update_blocked_others() as a part of the fair
> class load balance path. Let's factor it out to update_other_load_avgs().
> Pure refactor. No functional changes.
>
> This will be used by the new BPF extensible scheduling class to ensure that
> the above metrics are properly maintained.
>
> Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
> Reviewed-by: David Vernet <dvernet@xxxxxxxx>
> ---
>  kernel/sched/core.c  | 19 +++++++++++++++++++
>  kernel/sched/fair.c  | 16 +++-------------
>  kernel/sched/sched.h |  3 +++
>  3 files changed, 25 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 90b505fbb488..7542a39f1fde 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7486,6 +7486,25 @@ int sched_core_idle_cpu(int cpu)
>  #endif
>
>  #ifdef CONFIG_SMP
> +/*
> + * Load avg and utilization metrics need to be updated periodically and before
> + * consumption. This function updates the metrics for all subsystems except for
> + * the fair class. @rq must be locked and have its clock updated.
> + */
> +bool update_other_load_avgs(struct rq *rq)
> +{
> +	u64 now = rq_clock_pelt(rq);
> +	const struct sched_class *curr_class = rq->curr->sched_class;
> +	unsigned long thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
> +
> +	lockdep_assert_rq_held(rq);
> +
> +	return update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
> +		update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
> +		update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure) |
> +		update_irq_load_avg(rq, 0);
> +}

Yeah, but you then ignore the return value and don't call into cpufreq.

Vincent, what would be the right thing to do here?