This is a note to let you know that I've just added the patch titled

    sched/fair: Keep load_avg and load_sum synced

to the 5.10-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sched-fair-keep-load_avg-and-load_sum-synced.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


From 7c7ad626d9a0ff0a36c1e2a3cfbbc6a13828d5eb Mon Sep 17 00:00:00 2001
From: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Date: Thu, 27 May 2021 14:29:15 +0200
Subject: sched/fair: Keep load_avg and load_sum synced

From: Vincent Guittot <vincent.guittot@xxxxxxxxxx>

commit 7c7ad626d9a0ff0a36c1e2a3cfbbc6a13828d5eb upstream.

When removing a cfs_rq from the list we only check the _sum value, so
we must ensure that _avg and _sum stay synced, so that load_sum can't
be null while load_avg is not, after propagating load in the cgroup
hierarchy.

Use load_avg to compute load_sum, similarly to what is done for
util_sum and runnable_sum.

Fixes: 0e2d2aaaae52 ("sched/fair: Rewrite PELT migration propagation")
Reported-by: Odin Ugedal <odin@xxxxxxx>
Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: Odin Ugedal <odin@xxxxxxx>
Link: https://lkml.kernel.org/r/20210527122916.27683-2-vincent.guittot@xxxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 kernel/sched/fair.c |   11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3501,10 +3501,9 @@ update_tg_cfs_runnable(struct cfs_rq *cf
 static inline void
 update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq)
 {
-	long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
+	long delta, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum;
 	unsigned long load_avg;
 	u64 load_sum = 0;
-	s64 delta_sum;
 	u32 divider;
 
 	if (!runnable_sum)
@@ -3551,13 +3550,13 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq
 	load_sum = (s64)se_weight(se) * runnable_sum;
 	load_avg = div_s64(load_sum, divider);
 
-	delta_sum = load_sum - (s64)se_weight(se) * se->avg.load_sum;
-	delta_avg = load_avg - se->avg.load_avg;
+	delta = load_avg - se->avg.load_avg;
 
 	se->avg.load_sum = runnable_sum;
 	se->avg.load_avg = load_avg;
-	add_positive(&cfs_rq->avg.load_avg, delta_avg);
-	add_positive(&cfs_rq->avg.load_sum, delta_sum);
+
+	add_positive(&cfs_rq->avg.load_avg, delta);
+	cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider;
 }
 
 static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum)

Patches currently in stable-queue which might be from vincent.guittot@xxxxxxxxxx are

queue-5.10/sched-fair-fix-util_est-util_avg_unchanged-handling.patch
queue-5.10/sched-fair-keep-load_avg-and-load_sum-synced.patch
queue-5.10/sched-fair-make-sure-to-update-tg-contrib-for-blocked-load.patch
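
As a side note on why the fix works: a _sum derived from its _avg
(load_sum = load_avg * divider) can only be zero when the _avg is zero,
whereas two independently propagated deltas can drift apart through
rounding, leaving load_sum null while load_avg is not. The following is
a minimal userspace sketch of that effect, not kernel code: the DIVIDER
constant and the simplified add_positive() helper are assumptions
standing in for the kernel's PELT internals, and the rounding drift is
exaggerated into a single step.

#include <stdio.h>
#include <stdint.h>

#define DIVIDER 47742	/* assumed stand-in for the PELT divider */

/* simplified take on the kernel's add_positive(): apply a signed
 * delta and clamp the result at zero */
static void add_positive(uint64_t *val, int64_t delta)
{
	int64_t res = (int64_t)*val + delta;

	*val = res > 0 ? (uint64_t)res : 0;
}

int main(void)
{
	/* a cfs_rq-like pair that should satisfy sum == avg * divider */
	uint64_t load_avg = 5, load_sum = 5 * (uint64_t)DIVIDER;

	/* old scheme: _avg and _sum get independent deltas, so rounding
	 * in the delta computation can drive the sum to 0 while the avg
	 * stays nonzero */
	add_positive(&load_avg, -4);			/* avg: 5 -> 1 */
	add_positive(&load_sum, -5 * (int64_t)DIVIDER);	/* sum: -> 0 */
	printf("independent deltas: avg=%llu sum=%llu (out of sync)\n",
	       (unsigned long long)load_avg,
	       (unsigned long long)load_sum);

	/* patched scheme for load: recompute the sum from the avg, so
	 * load_sum can only be null when load_avg is null too */
	load_sum = load_avg * DIVIDER;
	printf("derived sum:        avg=%llu sum=%llu (synced)\n",
	       (unsigned long long)load_avg,
	       (unsigned long long)load_sum);
	return 0;
}

Built with a plain cc invocation, the first printf shows avg=1 sum=0,
i.e. the state that fools a check that only looks at the _sum value;
the second shows the two back in sync.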