This fixes an issue where fairness is decreased because cfs_rq's can
end up not being decayed properly. For two sibling control groups with
the same priority, this can often lead to a load ratio of 99/1 (!!).

This happens because when a cfs_rq is throttled, all the descendant
cfs_rq's will be removed from the leaf list. When the initial cfs_rq
is unthrottled, it will currently only re-add descendant cfs_rq's if
they have one or more entities enqueued. This is not a perfect
heuristic. Instead, we insert all cfs_rq's that contain one or more
enqueued entities, or contribute to the load of the task group.

For equally weighted control groups, this can often lead to situations
like the following:

$ ps u -C stress
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       10009 88.8  0.0   3676   100 pts/1    R+   11:04   0:13 stress --cpu 1
root       10023  3.0  0.0   3676   104 pts/1    R+   11:04   0:00 stress --cpu 1

Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
Signed-off-by: Odin Ugedal <odin@xxxxxxx>
---
Original thread: https://lore.kernel.org/lkml/20210518125202.78658-3-odin@xxxxxxx/

Changes since v1:
- Replaced the new cfs_rq field with the existing tg_load_avg_contrib
- Went from 3 patches to 1; one is merged and one is replaced by a new
  patchset.

 kernel/sched/fair.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 794c2cb945f8..0f1b39ca5ca8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4719,8 +4719,11 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 		cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
 					     cfs_rq->throttled_clock_task;
 
-		/* Add cfs_rq with already running entity in the list */
-		if (cfs_rq->nr_running >= 1)
+		/*
+		 * Add cfs_rq with tg load avg contribution or one or more
+		 * already running entities to the list
+		 */
+		if (cfs_rq->tg_load_avg_contrib || cfs_rq->nr_running)
 			list_add_leaf_cfs_rq(cfs_rq);
 	}
 
--
2.31.1
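
For anyone who wants to see the imbalance locally, here is a rough,
untested C sketch of the kind of setup that can trigger it. This is
not the script from the original thread; the cgroup paths, group names
and quota values are made up for illustration, and it assumes a cgroup
v1 "cpu" controller mounted at /sys/fs/cgroup/cpu plus root
privileges. The idea is that each CPU hog first runs briefly in a
nested "sub" group, so "sub" keeps a tg_load_avg_contrib but no
enqueued entities once the hog is moved up, and the quota on the
shared parent provides the throttle/unthrottle cycles that drop those
stale cfs_rq's off the leaf list:

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
		perror(path);
		exit(1);
	}
}

/* Fork a CPU hog, park it briefly in grp/sub, then move it into grp. */
static void start_hog(const char *grp)
{
	char path[128], pid[32];
	pid_t child = fork();

	if (child == 0)
		for (;;)
			;	/* burn CPU, like stress --cpu 1 */

	snprintf(pid, sizeof(pid), "%d\n", (int)child);

	snprintf(path, sizeof(path), "%s/sub/cgroup.procs", grp);
	write_str(path, pid);		/* leaves blocked load in "sub" */
	usleep(100 * 1000);

	snprintf(path, sizeof(path), "%s/cgroup.procs", grp);
	write_str(path, pid);		/* "sub" now has load, no tasks */
}

int main(void)
{
	const char *groups[] = { "/sys/fs/cgroup/cpu/slice/cg-1",
				 "/sys/fs/cgroup/cpu/slice/cg-2" };
	char path[128];
	int i;

	mkdir("/sys/fs/cgroup/cpu/slice", 0755);

	/* quota < period, so the shared parent gets throttled */
	write_str("/sys/fs/cgroup/cpu/slice/cpu.cfs_period_us", "100000");
	write_str("/sys/fs/cgroup/cpu/slice/cpu.cfs_quota_us", "50000");

	for (i = 0; i < 2; i++) {
		mkdir(groups[i], 0755);
		snprintf(path, sizeof(path), "%s/sub", groups[i]);
		mkdir(path, 0755);
		start_hog(groups[i]);
	}

	pause();	/* compare the hogs' %CPU with ps(1) or top(1);
			 * kill the hogs manually afterwards */
	return 0;
}

Whether the split actually degenerates to something like 99/1 depends
on timing, but with the old nr_running check the empty "sub" cfs_rq's
are never put back on the leaf list, so their contribution never
decays; with this patch they are re-added, and the two hogs should
stay close to 50/50.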