From: Tao Zhou <ouwen210@xxxxxxxxxxx>

[ Upstream commit 6c8116c914b65be5e4d6f66d69c8142eb0648c22 ]

In update_sg_wakeup_stats(), the comment says:

    Computing avg_load makes sense only when group is fully busy or
    overloaded.

But the code below this comment does not check for that. From reading
the code that uses avg_load in other functions, I confirm that avg_load
should only be calculated in the fully busy or overloaded case. The
comment is correct and the checking condition is wrong, so change that
condition.

Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Signed-off-by: Tao Zhou <ouwen210@xxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Acked-by: Mel Gorman <mgorman@xxxxxxx>
Link: https://lkml.kernel.org/r/Message-ID:
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c1217bfe5e819..7f895d5139948 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8345,7 +8345,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 	 * Computing avg_load makes sense only when group is fully busy or
 	 * overloaded
 	 */
-	if (sgs->group_type < group_fully_busy)
+	if (sgs->group_type == group_fully_busy ||
+	    sgs->group_type == group_overloaded)
 		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
 				sgs->group_capacity;
 }
-- 
2.20.1
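
[Editor's note, not part of the patch] Below is a minimal, self-contained
C sketch of the fixed check, for illustration only. It assumes the
group_type ordering introduced by the find_idlest_group() rework
(group_has_spare lowest, group_overloaded highest); the struct and helper
names are simplified stand-ins for the kernel's sg_lb_stats and
update_sg_wakeup_stats(). It shows why the old "< group_fully_busy" test
selected exactly the groups the comment excludes, while the two equality
checks match the comment.

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL	/* 1 << SCHED_CAPACITY_SHIFT */

/* Simplified copy of the post-rework group_type ordering. */
enum group_type {
	group_has_spare = 0,
	group_fully_busy,
	group_misfit_task,
	group_asym_packing,
	group_imbalanced,
	group_overloaded,
};

/* Simplified stand-in for the kernel's sg_lb_stats. */
struct sg_stats {
	enum group_type group_type;
	unsigned long group_load;
	unsigned long group_capacity;
	unsigned long avg_load;
};

static void compute_avg_load(struct sg_stats *sgs)
{
	/*
	 * The old check, sgs->group_type < group_fully_busy, is true only
	 * for group_has_spare, which is the opposite of what the comment
	 * in update_sg_wakeup_stats() describes.
	 *
	 * The fixed check computes avg_load only when the group is fully
	 * busy or overloaded, matching the comment.
	 */
	if (sgs->group_type == group_fully_busy ||
	    sgs->group_type == group_overloaded)
		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
				sgs->group_capacity;
}

int main(void)
{
	struct sg_stats busy = {
		.group_type	= group_fully_busy,
		.group_load	= 2048,
		.group_capacity	= 1024,
	};
	struct sg_stats spare = {
		.group_type	= group_has_spare,
		.group_load	= 512,
		.group_capacity	= 1024,
	};

	compute_avg_load(&busy);
	compute_avg_load(&spare);

	/* The busy group gets an avg_load; the group with spare capacity does not. */
	printf("fully busy: avg_load=%lu\n", busy.avg_load);
	printf("has spare : avg_load=%lu\n", spare.avg_load);
	return 0;
}

Under these assumptions, the fully busy group ends up with
avg_load = 2048 * 1024 / 1024 = 2048, while the group with spare
capacity keeps avg_load at 0, which is what the comment in
update_sg_wakeup_stats() asks for.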