Patch "sched/fair: Move calculate of avg_load to a better location" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    sched/fair: Move calculate of avg_load to a better location

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sched-fair-move-calculate-of-avg_load-to-a-better-lo.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 59704a62f12269ef7882e6c95a2e60b39f2adc84
Author: zgpeng <zgpeng.linux@xxxxxxxxx>
Date:   Wed Apr 6 17:57:05 2022 +0800

    sched/fair: Move calculate of avg_load to a better location
    
    [ Upstream commit 06354900787f25bf5be3c07a68e3cdbc5bf0fa69 ]
    
    In the calculate_imbalance() function, when local->avg_load is greater
    than or equal to busiest->avg_load, the computed sds->avg_load is never
    used, so the calculation can be moved to a more appropriate location.
    
    Signed-off-by: zgpeng <zgpeng@xxxxxxxxxxx>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
    Reviewed-by: Samuel Liao <samuelliao@xxxxxxxxxxx>
    Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/1649239025-10010-1-git-send-email-zgpeng@xxxxxxxxxxx
    Stable-dep-of: 91dcf1e8068e ("sched/fair: Fix imbalance overflow")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bb70a7856277f..22139e97b2a8e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9342,8 +9342,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		local->avg_load = (local->group_load * SCHED_CAPACITY_SCALE) /
 				  local->group_capacity;
 
-		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
-				sds->total_capacity;
 		/*
 		 * If the local group is more loaded than the selected
 		 * busiest group don't try to pull any tasks.
@@ -9352,6 +9350,9 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 			env->imbalance = 0;
 			return;
 		}
+
+		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
+				sds->total_capacity;
 	}
 
 	/*


