Commit-ID:  8d2ef665a9bbee36a2a9d855eb4e97d87953fee5
Gitweb:     https://git.kernel.org/tip/8d2ef665a9bbee36a2a9d855eb4e97d87953fee5
Author:     Vincent Guittot <vincent.guittot@xxxxxxxxxx>
AuthorDate: Fri, 14 Dec 2018 17:01:55 +0100
Committer:  Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Mon, 21 Jan 2019 11:27:50 +0100

sched/fair: Fix rounding bug for asym packing

When check_asym_packing() is triggered, the imbalance is set to:

    busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE

busiest_stat.avg_load also comes from a division, and the final rounding
can make the imbalance slightly lower than the weighted load of the
cfs_rq. That is enough for find_busiest_queue() to skip the rq and
prevents the asym migration from happening.

Add 1 to avg_load to make sure that the targeted CPU will not be
skipped unexpectedly.

Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: valentin.schneider@xxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 50aa2aba69bd..2ccd6e093326 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8190,6 +8190,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	/* Adjust by relative CPU capacity of the group */
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
+	/*
+	 * Prevent division rounding to make the computation of imbalance
+	 * slightly less than original value and to prevent the rq to be then
+	 * selected as busiest queue:
+	 */
+	sgs->avg_load += 1;
 
 	if (sgs->sum_nr_running)
 		sgs->load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
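
To make the rounding loss concrete, here is a minimal userspace sketch
(not kernel code) that replays the two truncating divisions. The values
group_load = 1023 and group_capacity = 1000 are illustrative assumptions
chosen so the double truncation is visible, and the "skipped" test
mirrors the wl > env->imbalance check in find_busiest_queue():

#include <stdio.h>

/* Same scale the scheduler uses for capacity-invariant load. */
#define SCHED_CAPACITY_SCALE 1024UL

int main(void)
{
	/* Illustrative values, assumed for this demonstration only. */
	unsigned long group_load = 1023, group_capacity = 1000;

	/* First truncating division, as in update_sg_lb_stats(): */
	unsigned long avg_load =
		(group_load * SCHED_CAPACITY_SCALE) / group_capacity;

	/* Second truncating division, as in check_asym_packing(): */
	unsigned long imbalance =
		avg_load * group_capacity / SCHED_CAPACITY_SCALE;

	/* imbalance == 1022 < group_load == 1023: the rq would be
	 * skipped, since find_busiest_queue() skips when wl > imbalance. */
	printf("without fix: load=%lu imbalance=%lu skipped=%d\n",
	       group_load, imbalance, group_load > imbalance);

	/* With the +1 from this patch the round trip no longer undershoots: */
	imbalance = (avg_load + 1) * group_capacity / SCHED_CAPACITY_SCALE;
	printf("with fix:    load=%lu imbalance=%lu skipped=%d\n",
	       group_load, imbalance, group_load > imbalance);

	return 0;
}

Built with any C compiler, this prints imbalance=1022 against a load of
1023 before the fix and imbalance=1023 after it: exactly the off-by-one
that made find_busiest_queue() ignore the queue that asym packing wanted
to drain.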