Sorry guys, I seem to have messed this up :/

Valentin pointed out to me that I missed v3 and v4 of these patches; v3
got lost in the x-mas pile and v4 was actually on my todo list for this
week, but I'd forgotten I'd already queued v2.

I'll go queue delta patches.

On Mon, Jan 21, 2019 at 03:33:53AM -0800, tip-bot for Vincent Guittot wrote:

> index 50aa2aba69bd..2ccd6e093326 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8190,6 +8190,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  	/* Adjust by relative CPU capacity of the group */
>  	sgs->group_capacity = group->sgc->capacity;
>  	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
> +	/*
> +	 * Prevent division rounding to make the computation of imbalance
> +	 * slightly less than original value and to prevent the rq to be then
> +	 * selected as busiest queue:
> +	 */
> +	sgs->avg_load += 1;
>
>  	if (sgs->sum_nr_running)
>  		sgs->load_per_task = sgs->sum_weighted_load / sgs->sum_nr_running;
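
For anyone skimming the thread, a quick standalone sketch of the rounding
the quoted comment is talking about (userspace C, made-up numbers, not
kernel code): the integer division in update_sg_lb_stats() truncates, so
avg_load can come out slightly below its exact value, and the += 1 bumps
it back up so the rq isn't dropped as busiest just because of truncation.

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

int main(void)
{
	/* Hypothetical values chosen so the division does not come out even. */
	unsigned long group_load     = 1023;
	unsigned long group_capacity = 1000;

	/*
	 * Same expression as in update_sg_lb_stats(): the exact result is
	 * 1047.552, but integer division truncates it to 1047, i.e. slightly
	 * below the real value.
	 */
	unsigned long avg_load = (group_load * SCHED_CAPACITY_SCALE) / group_capacity;

	printf("truncated avg_load = %lu\n", avg_load);     /* 1047 */
	printf("with += 1          = %lu\n", avg_load + 1); /* 1048, no longer under-estimated */

	return 0;
}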