This is a note to let you know that I've just added the patch titled

    sched/uclamp: Make asym_fits_capacity() use util_fits_cpu()

to the 5.10-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    sched-uclamp-make-asym_fits_capacity-use-util_fits_cpu.patch

and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From stable-owner@xxxxxxxxxxxxxxx Wed Mar 8 17:17:10 2023
From: Qais Yousef <qyousef@xxxxxxxxxxx>
Date: Wed, 8 Mar 2023 16:15:52 +0000
Subject: sched/uclamp: Make asym_fits_capacity() use util_fits_cpu()
To: stable@xxxxxxxxxxxxxxx
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Vincent Guittot <vincent.guittot@xxxxxxxxxx>, Dietmar Eggemann <dietmar.eggemann@xxxxxxx>, Qais Yousef <qais.yousef@xxxxxxx>
Message-ID: <20230308161558.2882972-5-qyousef@xxxxxxxxxxx>

From: Qais Yousef <qais.yousef@xxxxxxx>

commit a2e7f03ed28fce26c78b985f87913b6ce3accf9d upstream.

Use the new util_fits_cpu() to ensure migration margin and capacity
pressure are taken into account correctly when uclamp is being used;
otherwise we will fail to consider CPUs as fitting in scenarios where
they should.

s/asym_fits_capacity/asym_fits_cpu/ to better reflect what it does now.

Fixes: b4c9c9f15649 ("sched/fair: Prefer prev cpu in asymmetric wakeup path")
Signed-off-by: Qais Yousef <qais.yousef@xxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20220804143609.515789-6-qais.yousef@xxxxxxx
(cherry picked from commit a2e7f03ed28fce26c78b985f87913b6ce3accf9d)
[Conflict in kernel/sched/fair.c due to a different name of the static key
 wrapper function and a slightly different if condition block in one of the
 asym_fits_cpu() call sites]
Signed-off-by: Qais Yousef (Google) <qyousef@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 kernel/sched/fair.c |   21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6375,10 +6375,13 @@ select_idle_capacity(struct task_struct
 	return best_cpu;
 }
 
-static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
+static inline bool asym_fits_cpu(unsigned long util,
+				 unsigned long util_min,
+				 unsigned long util_max,
+				 int cpu)
 {
 	if (static_branch_unlikely(&sched_asym_cpucapacity))
-		return fits_capacity(task_util, capacity_of(cpu));
+		return util_fits_cpu(util, util_min, util_max, cpu);
 
 	return true;
 }
@@ -6389,7 +6392,7 @@ static inline bool asym_fits_capacity(un
 static int select_idle_sibling(struct task_struct *p, int prev, int target)
 {
 	struct sched_domain *sd;
-	unsigned long task_util;
+	unsigned long task_util, util_min, util_max;
 	int i, recent_used_cpu;
 
 	/*
@@ -6398,11 +6401,13 @@ static int select_idle_sibling(struct ta
 	 */
 	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
 		sync_entity_load_avg(&p->se);
-		task_util = uclamp_task_util(p);
+		task_util = task_util_est(p);
+		util_min = uclamp_eff_value(p, UCLAMP_MIN);
+		util_max = uclamp_eff_value(p, UCLAMP_MAX);
 	}
 
 	if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
-	    asym_fits_capacity(task_util, target))
+	    asym_fits_cpu(task_util, util_min, util_max, target))
 		return target;
 
 	/*
@@ -6410,7 +6415,7 @@ static int select_idle_sibling(struct ta
 	 */
 	if (prev != target && cpus_share_cache(prev, target) &&
 	    (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
-	    asym_fits_capacity(task_util, prev))
+	    asym_fits_cpu(task_util, util_min, util_max, prev))
 		return prev;
 
 	/*
@@ -6425,7 +6430,7 @@ static int select_idle_sibling(struct ta
 	    in_task() &&
 	    prev == smp_processor_id() &&
 	    this_rq()->nr_running <= 1 &&
-	    asym_fits_capacity(task_util, prev)) {
+	    asym_fits_cpu(task_util, util_min, util_max, prev)) {
 		return prev;
 	}
 
@@ -6436,7 +6441,7 @@ static int select_idle_sibling(struct ta
 	    cpus_share_cache(recent_used_cpu, target) &&
 	    (available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
 	    cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
-	    asym_fits_capacity(task_util, recent_used_cpu)) {
+	    asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
 		/*
 		 * Replace recent_used_cpu with prev as it is a potential
 		 * candidate for the next wake:


Patches currently in stable-queue which might be from stable-owner@xxxxxxxxxxxxxxx are

queue-5.10/sched-fair-detect-capacity-inversion.patch
queue-5.10/sched-uclamp-make-cpu_overutilized-use-util_fits_cpu.patch
queue-5.10/sched-uclamp-make-select_idle_capacity-use-util_fits_cpu.patch
queue-5.10/sched-uclamp-fix-fits_capacity-check-in-feec.patch
queue-5.10/sched-fair-consider-capacity-inversion-in-util_fits_cpu.patch
queue-5.10/sched-fair-fixes-for-capacity-inversion-detection.patch
queue-5.10/sched-uclamp-make-asym_fits_capacity-use-util_fits_cpu.patch
queue-5.10/sched-uclamp-fix-a-uninitialized-variable-warnings.patch
queue-5.10/sched-uclamp-make-task_fits_capacity-use-util_fits_cpu.patch
queue-5.10/sched-uclamp-cater-for-uclamp-in-find_energy_efficient_cpu-s-early-exit-condition.patch
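
[Editor's note, not part of the patch: for readers following the series, below is a
minimal standalone C sketch of the idea the change implements: clamping the task's
utilization with its uclamp min/max before checking whether it fits a CPU, rather
than comparing raw task_util alone as the old asym_fits_capacity() did. The helper
name util_fits_cpu_model(), the example capacities and the uclamp values are
illustrative assumptions; the real util_fits_cpu() in kernel/sched/fair.c also
accounts for the migration margin and capacity pressure, which are omitted here.]

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Old-style check, as used by asym_fits_capacity(): raw utilization vs.
     * CPU capacity with ~20% headroom (mirrors the kernel's fits_capacity()
     * macro: (cap) * 1280 < (max) * 1024).
     */
    static bool fits_capacity(unsigned long util, unsigned long capacity)
    {
            return util * 1280 < capacity * 1024;
    }

    /*
     * Rough model of what asym_fits_cpu()/util_fits_cpu() add on top: clamp
     * the request with the task's uclamp min/max before comparing, so a
     * boosted task refuses a CPU that is too small for its boost and a
     * capped task can still fit on one.
     */
    static bool util_fits_cpu_model(unsigned long util, unsigned long util_min,
                                    unsigned long util_max, unsigned long capacity)
    {
            unsigned long clamped = util;

            if (clamped < util_min)
                    clamped = util_min;
            if (clamped > util_max)
                    clamped = util_max;

            return fits_capacity(clamped, capacity);
    }

    int main(void)
    {
            /* A small task (util=100) boosted to uclamp_min=768, on a little CPU (capacity=446). */
            printf("raw fit:     %d\n", fits_capacity(100, 446));                  /* 1: appears to fit      */
            printf("clamped fit: %d\n", util_fits_cpu_model(100, 768, 1024, 446)); /* 0: boost does not fit  */
            return 0;
    }

This is why the backport switches the select_idle_sibling() call sites from the raw
task_util check to asym_fits_cpu() with the task's effective uclamp min/max passed in.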