This is a note to let you know that I've just added the patch titled

    sched/fair: Simplify wake_affine() for the single socket case

to the 4.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sched-fair-simplify-wake_affine-for-the-single-socket-case.patch
and it can be found in the queue-4.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From 7d894e6e34a5cdd12309c7e4a3f830277ad4b7bf Mon Sep 17 00:00:00 2001
From: Rik van Riel <riel@xxxxxxxxxx>
Date: Fri, 23 Jun 2017 12:55:28 -0400
Subject: sched/fair: Simplify wake_affine() for the single socket case

From: Rik van Riel <riel@xxxxxxxxxx>

commit 7d894e6e34a5cdd12309c7e4a3f830277ad4b7bf upstream.

When 'this_cpu' and 'prev_cpu' are in the same socket, select_idle_sibling()
will do its thing regardless of the return value of wake_affine().

Just return true and don't look at all the other things.
Signed-off-by: Rik van Riel <riel@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: jhladky@xxxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Link: http://lkml.kernel.org/r/20170623165530.22514-3-riel@xxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 kernel/sched/fair.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5399,6 +5399,13 @@ static int wake_affine(struct sched_doma
 	this_load = target_load(this_cpu, idx);
 
 	/*
+	 * Common case: CPUs are in the same socket, and select_idle_sibling()
+	 * will do its thing regardless of what we return:
+	 */
+	if (cpus_share_cache(prev_cpu, this_cpu))
+		return true;
+
+	/*
 	 * If sync wakeup then subtract the (maximum possible)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
@@ -5986,11 +5993,15 @@ select_task_rq_fair(struct task_struct *
 
 	if (affine_sd) {
 		sd = NULL; /* Prefer wake_affine over balance flags */
-		if (cpu != prev_cpu && wake_affine(affine_sd, p, prev_cpu, sync))
+		if (cpu == prev_cpu)
+			goto pick_cpu;
+
+		if (wake_affine(affine_sd, p, prev_cpu, sync))
 			new_cpu = cpu;
 	}
 
 	if (!sd) {
+pick_cpu:
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);


Patches currently in stable-queue which might be from riel@xxxxxxxxxx are

queue-4.12/sched-numa-hide-numa_wake_affine-from-up-build.patch
queue-4.12/sched-numa-use-down_read_trylock-for-the-mmap_sem.patch
queue-4.12/sched-core-implement-new-approach-to-scale-select_idle_cpu.patch
queue-4.12/sched-fair-remove-effective_load.patch
queue-4.12/sched-numa-implement-numa-node-level-wake_affine.patch
queue-4.12/sched-fair-simplify-wake_affine-for-the-single-socket-case.patch
queue-4.12/sched-fair-cpumask-export-for_each_cpu_wrap.patch
queue-4.12/sched-numa-override-part-of-migrate_degrades_locality-when-idle-balancing.patch