Patch "sched/fair: Make select_idle_cpu() more aggressive" has been added to the 4.9-stable tree

This is a note to let you know that I've just added the patch titled

    sched/fair: Make select_idle_cpu() more aggressive

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sched-fair-make-select_idle_cpu-more-aggressive.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From foo@baz Tue Dec 12 13:26:17 CET 2017
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Wed, 1 Mar 2017 11:24:35 +0100
Subject: sched/fair: Make select_idle_cpu() more aggressive

From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>


[ Upstream commit 4c77b18cf8b7ab37c7d5737b4609010d2ceec5f0 ]

Kitsunyan reported desktop latency issues on his Celeron 887 because
of commit:

  1b568f0aabf2 ("sched/core: Optimize SCHED_SMT")

... even though his CPU doesn't do SMT.

The effect of running the SMT code on a !SMT part is basically a more
aggressive select_idle_cpu(). Removing the avg condition fixed things
for him.

I also know FB likes this test gone, even though other workloads like
having it.

For now, take it out by default, until we get a better idea.
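
To make the change below easier to read, here is a minimal userspace sketch of the
bail-out being gated; the standalone function and its name are illustrative only
(the real logic lives inline in select_idle_cpu() in kernel/sched/fair.c):

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative sketch, not the kernel code: the early bail-out that the
 * patch puts behind the SIS_AVG_CPU feature bit.
 *
 * avg_idle:    average idle time of the waking CPU's runqueue, in ns
 * avg_cost:    average cost of a previous idle-CPU scan of the LLC, in ns
 * sis_avg_cpu: the new feature bit; defaults to false after this patch
 */
bool should_skip_llc_scan(uint64_t avg_idle, uint64_t avg_cost,
			  bool sis_avg_cpu)
{
	/*
	 * Previously this comparison was unconditional: when the expected
	 * idle headroom (with a large /512 fuzz factor) was below the scan
	 * cost, select_idle_cpu() returned -1 without scanning.  With
	 * SIS_AVG_CPU off by default, the LLC scan now always proceeds,
	 * i.e. the "more aggressive" behaviour.
	 */
	return sis_avg_cpu && (avg_idle / 512) < avg_cost;
}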

Reported-by: kitsunyan <kitsunyan@xxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Chris Mason <clm@xxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Mike Galbraith <umgwanakikbuti@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 kernel/sched/fair.c     |    2 +-
 kernel/sched/features.h |    5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5451,7 +5451,7 @@ static int select_idle_cpu(struct task_s
 	 * Due to large variance we need a large fuzz factor; hackbench in
 	 * particularly is sensitive here.
 	 */
-	if ((avg_idle / 512) < avg_cost)
+	if (sched_feat(SIS_AVG_CPU) && (avg_idle / 512) < avg_cost)
 		return -1;
 
 	time = local_clock();
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -51,6 +51,11 @@ SCHED_FEAT(NONTASK_CAPACITY, true)
  */
 SCHED_FEAT(TTWU_QUEUE, true)
 
+/*
+ * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
+ */
+SCHED_FEAT(SIS_AVG_CPU, false)
+
 #ifdef HAVE_RT_PUSH_IPI
 /*
  * In order to avoid a thundering herd attack of CPUs that are
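
The new SCHED_FEAT() entry plugs into the usual scheduler feature machinery:
features.h is expanded twice to build an enum of bit positions and a default
bitmask, and sched_feat() tests the corresponding bit (with CONFIG_SCHED_DEBUG
the bit can be flipped at runtime via /sys/kernel/debug/sched_features, e.g. by
writing SIS_AVG_CPU or NO_SIS_AVG_CPU).  A simplified, self-contained sketch of
that mechanism, assuming the non-jump-label variant and flattening the file
layout for illustration:

#include <stdbool.h>
#include <stdio.h>

/* features.h fragment: one entry per feature, default value as 2nd argument */
#define SCHED_FEATURES \
	SCHED_FEAT(TTWU_QUEUE, true) \
	SCHED_FEAT(SIS_AVG_CPU, false)	/* added by this patch */

/* First expansion: an enum of bit positions */
#define SCHED_FEAT(name, enabled) __SCHED_FEAT_##name,
enum { SCHED_FEATURES __SCHED_FEAT_NR };
#undef SCHED_FEAT

/* Second expansion: the default feature bitmask */
#define SCHED_FEAT(name, enabled) (1UL << __SCHED_FEAT_##name) * (enabled) |
static const unsigned long sysctl_sched_features = SCHED_FEATURES 0;
#undef SCHED_FEAT

/* The test used by select_idle_cpu() */
#define sched_feat(x) (sysctl_sched_features & (1UL << __SCHED_FEAT_##x))

int main(void)
{
	/* SIS_AVG_CPU defaults to off, so the early bail-out is not taken */
	printf("SIS_AVG_CPU=%d TTWU_QUEUE=%d\n",
	       !!sched_feat(SIS_AVG_CPU), !!sched_feat(TTWU_QUEUE));
	return 0;
}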


Patches currently in stable-queue which might be from peterz@xxxxxxxxxxxxx are

queue-4.9/smp-hotplug-move-step-cpuhp_ap_smpcfd_dying-to-the-correct-place.patch
queue-4.9/efi-esrt-use-memunmap-instead-of-kfree-to-free-the-remapping.patch
queue-4.9/x86-hpet-prevent-might-sleep-splat-on-resume.patch
queue-4.9/efi-move-some-sysfs-files-to-be-read-only-by-root.patch
queue-4.9/x86-platform-uv-bau-fix-hub-errors-by-remove-initial-write-to-sw-ack-register.patch
queue-4.9/x86-mpx-selftests-fix-up-weird-arrays.patch
queue-4.9/blk-mq-initialize-mq-kobjects-in-blk_mq_init_allocated_queue.patch
queue-4.9/x86-selftests-add-clobbers-for-int80-on-x86_64.patch
queue-4.9/jump_label-invoke-jump_label_test-via-early_initcall.patch
queue-4.9/sched-fair-make-select_idle_cpu-more-aggressive.patch
