+ sched-fix-newly-idle-load-balance-in-case-of-smt.patch added to -mm tree

The patch titled
     sched: fix newly idle load balance in case of SMT
has been added to the -mm tree.  Its filename is
     sched-fix-newly-idle-load-balance-in-case-of-smt.patch

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this

------------------------------------------------------
Subject: sched: fix newly idle load balance in case of SMT
From: "Siddha, Suresh B" <suresh.b.siddha@xxxxxxxxx>

In the presence of SMT, newly idle balance was never happening for the multi-core
and SMP domains (even when both logical siblings were idle).

If thread 0 is already idle and thread 1 is about to go idle, the newly idle
load balance always thinks that one of the threads is not idle and skips
the newly idle load balance for the multi-core and SMP domains.

This is because of the idle_cpu() macro, which checks whether the current
process on a cpu is the idle process.  But that is not yet the case for the
thread doing load_balance_newidle(): it is still the current task, even though
it has already been dequeued.

Fix this by using the runqueue's nr_running field instead of idle_cpu().  Also
skip the 'only one idle cpu in the group will do the load balancing' logic in
the newly idle case.

Signed-off-by: Suresh Siddha <suresh.b.siddha@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Nick Piggin <nickpiggin@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 kernel/sched.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff -puN kernel/sched.c~sched-fix-newly-idle-load-balance-in-case-of-smt kernel/sched.c
--- a/kernel/sched.c~sched-fix-newly-idle-load-balance-in-case-of-smt
+++ a/kernel/sched.c
@@ -2235,7 +2235,7 @@ find_busiest_group(struct sched_domain *
 
 			rq = cpu_rq(i);
 
-			if (*sd_idle && !idle_cpu(i))
+			if (*sd_idle && rq->nr_running)
 				*sd_idle = 0;
 
 			/* Bias balancing toward cpus of our domain */
@@ -2257,9 +2257,11 @@ find_busiest_group(struct sched_domain *
 		/*
 		 * First idle cpu or the first cpu(busiest) in this sched group
 		 * is eligible for doing load balancing at this and above
-		 * domains.
+		 * domains. In the newly idle case, we will allow all the cpu's
+		 * to do the newly idle load balance.
 		 */
-		if (local_group && balance_cpu != this_cpu && balance) {
+		if (idle != CPU_NEWLY_IDLE && local_group &&
+		    balance_cpu != this_cpu && balance) {
 			*balance = 0;
 			goto ret;
 		}
_

Patches currently in -mm which might be from suresh.b.siddha@xxxxxxxxx are

define-new-percpu-interface-for-shared-data-version-4.patch
use-the-new-percpu-interface-for-shared-data-version-4.patch
sched-fix-newly-idle-load-balance-in-case-of-smt.patch
sched-fix-the-all-pinned-logic-in-load_balance_newidle.patch
x86_64-irq-check-remote-irr-bit-before-migrating-level-triggered-irq-v3.patch
intel-iommu-dmar-detection-and-parsing-logic.patch
intel-iommu-pci-generic-helper-function.patch
intel-iommu-clflush_cache_range-now-takes-size-param.patch
intel-iommu-iova-allocation-and-management-routines.patch
intel-iommu-intel-iommu-driver.patch
intel-iommu-avoid-memory-allocation-failures-in-dma-map-api-calls.patch
intel-iommu-intel-iommu-cmdline-option-forcedac.patch
intel-iommu-dmar-fault-handling-support.patch
intel-iommu-iommu-gfx-workaround.patch
intel-iommu-iommu-floppy-workaround.patch
