- sched-improve-smpnice-load-balancing-when-load-per-task.patch removed from -mm tree

The patch titled
     sched: improve smpnice load balancing when load per task imbalanced
has been removed from the -mm tree.  Its filename is
     sched-improve-smpnice-load-balancing-when-load-per-task.patch
This patch was dropped because it was folded into sched-implement-smpnice.patch

------------------------------------------------------
Subject: sched: improve smpnice load balancing when load per task imbalanced
From: Peter Williams <pwil3058@xxxxxxxxxxxxxx>


Problem:

On a 2 CPU system, if cpu-0 is running two high priority tasks and cpu-1 is
running one normal priority task, the current code cannot detect the
imbalance: imbalance will always be < busiest_load_per_task, max_load -
this_load will be < 2 * busiest_load_per_task, and pwr_move will be <=
pwr_now, so no task is ever moved.
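
For concreteness (the figures below are illustrative assumptions, not taken
from the patch, and load averaging is ignored): suppose each high priority
task on cpu-0 has weight 3*SCHED_LOAD_SCALE and the normal priority task on
cpu-1 has weight SCHED_LOAD_SCALE.  Then max_load = 6*SCHED_LOAD_SCALE,
this_load = SCHED_LOAD_SCALE and busiest_load_per_task = 3*SCHED_LOAD_SCALE,
so max_load - this_load = 5*SCHED_LOAD_SCALE fails the existing
">= 2 * busiest_load_per_task" test even though moving one high priority
task would clearly improve the balance.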

Solution:

Modify the assessment of small imbalances to take into account the relative
sizes of busiest_load_per_task and this_load_per_task.  This exploits the
fact that if the difference between the loads is greater than
busiest_load_per_task, and busiest_load_per_task is greater than
this_load_per_task, then moving busiest_load_per_task worth of load from
busiest to this will improve the distribution of weighted load.

Note: This patch makes no change to load balancing in the case where all
tasks are nice==0.
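
As a rough check of the changed test (a standalone userspace sketch, not
kernel code; SCHED_LOAD_SCALE and the task weights are the same illustrative
assumptions used in the example above):

/*
 * Sketch of the old vs. new small-imbalance test with hypothetical
 * weights: cpu-0 runs two tasks of weight 3*SCHED_LOAD_SCALE, cpu-1
 * runs one nice==0 task of weight SCHED_LOAD_SCALE.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 128UL	/* stand-in for the kernel constant */

int main(void)
{
        unsigned long busiest_load_per_task = 3 * SCHED_LOAD_SCALE;
        unsigned long max_load = 2 * busiest_load_per_task;     /* cpu-0 */
        unsigned long this_load = SCHED_LOAD_SCALE;             /* cpu-1 */
        unsigned long this_load_per_task = this_load;
        unsigned long this_nr_running = 1;
        unsigned int imbn = 2;

        /* Old test: never true here, because max_load - this_load is
         * always < 2 * busiest_load_per_task in this scenario. */
        if (max_load - this_load >= busiest_load_per_task * 2)
                printf("old check: would move one task\n");
        else
                printf("old check: no task moved\n");

        /* New test: if the destination's per-task load is smaller than
         * the busiest CPU's, moving a single task already improves the
         * weighted load distribution, so require only one
         * busiest_load_per_task worth of difference. */
        if (this_nr_running) {
                this_load_per_task /= this_nr_running;
                if (busiest_load_per_task > this_load_per_task)
                        imbn = 1;
        } else
                this_load_per_task = SCHED_LOAD_SCALE;

        if (max_load - this_load >= busiest_load_per_task * imbn)
                printf("new check: move busiest_load_per_task (%lu)\n",
                       busiest_load_per_task);
        else
                printf("new check: no task moved\n");

        return 0;
}

With these weights the old test refuses to move anything while the new test
selects busiest_load_per_task worth of load, i.e. one of the two high
priority tasks, matching the intent described above.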

Signed-off-by: Peter Williams <pwil3058@xxxxxxxxxxxxxx>
Cc: "Chen, Kenneth W" <kenneth.w.chen@xxxxxxxxx>
Cc: "Siddha, Suresh B" <suresh.b.siddha@xxxxxxxxx>
Acked-by: Ingo Molnar <mingo@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 kernel/sched.c |   14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff -puN kernel/sched.c~sched-improve-smpnice-load-balancing-when-load-per-task kernel/sched.c
--- devel/kernel/sched.c~sched-improve-smpnice-load-balancing-when-load-per-task	2006-06-09 15:22:29.000000000 -0700
+++ devel-akpm/kernel/sched.c	2006-06-09 15:22:29.000000000 -0700
@@ -2197,8 +2197,16 @@ find_busiest_group(struct sched_domain *
 	if (*imbalance < busiest_load_per_task) {
 		unsigned long pwr_now = 0, pwr_move = 0;
 		unsigned long tmp;
+		unsigned int imbn = 2;
 
-		if (max_load - this_load >= busiest_load_per_task*2) {
+		if (this_nr_running) {
+			this_load_per_task /= this_nr_running;
+			if (busiest_load_per_task > this_load_per_task)
+				imbn = 1;
+		} else
+			this_load_per_task = SCHED_LOAD_SCALE;
+
+		if (max_load - this_load >= busiest_load_per_task * imbn) {
 			*imbalance = busiest_load_per_task;
 			return busiest;
 		}
@@ -2211,10 +2219,6 @@ find_busiest_group(struct sched_domain *
 
 		pwr_now += busiest->cpu_power *
 			min(busiest_load_per_task, max_load);
-		if (this_nr_running)
-			this_load_per_task /= this_nr_running;
-		else
-			this_load_per_task = SCHED_LOAD_SCALE;
 		pwr_now += this->cpu_power *
 			min(this_load_per_task, this_load);
 		pwr_now /= SCHED_LOAD_SCALE;
_

Patches currently in -mm which might be from pwil3058@xxxxxxxxxxxxxx are

origin.patch
sched-implement-smpnice.patch
sched-improve-smpnice-load-balancing-when-load-per-task.patch
sched-modify-move_tasks-to-improve-load-balancing-outcomes.patch
sched-avoid-unnecessarily-moving-highest-priority-task-move_tasks.patch
sched-avoid-unnecessarily-moving-highest-priority-task-move_tasks-fix-2.patch
sched-uninline-task_rq_lock.patch
sched-add-above-background-load-function.patch
pi-futex-scheduler-support-for-pi.patch
pi-futex-rt-mutex-tester-fix.patch

