The patch titled
     sched: remove staggering of load balancing
has been added to the -mm tree.  Its filename is
     sched-remove-staggering-of-load-balancing.patch

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this

------------------------------------------------------
Subject: sched: remove staggering of load balancing
From: Christoph Lameter <clameter@xxxxxxx>

Timer interrupts already are staggered.  We do not need an additional layer
of time staggering for short load balancing actions that take a reasonably
small portion of the time slice.

For load balancing on large sched_domains we will add a serialization later
that avoids concurrent load balance operations and thus has the same effect
as load staggering.

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Cc: Peter Williams <pwil3058@xxxxxxxxxxxxxx>
Cc: Nick Piggin <nickpiggin@xxxxxxxxxxxx>
Cc: Christoph Lameter <clameter@xxxxxxx>
Cc: "Siddha, Suresh B" <suresh.b.siddha@xxxxxxxxx>
Cc: "Chen, Kenneth W" <kenneth.w.chen@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 kernel/sched.c |   10 ++--------
 1 files changed, 2 insertions(+), 8 deletions(-)

diff -puN kernel/sched.c~sched-remove-staggering-of-load-balancing kernel/sched.c
--- a/kernel/sched.c~sched-remove-staggering-of-load-balancing
+++ a/kernel/sched.c
@@ -2841,16 +2841,10 @@ static void active_load_balance(struct r
  * Balancing parameters are set up in arch_init_sched_domains.
  */
 
-/* Don't have all balancing operations going off at once: */
-static inline unsigned long cpu_offset(int cpu)
-{
-	return jiffies + cpu * HZ / NR_CPUS;
-}
-
 static void
 rebalance_tick(int this_cpu, struct rq *this_rq, enum idle_type idle)
 {
-	unsigned long this_load, interval, j = cpu_offset(this_cpu);
+	unsigned long this_load, interval;
 	struct sched_domain *sd;
 	int i, scale;
 
@@ -2885,7 +2879,7 @@ rebalance_tick(int this_cpu, struct rq *
 		if (unlikely(!interval))
 			interval = 1;
 
-		if (j - sd->last_balance >= interval) {
+		if (jiffies - sd->last_balance >= interval) {
 			if (load_balance(this_cpu, this_rq, sd, idle)) {
 				/*
 				 * We've pulled tasks over so either we're no
_
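
To see concretely what the removed staggering did, here is a minimal
user-space sketch (not kernel code, and not part of the patch): it compares
the old check, which offsets "jiffies" by cpu * HZ / NR_CPUS, against the new
plain-jiffies check.  HZ, NR_CPUS and INTERVAL are assumed example values,
and last_balance is simplified to start at tick 0 rather than coming from a
real sched_domain.

/*
 * Minimal user-space sketch (not kernel code) of the effect of the removed
 * cpu_offset() staggering.  HZ, NR_CPUS and INTERVAL are assumed example
 * values; last_balance is simplified to start at tick 0 for every domain.
 */
#include <stdio.h>

#define HZ       250   /* assumed tick rate */
#define NR_CPUS    4   /* assumed number of CPUs */
#define INTERVAL 200   /* assumed balance interval, in ticks */

int main(void)
{
	unsigned long last_balance = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		unsigned long jiffies;

		/* Old check: compare an offset "jiffies" against last_balance. */
		for (jiffies = 0; ; jiffies++) {
			unsigned long j = jiffies + cpu * HZ / NR_CPUS;

			if (j - last_balance >= INTERVAL)
				break;
		}

		/* New check: plain jiffies, so every CPU fires at INTERVAL. */
		printf("cpu %d: old (staggered) check first fires at tick %lu, new check at tick %d\n",
		       cpu, jiffies, INTERVAL);
	}
	return 0;
}

With these assumed values the old check first fires at ticks 200, 138, 75 and
13 on CPUs 0-3 respectively, while the new check fires at tick 200 on every
CPU; that per-CPU spread is the staggering this patch removes, relying on the
already-staggered timer interrupts instead.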

Patches currently in -mm which might be from clameter@xxxxxxx are

memory-page-alloc-minor-cleanups.patch
memory-page-alloc-minor-cleanups-fix.patch
get-rid-of-zone_table.patch
deal-with-cases-of-zone_dma-meaning-the-first-zone.patch
get-rid-of-zone_table-fix-3.patch
introduce-config_zone_dma.patch
optional-zone_dma-in-the-vm.patch
optional-zone_dma-in-the-vm-no-gfp_dma-check-in-the-slab-if-no-config_zone_dma-is-set.patch
optional-zone_dma-in-the-vm-no-gfp_dma-check-in-the-slab-if-no-config_zone_dma-is-set-reduce-config_zone_dma-ifdefs.patch
optional-zone_dma-for-ia64.patch
remove-zone_dma-remains-from-parisc.patch
remove-zone_dma-remains-from-sh-sh64.patch
set-config_zone_dma-for-arches-with-generic_isa_dma.patch
zoneid-fix-up-calculations-for-zoneid_pgshift.patch
radix-tree-rcu-lockless-readside.patch
sched-domain-move-sched-group-allocations-to-percpu-area.patch
sched-avoid-taking-rq-lock-in-wake_priority_sleeper.patch
sched-remove-staggering-of-load-balancing.patch
sched-disable-interrupts-for-locking-in-load_balance.patch
sched-extract-load-calculation-from-rebalance_tick.patch
sched-move-idle-status-calculation-into-rebalance_tick.patch
sched-use-softirq-for-load-balancing.patch
sched-call-tasklet-less-frequently.patch
sched-add-option-to-serialize-load-balancing.patch
mm-only-sched-add-a-few-scheduler-event-counters.patch
zvc-support-nr_slab_reclaimable--nr_slab_unreclaimable-swap_prefetch.patch
reduce-max_nr_zones-swap_prefetch-remove-incorrect-use-of-zone_highmem.patch
numa-add-zone_to_nid-function-swap_prefetch.patch
readahead-state-based-method-aging-accounting.patch

-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html