On Mon, 16 Mar 2009, Arun R Bharadwaj wrote:

> @@ -627,6 +628,16 @@ __mod_timer(struct timer_list *timer, un
>
>         new_base = __get_cpu_var(tvec_bases);
>
> +       current_cpu = smp_processor_id();
> +       preferred_cpu = get_nohz_load_balancer();
> +       if (get_sysctl_timer_migration() && idle_cpu(current_cpu) &&
> +                       !pinned && preferred_cpu != -1) {
> +               new_base = per_cpu(tvec_bases, preferred_cpu);
> +               timer_set_base(timer, new_base);
> +               timer->expires = expires;
> +               internal_add_timer(new_base, timer);
> +               goto out_unlock;
> +       }

Err. This change breaks the timer->base logic. Why can't it just select
the base and use the existing code?

> @@ -198,8 +200,16 @@ switch_hrtimer_base(struct hrtimer *time
>  {
>         struct hrtimer_clock_base *new_base;
>         struct hrtimer_cpu_base *new_cpu_base;
> +       int current_cpu, preferred_cpu;
> +
> +       current_cpu = smp_processor_id();
> +       preferred_cpu = get_nohz_load_balancer();
> +       if (get_sysctl_timer_migration() && !pinned && preferred_cpu != -1
> +                       && idle_cpu(current_cpu))
> +               new_cpu_base = &per_cpu(hrtimer_bases, preferred_cpu);
> +       else
> +               new_cpu_base = &__get_cpu_var(hrtimer_bases);
>
> -       new_cpu_base = &__get_cpu_var(hrtimer_bases);
>         new_base = &new_cpu_base->clock_base[base->index];

Hmm. This can lead to high latencies when you enqueue the timer on the
other CPU, simply because we cannot reprogram the timer hardware on the
other CPU in the CONFIG_HIGH_RES=y case.

Let's assume we are on CPU0 and try to enqueue the timer on CPU1, where
the next timer expiry is 5ms away. The timer which we enqueue is due in
500us. So you introduce 4.5ms of latency.

Thanks,

        tglx
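
For context on the first objection ("select the base and use the existing
code"): __mod_timer() already contains a base-switch path that handles the
running-timer check and the per-base locking which the quoted patch bypasses
with its goto. A minimal sketch of what that restructuring might look like,
assuming the helpers from the quoted patch (get_nohz_load_balancer(),
get_sysctl_timer_migration()) and the __mod_timer() locals of that era; this
is an illustration, not the patch that was eventually merged:

        /*
         * Inside __mod_timer(), after lock_timer_base(): pick the
         * target CPU first, then fall through to the existing
         * base-switch code instead of open-coding the enqueue.
         */
        int cpu = smp_processor_id();

        if (get_sysctl_timer_migration() && !pinned && idle_cpu(cpu)) {
                int preferred_cpu = get_nohz_load_balancer();

                if (preferred_cpu != -1)
                        cpu = preferred_cpu;
        }
        new_base = per_cpu(tvec_bases, cpu);

        if (base != new_base) {
                /*
                 * Existing logic: the base can only be switched while
                 * the timer is not the running timer on the old base,
                 * otherwise del_timer_sync() cannot work correctly.
                 */
                if (likely(base->running_timer != timer)) {
                        /* See the comment in lock_timer_base() */
                        timer_set_base(timer, NULL);
                        spin_unlock(&base->lock);
                        base = new_base;
                        spin_lock(&base->lock);
                        timer_set_base(timer, base);
                }
        }

        timer->expires = expires;
        internal_add_timer(base, timer);

The point is that timer->base is only ever changed under the proper locks
and running_timer check, in one place, rather than being short-circuited.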
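
On the hrtimer side, one way to address the latency tglx describes would be
to migrate a timer only when it expires after the target CPU's next
programmed event, and fall back to the local base otherwise. The sketch
below assumes the CONFIG_HIGH_RES_TIMERS fields of that era (hres_active and
expires_next in struct hrtimer_cpu_base); the helper name is invented for
illustration:

        /*
         * Hypothetical helper: returns nonzero when the timer would
         * expire before the target CPU's next clock event. We cannot
         * reprogram the remote timer hardware from here, so enqueueing
         * there would delay expiry; keep the timer local instead.
         */
        static int hrtimer_expires_too_early(struct hrtimer *timer,
                                struct hrtimer_clock_base *new_base)
        {
                ktime_t expires;

                /* In low res mode the next tick picks it up anyway */
                if (!new_base->cpu_base->hres_active)
                        return 0;

                expires = ktime_sub(hrtimer_get_expires(timer),
                                    new_base->offset);
                return expires.tv64 <=
                        new_base->cpu_base->expires_next.tv64;
        }

        /*
         * In switch_hrtimer_base(): prefer the idle load balancer
         * CPU, but not when that would delay this timer's expiry.
         */
        if (get_sysctl_timer_migration() && !pinned &&
            preferred_cpu != -1 && idle_cpu(current_cpu)) {
                new_cpu_base = &per_cpu(hrtimer_bases, preferred_cpu);
                if (hrtimer_expires_too_early(timer,
                                &new_cpu_base->clock_base[base->index]))
                        new_cpu_base = &__get_cpu_var(hrtimer_bases);
        } else
                new_cpu_base = &__get_cpu_var(hrtimer_bases);

In tglx's example, such a check would keep the 500us timer on CPU0, where
the hardware can be reprogrammed, instead of parking it behind CPU1's
event that is 5ms away.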