On Wed, Oct 14, 2015 at 11:59 AM, Christoph Lameter <cl@xxxxxxxxx> wrote:
> On Wed, 14 Oct 2015, Linus Torvalds wrote:
>
>> And "schedule_delayed_work()" uses WORK_CPU_UNBOUND.
>
> Uhhh. Someone changed that?

It always did. This is from 2007:

  int fastcall schedule_delayed_work(struct delayed_work *dwork,
                                     unsigned long delay)
  {
          timer_stats_timer_set_start_info(&dwork->timer);
          return queue_delayed_work(keventd_wq, dwork, delay);
  }

  ...

  int fastcall queue_delayed_work(struct workqueue_struct *wq,
                                  struct delayed_work *dwork,
                                  unsigned long delay)
  {
          timer_stats_timer_set_start_info(&dwork->timer);

          if (delay == 0)
                  return queue_work(wq, &dwork->work);

          return queue_delayed_work_on(-1, wq, dwork, delay);
  }

  ...

  int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
                            struct delayed_work *dwork,
                            unsigned long delay)
  {
          ....
          timer->function = delayed_work_timer_fn;

          if (unlikely(cpu >= 0))
                  add_timer_on(timer, cpu);
          else
                  add_timer(timer);
  }

  ...

  void delayed_work_timer_fn(unsigned long __data)
  {
          int cpu = smp_processor_id();
          ...
          __queue_work(per_cpu_ptr(wq->cpu_wq, cpu), &dwork->work);
  }

so notice how it always just used "add_timer()", and then queued the work on
whatever CPU's workqueue the timer ended up running on.

Now, 99.9% of the time the timer is just added to the current CPU's queues,
so yes, in practice it ended up running on the same CPU almost all the time.
There are exceptions (timers can get moved around, and active timers stay on
the CPU they were scheduled on when they get updated, rather than being moved
to the current CPU), but they are hard to hit.

But the code clearly didn't do that "same CPU" thing intentionally, and just
going by the naming of things I would also say that it was never implied.

              Linus