On Wed, 2016-02-03 at 12:06 -0500, Tejun Heo wrote:
> On Wed, Feb 03, 2016 at 06:01:53PM +0100, Mike Galbraith wrote:
> > Hm, so it's ok to queue work to an offline CPU?  What happens if it
> > doesn't come back for an eternity or two?
>
> Right now, it just loses affinity....

WRT affinity... somebody somewhere queues a delayed work: a timer is
started on CPUX, and the work is targeted at CPUX.  Now wash/rinse/repeat
mod_delayed_work() along with migrations.  Should __queue_delayed_work()
not refrain from altering dwork->cpu once it has been set?  (See the
P.S. below for the spot I mean.)

I'm also wondering why 22b886dd ("timers: Use proper base migration in
add_timer_on()") only applies to kernels >= 4.2.

<quote>
Regardless of the previous CPU a timer was on, add_timer_on()
currently simply sets timer->flags to the new CPU.  As the caller must
be seeing the timer as idle, this is locally fine, but the timer
leaving the old base while unlocked can lead to race conditions as
follows.

Let's say timer was on cpu 0.

cpu 0                                   cpu 1
-----------------------------------------------------------------------------
del_timer(timer) succeeds
                                        del_timer(timer)
                                          lock_timer_base(timer) locks cpu_0_base
add_timer_on(timer, 1)
  spin_lock(&cpu_1_base->lock)
  timer->flags set to cpu_1_base
  operates on @timer
                                          operates on @timer
</quote>

What's the difference between...

        timer->flags = (timer->flags & ~TIMER_BASEMASK) | cpu;

and...

        timer_set_base(timer, base);

...that makes that fix unneeded prior to 4.2?  We take the same locks in
< 4.2 kernels, so seemingly both can diddle the timer concurrently as in
the diagram above.  (The P.P.S. below shows the two call sites side by
side.)

	-Mike
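
P.S. The spot in __queue_delayed_work() I mean -- the tail of the
function as of ~4.4, quoted from memory, so treat it as a sketch rather
than verbatim source:

        dwork->wq = wq;
        dwork->cpu = cpu;                  /* overwritten on every queue */
        timer->expires = jiffies + delay;

        if (unlikely(cpu != WORK_CPU_UNBOUND))
                add_timer_on(timer, cpu);  /* pin the timer to @cpu */
        else
                add_timer(timer);          /* lands on the local CPU */

Each pass through here overwrites dwork->cpu with whatever the caller
passed, hence the question about leaving it alone once set.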
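
P.P.S. The two flavors of add_timer_on() in context, both paraphrased
from memory (exact surrounding lines approximate), which is what makes
me think the diagram above covers both:

        /* >= 4.2 before 22b886dd, kernel/time/timer.c */
        spin_lock_irqsave(&new_base->lock, flags);
        timer->flags = (timer->flags & ~TIMER_BASEMASK) | cpu;
        internal_add_timer(new_base, timer);
        spin_unlock_irqrestore(&new_base->lock, flags);

        /* < 4.2, kernel/timer.c */
        spin_lock_irqsave(&base->lock, flags);
        timer_set_base(timer, base);
        internal_add_timer(base, timer);
        spin_unlock_irqrestore(&base->lock, flags);

In both cases only the destination base's lock is held while the timer
is rebound; nothing holds the old base's lock, which looks like exactly
the window the changelog describes.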