Hello, Peter.

On Mon, Apr 25, 2016 at 06:22:01PM -0700, Peter Hurley wrote:
> This is the same bug I wrote about 2 yrs ago (but with the wrong fix).
>
> http://lkml.iu.edu/hypermail/linux/kernel/1402.2/04697.html
>
> Unfortunately I didn't have a reproducer at all :/

Ah, bummer.

> The atomic_long_xchg() patch has several benefits over the naked barrier:
>
> 1. set_work_pool_and_clear_pending() has the same requirements as
>    clear_work_data(); note that both require write barrier before and
>    full barrier after.

clear_work_data() is only used by __cancel_work_timer(), and there's no
following execution or anything there where rescheduling memory loads
can cause any issue.

> 2. xchg() et al imply full barrier before and full barrier after.
>
> 3. The naked barriers could be removed, while improving efficiency.
>    On x86, mov + mfence => xchg

It's unlikely to make any measurable difference.  Is xchg() actually
cheaper than store + rmb?

> 4. Maybe fixes other hidden bugs.
>    For example, I'm wondering if reordering with set_work_pwq/list_add_tail
>    would be a problem; ie., what if work is visible on the worklist _before_
>    data is initialized by set_work_pwq()?

The worklist is always accessed under the pool lock.  The barrier comes
into play only because we're using the bare PENDING bit for
synchronization.

I'm not necessarily against making all clearings of PENDING be followed
by a rmb or use xchg.  Reasons 2-4 are pretty weak tho.

Thanks.

-- 
tejun
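
For readers following along, here is a minimal sketch of the two shapes
being compared, simplified and paraphrased from kernel/workqueue.c; the
"_xchg" variant and its name are purely illustrative and are not the
actual patch under discussion:

    /*
     * Current shape (simplified): a plain store bracketed by explicit
     * barriers -- a write barrier before clearing PENDING and a full
     * barrier after it.
     */
    static void set_work_pool_and_clear_pending(struct work_struct *work,
                                                int pool_id)
    {
            /* order prior updates to @work before PENDING is cleared */
            smp_wmb();
            set_work_data(work,
                          (unsigned long)pool_id << WORK_OFFQ_POOL_SHIFT, 0);
            /* keep later memory accesses from being reordered before the clear */
            smp_mb();
    }

    /*
     * Illustrative alternative: atomic_long_xchg() implies a full
     * barrier both before and after the store, so the explicit
     * barriers above would go away.
     */
    static void set_work_pool_and_clear_pending_xchg(struct work_struct *work,
                                                     int pool_id)
    {
            atomic_long_xchg(&work->data,
                             (unsigned long)pool_id << WORK_OFFQ_POOL_SHIFT);
    }

The xchg() form is where the "mov + mfence => xchg" comparison above
comes from: the atomic op provides the full ordering by itself instead
of relying on separate barrier instructions.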