On Mon, Aug 12, 2013 at 07:02:57PM +0200, Oleg Nesterov wrote:
> This is only theoretical, but after try_to_wake_up(p) was changed
> to check p->state under p->pi_lock the code like
>
>	__set_current_state(TASK_INTERRUPTIBLE);
>	schedule();
>
> can miss a signal. This is the special case of wait-for-condition,
> it relies on try_to_wake_up/schedule interaction and thus it does
> not need mb() between __set_current_state() and if(signal_pending).
>
> However, this __set_current_state() can move into the critical
> section protected by rq->lock, now that try_to_wake_up() takes
> another lock we need to ensure that it can't be reordered with
> "if (signal_pending(current))" check inside that section.
>
> The patch is actually one-liner, it simply adds smp_wmb() before
> spin_lock_irq(rq->lock). This is what try_to_wake_up() already
> does by the same reason.
>
> We turn this wmb() into the new helper, smp_mb__before_spinlock(),
> for better documentation and to allow the architectures to change
> the default implementation.
>
> While at it, kill smp_mb__after_lock(), it has no callers.
>
> Perhaps we can also add smp_mb__before/after_spinunlock() for
> prepare_to_wait().
>
> Signed-off-by: Oleg Nesterov <oleg@xxxxxxxxxx>

Thanks!

> +/*
> + * Despite its name it doesn't necessarily has to be a full barrier.
> + * It should only guarantee that a STORE before the critical section
> + * can not be reordered with a LOAD inside this section.
> + * So the default implementation simply ensures that a STORE can not
> + * move into the critical section, smp_wmb() should serialize it with
> + * another STORE done by spin_lock().
> + */
> +#ifndef smp_mb__before_spinlock
> +#define smp_mb__before_spinlock()	smp_wmb()
> #endif

I would have expected mention of the ACQUIRE of the lock keeping the LOAD
inside the locked section.
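
For reference, a simplified sketch of the two sides of the race, trimmed to
the relevant loads and stores (not the exact kernel code; locking details and
the rest of __schedule()/try_to_wake_up() are elided):

	/* Sleeper: the wait-for-condition pattern quoted above. */
	__set_current_state(TASK_INTERRUPTIBLE);	/* STORE current->state */
	schedule();

	/* Inside __schedule(), simplified: */
	smp_mb__before_spinlock();		/* the one-liner being added */
	raw_spin_lock_irq(&rq->lock);
	if (signal_pending_state(prev->state, prev))	/* LOAD of pending flag */
		prev->state = TASK_RUNNING;	/* stay runnable, handle signal */
	else
		deactivate_task(rq, prev, DEQUEUE_SLEEP);	/* go to sleep */

	/* Waker: signal delivery -> signal_wake_up() -> try_to_wake_up(),
	 * which now checks ->state under ->pi_lock rather than rq->lock.
	 * TIF_SIGPENDING has just been set, then: */
	smp_mb__before_spinlock();		/* formerly a bare smp_wmb() */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	if (!(p->state & state))		/* LOAD of p->state */
		goto out;			/* sees TASK_RUNNING: no wakeup */

Without the barrier in __schedule() the STORE to ->state can be reordered
with the signal_pending LOAD inside the rq->lock section: the sleeper reads
"no signal pending" before its TASK_INTERRUPTIBLE store is visible, the
waker sets the flag and then reads the stale TASK_RUNNING under ->pi_lock,
so neither side acts and the signal is missed.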
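
And spelling out the ordering argument behind the fallback definition (a
paraphrase of the proposed comment plus the ACQUIRE half, not the patch
text itself):

	current->state = TASK_INTERRUPTIBLE;	/* STORE before the barrier */
	smp_wmb();				/* orders that STORE before the
						 * STORE spin_lock() does to take
						 * the lock, so it cannot move
						 * down into the section */
	raw_spin_lock_irq(&rq->lock);		/* ACQUIRE: keeps the LOAD below
						 * from moving up out of the
						 * section */
	if (signal_pending_state(prev->state, prev))	/* LOAD stays inside */
		/* ... */;

The proposed comment documents the first half; the second half is the
ACQUIRE property mentioned above.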