On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
> +static __always_inline void
> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	ACCESS_ONCE(l->locked_pending) = 1;
> +}

> @@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>  	 * we're pending, wait for the owner to go away.
>  	 *
>  	 * *,1,1 -> *,1,0
> +	 *
> +	 * this wait loop must be a load-acquire such that we match the
> +	 * store-release that clears the locked bit and create lock
> +	 * sequentiality; this because not all try_clear_pending_set_locked()
> +	 * implementations imply full barriers.

You renamed the function referred to in the above comment.

>  	 */
> -	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
> 		arch_mutex_cpu_relax();
>
>  	/*
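
For reference, a minimal sketch of the acquire/release pairing that
comment describes, assuming the __qspinlock overlay and _Q_LOCKED_MASK
from this series; sketch_unlock() and sketch_wait_for_owner() are
hypothetical names for illustration, not functions from the patch:

	/*
	 * Unlock side: the store-release that clears the locked byte
	 * publishes every store made inside the critical section.
	 */
	static inline void sketch_unlock(struct qspinlock *lock)
	{
		struct __qspinlock *l = (void *)lock;

		smp_store_release(&l->locked, 0);
	}

	/*
	 * Pending-waiter side: the matching load-acquire guarantees
	 * that once we observe the locked byte clear, we also observe
	 * everything the previous owner wrote while holding the lock.
	 */
	static inline u32 sketch_wait_for_owner(struct qspinlock *lock)
	{
		u32 val;

		while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
			arch_mutex_cpu_relax();

		return val;
	}

This matters because clear_pending_set_locked() as quoted above is a
plain ACCESS_ONCE() store with no implied barrier, so the acquire on
the wait loop is what orders the new owner's critical section against
the previous one.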