On Tue, Jun 17, 2014 at 04:36:15PM -0400, Konrad Rzeszutek Wilk wrote:
> On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:
> > Because the qspinlock needs to touch a second cacheline; add a pending
> > bit and allow a single in-word spinner before we punt to the second
> > cacheline.
>
> Could you add this in the description please:
>
> And by second cacheline we mean the local 'node'. That is, the
> mcs_nodes[0] and mcs_nodes[idx].

Those should be the very same cacheline :), but yes, I can add something
like that.

> Perhaps it might be better then to split this out in the header file,
> as this is not trying to be slowpath code but rather a
> pre-slowpath "let's try one more cmpxchg" in case the unlocker
> has just released the lock.
>
> So something like:
>
> diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
> index e8a7ae8..29cc9c7 100644
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
>   */
>  static __always_inline void queue_spin_lock(struct qspinlock *lock)
>  {
> -	u32 val;
> +	u32 val, new, old;
>
>  	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
>  	if (likely(val == 0))
>  		return;
> +
> +	/* One more attempt - but if we fail, mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL) /* YEEY! */
> +			return;
> +		val = old;
> +	}
>  	queue_spin_lock_slowpath(lock, val);
>  }

I think that's too big for an inline function.
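[Editor's note: for readers following the pending-bit discussion, here is a
minimal userspace sketch of the idea, written with C11 atomics rather than
the kernel's atomic_t API. The my_* names, the bit layout, and the trivial
slowpath (which just spins instead of queueing on the per-CPU MCS nodes)
are illustrative assumptions, not the actual kernel code.]

#include <stdatomic.h>
#include <stdint.h>

/* Illustrative stand-ins for _Q_LOCKED_VAL and _Q_PENDING_VAL. */
#define MY_Q_LOCKED_VAL   1U         /* bit 0: lock is held */
#define MY_Q_PENDING_VAL  (1U << 8)  /* bit 8: one in-word spinner */

struct my_qspinlock {
	_Atomic uint32_t val;
};

/*
 * Trivial stand-in for the real queue_spin_lock_slowpath(): spin until
 * the whole word is free. The kernel version queues on MCS nodes and
 * encodes a tail in the upper bits instead.
 */
static void my_slowpath(struct my_qspinlock *lock, uint32_t val)
{
	(void)val;
	for (;;) {
		uint32_t zero = 0;
		if (atomic_compare_exchange_weak(&lock->val, &zero,
						 MY_Q_LOCKED_VAL))
			return;
	}
}

static inline void my_queue_spin_lock(struct my_qspinlock *lock)
{
	uint32_t val = 0;

	/* Uncontended fast path: 0 -> LOCKED in one cmpxchg. */
	if (atomic_compare_exchange_strong(&lock->val, &val,
					   MY_Q_LOCKED_VAL))
		return;

	/*
	 * One more attempt: the lock is held but nobody is pending yet,
	 * so try to claim the pending bit and spin in-word. On failure,
	 * cmpxchg has reloaded the current value into val for us.
	 */
	if (val == MY_Q_LOCKED_VAL &&
	    atomic_compare_exchange_strong(&lock->val, &val,
					   MY_Q_LOCKED_VAL | MY_Q_PENDING_VAL)) {
		/* Wait for the current holder to drop the lock bit. */
		while (atomic_load(&lock->val) & MY_Q_LOCKED_VAL)
			;
		/*
		 * Clear PENDING and set LOCKED in one atomic add; any
		 * tail bits set by queued waiters are preserved.
		 */
		atomic_fetch_add(&lock->val,
				 MY_Q_LOCKED_VAL - MY_Q_PENDING_VAL);
		return;
	}

	my_slowpath(lock, val);
}

static inline void my_queue_spin_unlock(struct my_qspinlock *lock)
{
	/* Drop only the locked bit; pending/tail bits stay intact. */
	atomic_fetch_sub(&lock->val, MY_Q_LOCKED_VAL);
}

[The fetch_add of (LOCKED - PENDING) hands the lock to the pending spinner
without disturbing the rest of the word, which is what lets the pending bit
share a cacheline with the lock byte instead of touching the MCS nodes.]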