On 06/15/2016 03:08 PM, Waiman Long wrote:
On 06/15/2016 01:12 PM, Peter Zijlstra wrote:
On Wed, Jun 15, 2016 at 09:56:59AM -0700, Davidlohr Bueso wrote:
On Tue, 14 Jun 2016, Waiman Long wrote:
+++ b/kernel/locking/osq_lock.c
@@ -115,7 +115,7 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 	 * cmpxchg in an attempt to undo our queueing.
 	 */
-	while (!READ_ONCE(node->locked)) {
+	while (!smp_load_acquire(&node->locked)) {
Hmm, this being a polling path, that barrier can get pretty expensive,
and last I checked it was unnecessary:
I think he'll come to rely on it later on.
In any case, it's fairly simple to cure: just add
smp_acquire__after_ctrl_dep() at the end. If we bail because of
need_resched(), we don't need the acquire, I think.
Yes, I only need the acquire barrier when the locking is successful.
Thanks for the suggestion. I will make the change accordingly.
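Roughly, I'd keep the plain READ_ONCE() in the polling loop and only
issue the acquire once the lock is actually taken, something like the
sketch below (just an illustration of the suggestion, not the final
patch; the unqueue label and cpu_relax_lowlatency() are the ones
already in kernel/locking/osq_lock.c):

        while (!READ_ONCE(node->locked)) {
                /*
                 * If we need to reschedule, bail and try to undo the
                 * queueing with cmpxchg; no acquire ordering is needed
                 * on this failure path.
                 */
                if (need_resched())
                        goto unqueue;

                cpu_relax_lowlatency();
        }
        /*
         * The loop exit is a control dependency on node->locked;
         * upgrade it to ACQUIRE so the critical section cannot leak
         * above the load that observed the lock handoff.
         */
        smp_acquire__after_ctrl_dep();
        return true;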
BTW, when will the smp_acquire__after_ctrl_dep() patch go into the tip
tree? My patch will have a dependency on that when I make the change.
Cheers,
Longman