Re: [PATCH -v4 5/7] locking, arch: Update spin_unlock_wait()

On Thu, Jun 02, 2016 at 11:11:07PM +0800, Boqun Feng wrote:
> On Thu, Jun 02, 2016 at 04:44:24PM +0200, Peter Zijlstra wrote:
> > Let me go ponder that some :/
> > 
> 
> An initial thought of the fix is making queued_spin_unlock_wait() an
> atomic-nop too:
> 
> static inline void queued_spin_unlock_wait(struct qspinlock *lock)
> {
> 	struct __qspinlock *l = (struct __qspinlock *)lock;
> 	
> 	/* spin while the locked byte is non-zero; the successful
> 	 * cmpxchg() that ends the loop doubles as the atomic write */
> 	while (cmpxchg(&l->locked, 0, 0))
> 		cpu_relax();
> }
> 
> This could make queued_spin_unlock_wait() a WRITE; with an smp_mb()
> preceding it, it would act like a RELEASE, which can be paired with
> spin_lock().
> 
> Just food for thought. ;-)
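
As a rough sketch of the pairing proposed above (the variable X, the
lock L and the reader side are made up for illustration, not taken
from the patch), the intent would be:

	int X;
	spinlock_t L;

	/* CPU0 */
	WRITE_ONCE(X, 1);
	smp_mb();
	spin_unlock_wait(&L);	/* cmpxchg() variant: also a store */

	/* CPU1 */
	int r;

	spin_lock(&L);
	r = READ_ONCE(X);	/* if this acquisition comes after CPU0's
				 * unlock_wait() store, the intent is r == 1 */
	spin_unlock(&L);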

Not sure that'll actually work. The qspinlock store is completely
unordered and not part of an ll/sc or anything like that.

Doing competing stores might even result in losing it entirely.
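
(For reference, the store in question is presumably the slow-path
hand-off, which at this point in kernel/locking/qspinlock.c looks
something like:

	static __always_inline void set_locked(struct qspinlock *lock)
	{
		struct __qspinlock *l = (struct __qspinlock *)lock;

		/* plain store; no ll/sc or other atomic RMW involved */
		WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
	}

so a cmpxchg() in unlock_wait() would be a competing store to the
same byte.)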

But I think I got something.. Lemme go test it :-)

