Re: [PATCH v6 4/5] MCS Lock: Barrier corrections

On Tue, Nov 26, 2013 at 10:59:45AM +0100, Peter Zijlstra wrote:
> On Mon, Nov 25, 2013 at 03:52:52PM -0800, Paul E. McKenney wrote:
> > On Mon, Nov 25, 2013 at 07:27:15PM +0100, Peter Zijlstra wrote:
> > > On Mon, Nov 25, 2013 at 10:02:50AM -0800, Paul E. McKenney wrote:
> > > > And if the two locks are different, then the guarantee applies only
> > > > when the unlock and lock are on the same CPU, in which case, as Linus
> > > > noted, the xchg() on entry to the slow path does the job for us.
> > > 
> > > But in that case we rely on the fact that the thing is part of a
> > > composite and we should no longer call it load_acquire, because frankly
> > > it doesn't have acquire semantics anymore because the read can escape
> > > out.
> > 
> > Actually, load-acquire and store-release are only required to provide
> > ordering in the threads/CPUs doing the load-acquire/store-release
> > operations.  It is just that we require something stronger than minimal
> > load-acquire/store-release to make a Linux-kernel lock.
> 
> I suspect we're talking past one another here; but our Document
> describes ACQUIRE/RELEASE semantics such that
> 
>   RELEASE
>   ACQUIRE
> 
> matches a full barrier, regardless on whether it is the same lock or
> not.

Ah, got it!

> If you now want to weaken this definition, then that needs consideration
> because we actually rely on things like
> 
> spin_unlock(l1);
> spin_lock(l2);
> 
> being full barriers.
> 
> Now granted, for lock operations we have actual atomic ops in between
> which would cure x86, but it would leave us confused with the barrier
> semantics.
> 
> So please; either: 
> 
> A) we have the strong ACQUIRE/RELEASE semantics as currently described;
>    and therefore any RELEASE+ACQUIRE pair must form a full barrier; and
>    our proposed primitives are non-compliant and need strengthening.
> 
> B) we go fudge about with the definitions.

Another approach would be to have local and global variants, so that
the local variants have acquire/release semantics that are guaranteed
to be visible only in the involved threads (sufficient for circular
buffers) while the global ones are visible globally, thus sufficient
for queued locks.

> But given the current description of our ACQUIRE barrier, we simply
> cannot claim the proposed primitives are good on x86 IMO.
> 
> Also, instead of the smp_store_release() I would argue that
> smp_load_acquire() is the one that needs the full barrier, even on PPC.
> 
> Because our ACQUIRE dis-allows loads/stores leaking out upwards, and
> both TSO and PPC lwsync allow just that, so the smp_load_acquire() is
> the one that needs the full barrier.

You lost me on this one.  Here is x86 ACQUIRE for X:

	r1 = ACCESS_ONCE(X);
	<loads and stores>

Since x86 does not reorder loads with later loads or stores, this should
be sufficient.

For powerpc:

	r1 = ACCESS_ONCE(X);
	lwsync;
	<loads and stores>

And lwsync does not allow prior loads to be reordered with later loads or
stores, so this should also be sufficient.

In both cases, a RELEASE+ACQUIRE provides a full barrier as long as
RELEASE has the right stuff in it.

So what am I missing?

							Thanx, Paul

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .