Re: [PATCH v6 4/5] MCS Lock: Barrier corrections

On Fri, Nov 22, 2013 at 11:06:41AM -0800, Linus Torvalds wrote:
> On Fri, Nov 22, 2013 at 10:49 AM, Paul E. McKenney
> <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> >
> > You see, my problem is not the "crazy ordering" DEC Alpha, Itanium,
> > PowerPC, or even ARM.  It is really obvious what instructions to use in
> > a stiffened-up smp_store_release() for those guys: "mb" for DEC Alpha,
> > "st.rel" for Itanium, "sync" for PowerPC, and "dmb" for ARM.  Believe it
> > or not, my problem is instead with good old tightly ordered x86.
> >
> > We -could- just put an mfence into x86's smp_store_release() and
> > be done with it
> 
> Why would you bother?  The *acquire* has a memory barrier. End of
> story. On x86, it has to (since otherwise a load inside the locked
> region could be re-ordered wrt the write that takes the lock).

I am sorry, but that is not always correct.  For example, in the contended
case for Tim Chen's MCS queued locks, the x86 acquisition-side handoff
code does -not- contain any stores or memory-barrier instructions.
Here is that portion of the arch_mcs_spin_lock() code, along with the
x86 definition of smp_load_acquire():

+       while (!(smp_load_acquire(&node->locked)))                      \
+               arch_mutex_cpu_relax();                                 \

+#define smp_load_acquire(p)                                            \
+({                                                                     \
+       typeof(*p) ___p1 = ACCESS_ONCE(*p);                             \
+       compiletime_assert_atomic_type(*p);                             \
+       barrier();                                                      \
+       ___p1;                                                          \
+})

No stores, no memory-barrier instructions.

Of course, the fact that the handoff code contains no stores means that
on x86 the critical section cannot leak out, even without a memory
barrier.  That is the easy part.  The hard part comes if we want
unlock+lock to act as a full memory barrier for the MCS lock from the
viewpoint of code not holding that lock.  We clearly cannot rely on the
non-existent memory barrier in the acquisition handoff code.
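
To make the property I am after concrete, here is a rough litmus-style
sketch.  The shared variables X and Y, the registers r1 and r2, and the
three-CPU arrangement are made up for illustration; they are not taken
from Tim's patch.  arch_mcs_spin_unlock() is the release-side handoff,
which is not quoted in this message (a rough sketch of it appears at
the end):

	struct mcs_spinlock *lock = NULL;	/* MCS tail pointer */
	struct mcs_spinlock node0, node1;	/* per-acquirer queue nodes */
	int X, Y;				/* made-up shared variables */
	int r1, r2;

	void cpu0(void)
	{
		arch_mcs_spin_lock(&lock, &node0);
		ACCESS_ONCE(X) = 1;
		arch_mcs_spin_unlock(&lock, &node0);	/* hands off to cpu1 */
	}

	void cpu1(void)
	{
		arch_mcs_spin_lock(&lock, &node1);	/* spins on node1.locked */
		r1 = ACCESS_ONCE(Y);
		arch_mcs_spin_unlock(&lock, &node1);
	}

	void cpu2(void)		/* observer, never touches the lock */
	{
		ACCESS_ONCE(Y) = 1;
		smp_mb();
		r2 = ACCESS_ONCE(X);
	}

If unlock+lock really does act as a full memory barrier, then in the
execution where cpu1 takes the lock via the queued handoff from cpu0,
the outcome r1 == 0 && r2 == 0 is forbidden, just as it would be had
cpu0 and cpu1 been a single CPU with an smp_mb() between the store to X
and the load from Y.  Acquire/release ordering by itself does not
obviously forbid that outcome on all architectures, which is why the
handoff path is the interesting one.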

And yes, there is a full barrier implied by the xchg() further up in
arch_mcs_spin_lock(), shown in full below, but that barrier is before
the handoff code, so that xchg() cannot have any effect on the handoff.
That xchg() therefore cannot force unlock+lock to act as a full memory
barrier in the contended queue-handoff case.
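
For completeness, the release side of the handoff is in the same boat
on x86.  If I remember the x86 piece of the smp_store_release()/
smp_load_acquire() series correctly, smp_store_release() is nothing
more than a compiler barrier followed by a plain store:

	#define smp_store_release(p, v)					\
	do {								\
		compiletime_assert_atomic_type(*p);			\
		barrier();						\
		ACCESS_ONCE(*p) = (v);					\
	} while (0)

So in the contended case there is no fence instruction anywhere between
the old owner's critical section and the new owner's critical section,
which is why I want a real argument here rather than a hand-wave.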

> Basically, any time you think you need to add a memory barrier on x86,
> you should go "I'm doing something wrong". It's that simple.

It -appears- that the MCS queue handoff code is one of the many cases
where we don't need a memory barrier on x86, even if we do want MCS
unlock+lock to be a full memory barrier.  But I wouldn't call it simple.
I -think- we do have a proof, but I don't yet totally trust it.

							Thanx, Paul


>                   Linus

------------------------------------------------------------------------

+#define arch_mcs_spin_lock(lock, node)                                 \
+{                                                                      \
+       struct mcs_spinlock *prev;                                      \
+                                                                       \
+       /* Init node */                                                 \
+       node->locked = 0;                                               \
+       node->next   = NULL;                                            \
+                                                                       \
+       /* xchg() provides a memory barrier */                          \
+       prev = xchg(lock, node);                                        \
+       if (likely(prev == NULL)) {                                     \
+               /* Lock acquired */                                     \
+               return;                                                 \
+       }                                                               \
+       ACCESS_ONCE(prev->next) = node;                                 \
+       /*                                                              \
+        * Wait until the lock holder passes the lock down.             \
+        * Using smp_load_acquire() provides a memory barrier that      \
+        * ensures subsequent operations happen after the lock is       \
+        * acquired.                                                    \
+        */                                                             \
+       while (!(smp_load_acquire(&node->locked)))                      \
+               arch_mutex_cpu_relax();                                 \
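
For reference, the release-side handoff is not quoted above.  From
memory it looks roughly like the following; this is a sketch rather
than a verbatim copy of Tim's patch, so details such as the uncontended
cmpxchg() path may differ:

	#define arch_mcs_spin_unlock(lock, node)			\
	{								\
		struct mcs_spinlock *next = ACCESS_ONCE(node->next);	\
									\
		if (likely(!next)) {					\
			/* No successor: try to release outright. */	\
			if (likely(cmpxchg(lock, node, NULL) == node))	\
				return;					\
			/* A successor is enqueueing; wait for it. */	\
			while (!(next = ACCESS_ONCE(node->next)))	\
				arch_mutex_cpu_relax();			\
		}							\
		/* Hand the lock to the successor. */			\
		smp_store_release(&next->locked, 1);			\
	}

The line that matters for this discussion is the final
smp_store_release(), which pairs with the smp_load_acquire() in the
spin loop above.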
