Re: [PATCH v3 3/5] MCS Lock: Barrier corrections

Sorry about the HTML crap; the internet connection is too slow for my normal email habits, so I'm using my phone.

I think the barriers are still totally wrong for the locking functions.

Adding an smp_rmb() after waiting for the lock is pure BS: smp_rmb() only orders reads against reads, so writes in the locked region could still percolate out of the locked region.
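
To make that concrete, here is a sketch (annotating the patch's own lock slow path, not new code; "obj->field" is a made-up stand-in for any store inside the critical section) of the ordering smp_rmb() fails to give:

        /* Lock side, as in the patch below: */
        while (!ACCESS_ONCE(node->locked))  /* loads of node->locked */
                arch_mutex_cpu_relax();
        smp_rmb();                          /* orders reads vs reads ONLY */
        obj->field = 1;                     /* store in the critical section */

        /*
         * smp_rmb() gives no load->store ordering, so nothing in the
         * documented barrier semantics keeps this store from becoming
         * visible before node->locked is seen set - that is, before
         * the lock is actually held.
         */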

The thing is, you cannot do the memory ordering for locks in any sane generic way. Not using our current barrier system. On x86 (and many others) the smp_rmb() will work fine, because writes are never moved earlier. But on other architectures you really need an acquire to get a lock efficiently, with no separate barriers: the acquire needs to be on the instruction that does the lock.
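
For illustration only, a minimal self-contained sketch in C11 atomics (not the kernel's primitives; all names here are made up) of what "acquire on the locking instruction" means:

        #include <stdatomic.h>

        struct node {
                atomic_int locked;
        };

        static void wait_for_handoff(struct node *node)
        {
                /*
                 * The acquire is attached to the load that observes
                 * the handoff, so no later access - read or write -
                 * in the critical section can be reordered before it,
                 * and there is no separate barrier instruction to pay
                 * for.
                 */
                while (!atomic_load_explicit(&node->locked,
                                             memory_order_acquire))
                        ;       /* cpu_relax() in real code */
        }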

Same goes for unlock. On x86 any store is a fine unlock, but on other architectures you need a store with a release marker.
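
Again as a purely illustrative C11 sketch, continuing the made-up names from above:

        /*
         * Unlock side: the release is attached to the store that hands
         * the lock to the next waiter, so every access in the critical
         * section is ordered before it. On x86 any plain store already
         * behaves this way; on weaker architectures this becomes a
         * store-release instruction (e.g. stlr on arm64) rather than a
         * full barrier followed by a store.
         */
        atomic_store_explicit(&next->locked, 1, memory_order_release);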

So no amount of generic barriers will ever do this both correctly and efficiently. Sure, you can add full memory barriers and it will be "correct", but it will be unbearably slow and add totally unnecessary serialization. So *correct* locking will require architecture support.
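
One hedged sketch of what that architecture support could look like - a hypothetical hook, not an existing kernel interface: generic code supplies a fallback built from the portable (heavy) barriers, and an architecture with a real load-acquire overrides it in its asm headers:

        /* Hypothetical per-arch hook; the name is made up. */
        #ifndef arch_mcs_spin_wait
        #define arch_mcs_spin_wait(l)                                   \
        do {                                                            \
                while (!ACCESS_ONCE(*(l)))                              \
                        arch_mutex_cpu_relax();                         \
                smp_mb();       /* overly heavy, but portable */        \
        } while (0)
        #endif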

     Linus

On Nov 7, 2013 6:37 AM, "Tim Chen" <tim.c.chen@xxxxxxxxxxxxxxx> wrote:
This patch corrects the way memory barriers are used in the MCS lock
and removes the ones that are not needed. It also adds comments on all
barriers.

Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
Signed-off-by: Jason Low <jason.low2@xxxxxx>
Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
 include/linux/mcs_spinlock.h |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
index 96f14299..93d445d 100644
--- a/include/linux/mcs_spinlock.h
+++ b/include/linux/mcs_spinlock.h
@@ -36,16 +36,19 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
        node->locked = 0;
        node->next   = NULL;

+       /* xchg() provides a memory barrier */
        prev = xchg(lock, node);
        if (likely(prev == NULL)) {
                /* Lock acquired */
                return;
        }
        ACCESS_ONCE(prev->next) = node;
-       smp_wmb();
        /* Wait until the lock holder passes the lock down */
        while (!ACCESS_ONCE(node->locked))
                arch_mutex_cpu_relax();
+
+       /* Make sure subsequent operations happen after the lock is acquired */
+       smp_rmb();
 }

 /*
@@ -58,6 +61,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod

        if (likely(!next)) {
                /*
+                * cmpxchg() provides a memory barrier.
                 * Release the lock by setting it to NULL
                 */
                if (likely(cmpxchg(lock, node, NULL) == node))
@@ -65,9 +69,14 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
                /* Wait until the next pointer is set */
                while (!(next = ACCESS_ONCE(node->next)))
                        arch_mutex_cpu_relax();
+       } else {
+               /*
+                * Make sure all operations within the critical section
+                * happen before the lock is released.
+                */
+               smp_wmb();
        }
        ACCESS_ONCE(next->locked) = 1;
-       smp_wmb();
 }

 #endif /* __LINUX_MCS_SPINLOCK_H */
--
1.7.4.4



