Re: [PATCH v2 3/4] MCS Lock: Barrier corrections

On 11/06/2013 07:20 AM, Will Deacon wrote:
On Wed, Nov 06, 2013 at 05:44:42AM +0000, Figo.zhang wrote:
2013/11/6 Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
On Tue, 2013-11-05 at 18:37 +0000, Will Deacon wrote:
On Tue, Nov 05, 2013 at 05:42:36PM +0000, Tim Chen wrote:
diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
index 96f14299..93d445d 100644
--- a/include/linux/mcs_spinlock.h
+++ b/include/linux/mcs_spinlock.h
@@ -36,16 +36,19 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
     node->locked = 0;
     node->next   = NULL;

+   /* xchg() provides a memory barrier */
     prev = xchg(lock, node);
     if (likely(prev == NULL)) {
             /* Lock acquired */
             return;
     }
     ACCESS_ONCE(prev->next) = node;
-   smp_wmb();
     /* Wait until the lock holder passes the lock down */
     while (!ACCESS_ONCE(node->locked))
             arch_mutex_cpu_relax();
+
+   /* Make sure subsequent operations happen after the lock is acquired */
+   smp_rmb();
Ok, so this is an smp_rmb() because we assume that stores aren't speculated,
right? (i.e. the control dependency above is enough for stores to be ordered
with respect to taking the lock)...

  }

  /*
@@ -58,6 +61,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod

     if (likely(!next)) {
             /*
+            * cmpxchg() provides a memory barrier.
              * Release the lock by setting it to NULL
              */
             if (likely(cmpxchg(lock, node, NULL) == node))
@@ -65,9 +69,14 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
             /* Wait until the next pointer is set */
             while (!(next = ACCESS_ONCE(node->next)))
                     arch_mutex_cpu_relax();
+   } else {
+           /*
+            * Make sure all operations within the critical section
+            * happen before the lock is released.
+            */
+           smp_wmb();
...but I don't see what prevents reads inside the critical section from
moving across the smp_wmb() here.
This is to prevent any read in the next critical section
from creeping up before the writes in the previous
critical section have completed.
Understood, but an smp_wmb() doesn't provide any ordering guarantees with
respect to reads, hence why I think you need an smp_mb() here.

A major reason for the current design is to avoid the overhead of a full memory barrier on x86, which doesn't need it. I do agree that the current code may not be enough for other architectures. I would like to propose the following changes:

1) Move the lock/unlock functions to mcs_spinlock.c.
2) Define a pair of primitives - smp_mb__before_critical_section() and smp_mb__after_critical_section() - that fall back to smp_mb() if they are not defined in, for example, asm/processor.h.
3) Use the new primitives in place of the current smp_rmb() and smp_wmb() memory barriers.

That will allow each architecture to tailor what sort of memory barrier it wants to use. A rough sketch of the idea is below.
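To make the idea concrete, here is a minimal sketch of what the fallback definitions and their use in the lock/unlock paths could look like. The macro names and the asm/processor.h override mechanism are only illustrative of the proposal, not an actual patch:

#ifndef smp_mb__before_critical_section
#define smp_mb__before_critical_section()	smp_mb()
#endif

#ifndef smp_mb__after_critical_section
#define smp_mb__after_critical_section()	smp_mb()
#endif

/*
 * A strongly ordered architecture such as x86 could override these in
 * its asm/processor.h with something cheaper, e.g.:
 *
 *	#define smp_mb__before_critical_section()	smp_rmb()
 *	#define smp_mb__after_critical_section()	smp_wmb()
 */

In mcs_spin_lock(), the smp_rmb() after the spin loop would then become:

	/* Wait until the lock holder passes the lock down */
	while (!ACCESS_ONCE(node->locked))
		arch_mutex_cpu_relax();

	/* Order the critical section after the lock acquisition */
	smp_mb__before_critical_section();

and in mcs_spin_unlock(), the smp_wmb() before handing the lock to the
next waiter would become:

	/*
	 * Make sure all operations within the critical section
	 * happen before the lock is released.
	 */
	smp_mb__after_critical_section();

Every architecture that does not provide its own definitions gets a full
smp_mb() by default, while x86 keeps the lighter barriers it needs.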

Regards,
Longman




