There are 11 interpretations of the requirements described in the header
comment for smp_mb__after_spinlock(): one for each LKMM maintainer, and
one currently encoded in the Cat file.  Stick to the latter (until a more
satisfactory solution is presented/agreed).

Signed-off-by: Andrea Parri <andrea.parri@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Will Deacon <will.deacon@xxxxxxx>
Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
---
 include/linux/spinlock.h | 25 ++-----------------------
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 1e8a464358384..6737ee2381d50 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -114,29 +114,8 @@ do { \
 #endif /*arch_spin_is_contended*/
 
 /*
- * This barrier must provide two things:
- *
- *   - it must guarantee a STORE before the spin_lock() is ordered against a
- *     LOAD after it, see the comments at its two usage sites.
- *
- *   - it must ensure the critical section is RCsc.
- *
- * The latter is important for cases where we observe values written by other
- * CPUs in spin-loops, without barriers, while being subject to scheduling.
- *
- * CPU0			CPU1			CPU2
- *
- *			for (;;) {
- *			  if (READ_ONCE(X))
- *			    break;
- *			}
- * X=1
- *			<sched-out>
- *						<sched-in>
- *						r = X;
- *
- * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
- * we get migrated and CPU2 sees X==0.
+ * smp_mb__after_spinlock() provides a full memory barrier between po-earlier
+ * lock acquisitions and po-later memory accesses.
  *
  * Since most load-store architectures implement ACQUIRE with an smp_mb() after
  * the LL/SC loop, they need no further barriers. Similarly all our TSO
--
2.7.4
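
For illustration only, not part of the patch: a minimal sketch of the
store-buffering pattern behind the removed comment's first requirement,
i.e. a STORE before spin_lock() being ordered against a LOAD after
smp_mb__after_spinlock().  The lock s, the variables X, Y, r0, r1 and the
cpu0()/cpu1() wrappers are hypothetical; spin_lock(), spin_unlock(),
smp_mb(), smp_mb__after_spinlock(), READ_ONCE() and WRITE_ONCE() are
existing kernel primitives.

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(s);	/* hypothetical lock */
	static int X, Y;		/* hypothetical shared variables */
	static int r0, r1;		/* values observed by each CPU */

	/* CPU0: STORE to X before the lock, LOAD from Y after the barrier. */
	static void cpu0(void)
	{
		WRITE_ONCE(X, 1);
		spin_lock(&s);
		smp_mb__after_spinlock();	/* upgrade the ACQUIRE to a full barrier */
		r0 = READ_ONCE(Y);
		spin_unlock(&s);
	}

	/* CPU1: the matching STORE/LOAD pair, separated by a full barrier. */
	static void cpu1(void)
	{
		WRITE_ONCE(Y, 1);
		smp_mb();
		r1 = READ_ONCE(X);
	}

If cpu0() and cpu1() run concurrently, the outcome r0 == 0 && r1 == 0 is
forbidden, since each CPU's STORE is ordered before its LOAD by a full
barrier; with a plain spin_lock() (ACQUIRE only) that outcome would be
allowed.  This is the kind of ordering the removed comment's "two usage
sites" rely on.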