On Thu, 28 Jun 2018, Andrea Parri wrote:

> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -114,29 +114,8 @@ do { \
>  #endif /*arch_spin_is_contended*/
>  
>  /*
> - * This barrier must provide two things:
> - *
> - *   - it must guarantee a STORE before the spin_lock() is ordered against a
> - *     LOAD after it, see the comments at its two usage sites.
> - *
> - *   - it must ensure the critical section is RCsc.
> - *
> - * The latter is important for cases where we observe values written by other
> - * CPUs in spin-loops, without barriers, while being subject to scheduling.
> - *
> - * CPU0                CPU1                CPU2
> - *
> - *                     for (;;) {
> - *                       if (READ_ONCE(X))
> - *                         break;
> - *                     }
> - * X=1
> - *                     <sched-out>
> - *                                         <sched-in>
> - *                                         r = X;
> - *
> - * without transitivity it could be that CPU1 observes X!=0 breaks the loop,
> - * we get migrated and CPU2 sees X==0.
> + * smp_mb__after_spinlock() provides a full memory barrier between po-earlier
> + * lock acquisitions and po-later memory accesses.

How about saying "provides the equivalent of a full memory barrier"?

The point being that smp_mb__after_spinlock doesn't have to provide an
actual barrier; it just has to ensure the behavior is the same as if a
full barrier were present.

Alan
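[For readers following the thread: a minimal sketch of the first guarantee
named in the removed comment, i.e. that a STORE made before spin_lock() is
ordered against a LOAD made after it once smp_mb__after_spinlock() is
executed. The lock, the variables and the pr_info() call are hypothetical
and not taken from the patch; they only illustrate the scheduler-style
wait/wake pattern the comment alludes to.]

#include <linux/spinlock.h>
#include <linux/printk.h>

static DEFINE_SPINLOCK(s);	/* hypothetical lock                     */
static int flag;		/* hypothetical shared variables         */
static int cond;

static void store_lock_load(void)
{
	WRITE_ONCE(flag, 1);		/* STORE before the spin_lock() ...   */
	spin_lock(&s);
	smp_mb__after_spinlock();	/* ... must be ordered against ...     */
	if (READ_ONCE(cond))		/* ... this LOAD after the lock.       */
		pr_info("cond observed after taking s\n");
	spin_unlock(&s);
}

[Whether the primitive does this with a real barrier instruction or merely
behaves as if one were present is exactly the distinction raised above.]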