On Mon, Aug 14, 2017 at 05:38:39PM +0900, Minchan Kim wrote:
> memory-barrier.txt always scares me. I have read it for a while
> and IIUC, it seems semantic of spin_unlock(&same_pte) would be
> enough without some memory-barrier inside mm_tlb_flush_nested.

Indeed, see the email I just sent. It is both spin_lock() and
spin_unlock() that we care about.

Aside from the semi-permeable barrier of these primitives, RCpc
ensures these orderings only work against the _same_ lock variable.

Let me try and explain the ordering for PPC (which is by far the
worst we have in this regard):

  spin_lock(lock)
  {
	while (test_and_set(lock))
		cpu_relax();
	lwsync();
  }

  spin_unlock(lock)
  {
	lwsync();
	clear(lock);
  }

Now LWSYNC has fairly 'simple' semantics, but with fairly horrible
ramifications. Consider LWSYNC to provide _local_ TSO ordering; this
means that it allows 'stores reordered after loads'.

For the spin_lock() that implies that all loads/stores inside the
lock do indeed stay in, but the ACQUIRE is only on the LOAD of the
test_and_set(). That is, the actual _set_ can leak in. After all, it
can reorder stores after loads (inside the lock).

For unlock it again means all loads/stores prior stay prior, and the
RELEASE is on the store clearing the lock state (nothing surprising
here).

Now for the _local_ part: the main take-away is that these orderings
are strictly CPU local. What makes the spinlock work across CPUs (as
we'd very much expect it to) is the address dependency on the lock
variable.

In order for the spin_lock() to succeed, it must observe the clear.
It is this link that crosses between the CPUs and builds the
ordering. But only the two CPUs involved agree on this order. A third
CPU not involved in this transaction can disagree on the order of
events.
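
For reference, the lock/unlock pseudocode above maps naturally onto
C11 atomics, where memory_order_acquire on the test-and-set and
memory_order_release on the clear correspond to the two LWSYNC
placements. This is only a minimal illustrative sketch (the names
spinlock_t, spin_lock, spin_unlock here are local stand-ins, not the
kernel's arch_spinlock implementation):

```c
#include <stdatomic.h>

/* Illustrative spinlock sketch, not the kernel's implementation. */
typedef struct {
	atomic_int locked;
} spinlock_t;

static void spin_lock(spinlock_t *l)
{
	/*
	 * test_and_set loop. The ACQUIRE ordering attaches to the
	 * LOAD half of the exchange; on PPC (lwsync after the loop)
	 * the store setting the lock can still leak into the
	 * critical section, as described above.
	 */
	while (atomic_exchange_explicit(&l->locked, 1,
					memory_order_acquire))
		; /* cpu_relax() */
}

static void spin_unlock(spinlock_t *l)
{
	/*
	 * RELEASE on the store clearing the lock: all prior
	 * loads/stores stay before the clear (lwsync before the
	 * store on PPC).
	 */
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

Note that, just as in the PPC discussion, the cross-CPU ordering here
comes from one CPU's acquire-exchange observing the other CPU's
release-store to the same lock variable; only those two CPUs are
guaranteed to agree on that order.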