On Wed, Feb 1, 2012 at 1:33 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> So I was talking to Paul yesterday and he mentioned how the SRCU sync
> primitive has to use extra synchronize_sched() calls in order to avoid
> smp_rmb() calls in the srcu_read_{un,}lock() calls.

So that's probably a bad optimization these days, simply because
smp_rmb() is totally free on x86. And on other architectures, it is
*usually* a fairly cheap pipeline sync. But those other architectures
mostly don't really matter, outside of ARM.

> Now memory barriers are usually explained as observable order between
> two (or more) unrelated variables, as Documentation/memory-barriers.txt
> does in great detail.
>
> What I couldn't find in there though, is what happens when both
> variables are on the same cacheline. The "The effects of the CPU cache"
> and "Cache coherency" sections are closest but leave me wanting on this
> point.
>
> Can we get some implicit behaviour from being on the same cacheline? Or
> can this memory access queue still totally wreck the game?

At least on alpha, the cacheline itself is subpartitioned into sectors,
and accesses to different parts of the same cacheline can go to
different sectors and literally have ordering issues, because a write
from another CPU will update the sectors individually. This is where
the insane "smp_read_barrier_depends()" comes from, iirc.

So no, you cannot assume that a single cacheline is somehow "atomic"
and inherently ordered.

Also, even if you were to find an atomic sub-chunk: if you need an
smp_rmb(), what else would guarantee that the CPU core wouldn't
re-order things to do the second read first, then lose the cacheline,
re-read it, and then do the first read?

So the reason smp_rmb() is free on x86 is that x86 won't do that kind
of re-ordering: either the uarch won't re-order the cache accesses of
reads wrt each other in the first place, or the uarch makes sure that
cachelines stay around until the instructions have been retired in
order. But other architectures that do need smp_rmb() may well
re-order loads wildly even if they share a cacheline.

That said, smp_rmb() and smp_wmb() are usually supposed to be *much*
cheaper than a full barrier. Of course, various architectures can get
it totally wrong, so..

                        Linus
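
P.S. To make the "sharing a cacheline doesn't save you" point
concrete, here is the classic message-passing pattern, sketched in
kernel style (untested, and the struct/function names are made up --
only smp_wmb()/smp_rmb()/ACCESS_ONCE() are the real primitives), with
both variables deliberately packed into one cacheline:

	/* Both fields sit in one cacheline -- which buys us nothing. */
	static struct {
		int data;
		int flag;
	} s ____cacheline_aligned;

	void writer(void)			/* runs on CPU 0 */
	{
		s.data = 42;
		smp_wmb();			/* order data store before flag store */
		ACCESS_ONCE(s.flag) = 1;
	}

	void reader(void)			/* runs on CPU 1 */
	{
		if (ACCESS_ONCE(s.flag)) {
			smp_rmb();		/* order flag load before data load */
			/*
			 * With both barriers this cannot fire.  Without
			 * the smp_rmb(), a weakly ordered CPU may see
			 * flag == 1 but stale data, same cacheline or
			 * not.
			 */
			WARN_ON(s.data != 42);
		}
	}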
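
And the alpha sector business is exactly why even a *data-dependent*
load needs help there -- roughly the pattern (again a sketch, names
made up) that smp_read_barrier_depends() and rcu_dereference() exist
for:

	static struct foo {
		int field;
	} *gp;					/* the published pointer */

	void publish(struct foo *p)		/* writer */
	{
		p->field = 1;
		smp_wmb();			/* order init before publish */
		ACCESS_ONCE(gp) = p;
	}

	int consume(void)			/* reader */
	{
		struct foo *p = ACCESS_ONCE(gp);

		if (!p)
			return 0;
		smp_read_barrier_depends();	/* no-op everywhere but alpha */
		/*
		 * Without the barrier, alpha's split cache sectors can
		 * return a stale p->field even though the dependent
		 * pointer load "happened first".
		 */
		return p->field;
	}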