On Sun, Dec 25, 2016 at 5:16 PM, Nicholas Piggin <npiggin@xxxxxxxxx> wrote:
>
> I did actually play around with that. I could not get my skylake
> to forward the result from a lock op to a subsequent load (the
> latency was the same whether you use lock ; andb or lock ; andl
> (32 cycles for my test loop) whereas with non-atomic versions I
> was getting about 15 cycles for andb vs 2 for andl.

Yes, interesting. It does look like the locked ops don't end up having
the partial write issue, and the size of the op doesn't matter.

But it's definitely the case that the write buffer hit immediately
after the atomic read-modify-write ends up slowing things down, so the
profile oddity isn't just a profiling artifact.

I wrote a stupid test program that did an atomic increment, and then
read either the same value or an adjacent value in memory (so the same
instruction sequence, the difference just being which memory location
the read accessed).

Reading the same value after the atomic update was *much* more
expensive than reading the adjacent value, so it causes some kind of
pipeline hiccup (by about 50% of the cost of the atomic op itself:
iow, "atomic op followed by a read of the same location" was over 1.5x
slower than "atomic op followed by a read of another location").

So the atomic ops don't serialize things entirely, but they *hate*
having the value read (regardless of size) right after being updated,
because it causes some kind of nasty pipeline issue.

A cmpxchg does seem to avoid the issue.

                Linus
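[Editor's note: the message above describes but does not include the
test program. Below is a minimal sketch of the kind of microbenchmark
being discussed, not Linus's actual code: a locked increment followed
by a load of either the same word or an adjacent word in the same
cache line. The names (run, ITERS, buf) and the rdtsc-based timing are
assumptions for illustration; absolute cycle counts will vary by core,
and a real measurement would want serializing fences around rdtsc.]

/* x86-64, GCC/Clang. Build with: cc -O2 atomic_rmw_read.c */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>          /* __rdtsc() */

#define ITERS 100000000UL

/* Align to one cache line so buf[0] and buf[1] share a line:
 * any timing difference is then a pipeline effect, not a miss. */
static volatile uint32_t buf[16] __attribute__((aligned(64)));

static uint64_t run(volatile uint32_t *inc, volatile uint32_t *load)
{
        uint64_t start = __rdtsc();

        for (unsigned long i = 0; i < ITERS; i++) {
                /* atomic read-modify-write of *inc */
                __asm__ __volatile__("lock incl %0" : "+m" (*inc));
                /* read back either the same or the adjacent word;
                 * volatile keeps the load from being optimized away */
                (void)*load;
        }
        return __rdtsc() - start;
}

int main(void)
{
        uint64_t same = run(&buf[0], &buf[0]);  /* read updated word */
        uint64_t adj  = run(&buf[0], &buf[1]);  /* read neighbor word */

        printf("same word:     %.1f cycles/iter\n", (double)same / ITERS);
        printf("adjacent word: %.1f cycles/iter\n", (double)adj / ITERS);
        return 0;
}

Per the description above, the "same word" variant would be expected
to come out noticeably slower (on the order of 1.5x) than the
"adjacent word" one, even though the instruction sequence is identical.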