I was asked for input on this, and after a few days digging through some
history, thought I'd comment. Hope you don't mind.

On Thu, Jun 25, 2020 at 10:57AM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2020 at 10:24:33AM +0200, Peter Zijlstra wrote:
> > On Thu, Jun 25, 2020 at 10:03:13AM +0200, Peter Zijlstra wrote:
> > > I'm sure Will will respond, but the basic issue is the trainwreck C11
> > > made of dependent loads.
> > >
> > > Anyway, here's a link to the last time this came up:
> > >
> > > https://lore.kernel.org/linux-arm-kernel/20171116174830.GX3624@xxxxxxxxxxxxxxxxxx/
> >
> > Another good read:
> >
> > https://lore.kernel.org/lkml/20150520005510.GA23559@xxxxxxxxxxxxxxxxxx/

[...]

> Because now the machine can speculate and load now before seq, breaking
> the ordering.

First of all, I agree with the concerns, but not because of LTO.

To set the stage better, and to summarize the fundamental problem again:
we're in the unfortunate situation that no compiler today has a way to
deal _efficiently_ with C11's memory_order_consume
[https://lwn.net/Articles/588300/]. If we did, we could just use that
and be done with it. But, sadly, that doesn't seem possible right now --
compilers just say consume==acquire (a sketch of what consume was meant
to express is appended at the end of this mail). Will suggests doing the
same in the kernel:

  https://lkml.kernel.org/r/20200630173734.14057-19-will@xxxxxxxxxx

What we're most worried about right now is the existence of compiler
transformations that could break data dependencies, e.g. by turning them
into control dependencies (a second sketch at the end of this mail shows
what such a transformation would look like).

If this is a real worry, I don't think LTO is the magical feature that
will uncover those optimizations. If these compiler transformations are
real, they exist in a normal build, too! And if we are worried about
them, we need to stop relying on dependent-load ordering across the
board, or switch to -O0 for everything. Clearly, we don't want either.

Why do we think LTO is special? With LTO, Clang just emits LLVM bitcode
instead of ELF objects, and the linker stage then performs intermodular
optimizations across translation-unit boundaries that are not possible
otherwise [https://llvm.org/docs/LinkTimeOptimization.html]. From the
memory-model side of things, if we could fully convey our intent to the
compiler (the imaginary consume), there would be no problem, because
every optimization stage, from bitcode generation to the final machine
code generation after LTO, would know about the intended semantics.
(Also, keep in mind that LTO is _not_ doing post-link optimization of
machine-code binaries!)

But as far as we can tell, there is no evidence of the dreaded
data-dependency-to-control-dependency conversion with LTO that isn't
also there in non-LTO builds, if it's there at all.

Has the data-to-control dependency conversion been encountered in the
wild? If not, is the resulting reaction an overreaction? If so, we need
to be careful blaming LTO for something it isn't even guilty of.

So, we are probably better off untangling LTO from the story:

1. LTO or no LTO does not matter. The LTO series should not get tangled
   up with memory-model issues.

2. The memory-model question and problems need to be answered and
   addressed separately.

Thoughts?

Thanks,
-- Marco
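
For reference, a minimal sketch of what the "imaginary consume" was
meant to express (gp and reader() are made-up names for illustration).
Ideally, the address dependency alone would order the two loads, which
is free on Arm and Power; in reality, GCC and Clang both promote
consume to acquire, so this compiles to an acquire load (e.g. LDAR on
arm64):

  #include <stdatomic.h>

  struct foo { int x; };
  _Atomic(struct foo *) gp;

  int reader(void)
  {
          /*
           * C11's intent: the dependent load p->x is ordered after the
           * load of gp purely by the address dependency.
           */
          struct foo *p = atomic_load_explicit(&gp, memory_order_consume);
          return p->x;
  }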
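
And a sketch of the feared data-dependency-to-control-dependency
conversion (default_foo, gp, and the simplified READ_ONCE() are made up
for illustration; to repeat, we have no evidence any compiler actually
does this today, LTO or not):

  struct foo { int x; };
  struct foo default_foo;
  struct foo *gp;

  /* Simplified version of the kernel's READ_ONCE(): a volatile load. */
  #define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))

  /* What the kernel writes: the address dependency orders the loads. */
  int reader(void)
  {
          struct foo *p = READ_ONCE(gp);
          return p->x;
  }

  /*
   * If the compiler could prove (or profiling suggested) that gp almost
   * always points at default_foo, value speculation could rewrite
   * reader() into the equivalent of:
   */
  int reader_speculated(void)
  {
          struct foo *p = READ_ONCE(gp);

          if (p == &default_foo)
                  /*
                   * Constant address: no dependency on p, so the CPU may
                   * load default_foo.x before the load of gp completes;
                   * branches do not order loads on Arm/Power.
                   */
                  return default_foo.x;
          return p->x;
  }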