On Fri, Jun 04, 2021 at 02:27:49PM -0700, Linus Torvalds wrote:
> On Fri, Jun 4, 2021 at 1:56 PM Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
> >
> > The usual way to prevent it is to use WRITE_ONCE().
>
> The very *documentation example* for "volatile_if()" uses that WRITE_ONCE().

Whew!  ;-)

> IOW, the patch that started this discussion has this comment in it:
>
> +/**
> + * volatile_if() - Provide a control-dependency
> + *
> + * volatile_if(READ_ONCE(A))
> + *         WRITE_ONCE(B, 1);
> + *
> + * will ensure that the STORE to B happens after the LOAD of A.
>
> and my point is that I don't see *ANY THEORETICALLY POSSIBLE* way that
> that "volatile_if()" could not be just a perfectly regular "if ()".
>
> Can you?

I cannot, maybe due to failure of imagination.  But please see below.

> Because we *literally* depend on the fundamental concept of causality
> to make the hardware not re-order those operations.
>
> That is the WHOLE AND ONLY point of this whole construct: we're
> avoiding a possibly expensive hardware barrier operation, because we
> know we have a more fundamental barrier that is INHERENT TO THE
> OPERATION.
>
> And I cannot for the life of me see how a compiler can break that
> fundamental concept of causality either.
>
> Seriously. Tell me how a compiler could _possibly_ turn that into
> something that breaks the fundamental causal relationship. The same
> fundamental causal relationship that is the whole and only reason we
> don't need a memory barrier for the hardware.
>
> And no, there is not a way in hell that the above can be written with
> some kind of semantically visible speculative store without the
> compiler being a total pile of garbage that wouldn't be usable for
> compiling a kernel with.
>
> If your argument is that the compiler can magically insert speculative
> stores that can then be overwritten later, then MY argument is that
> such a compiler could do that for *ANYTHING*. "volatile_if()" wouldn't
> save us.
>
> If that's valid compiler behavior in your opinion, then we have
> exactly two options:
>
>  (a) give up
>
>  (b) not use that broken garbage of a compiler.
>
> So I can certainly accept the patch with the simpler implementation of
> "volatile_if()", but dammit, I want to see an actual real example
> arguing for why it would be relevant and why the compiler would need
> our help.
>
> Because the EXACT VERY EXAMPLE that was in the patch as-is sure as
> hell is no such thing.
>
> If the intent is to *document* that "this conditional is part of a
> load-conditional-store memory ordering pattern, then that is one
> thing. But if that's the intent, then we might as well just write it
> as
>
>    #define volatile_if(x) if (x)
>
> and add a *comment* about why this kind of sequence doesn't need a
> memory barrier.
>
> I'd much rather have that kind of documentation, than have barriers
> that are magical for theoretical compiler issues that aren't real, and
> don't have any grounding in reality.
>
> Without a real and valid example of how this could matter, this is
> just voodoo programming.
>
> We don't actually need to walk three times widdershins around the
> computer before compiling the kernel.  That's not how kernel
> development works.
>
> And we don't need to add a "volatile_if()" with magical barriers that
> have no possibility of having real semantic meaning.
>
> So I want to know what the semantic meaning of volatile_if() would be,
> and why it fixes anything that a plain "if()" wouldn't.
> I want to see the sequence where that "volatile_if()" actually *fixes*
> something.

Here is one use case:

	volatile_if(READ_ONCE(A)) {
		WRITE_ONCE(B, 1);
		do_something();
	} else {
		WRITE_ONCE(B, 1);
		do_something_else();
	}

With plain "if", the compiler is within its rights to do this:

	tmp = READ_ONCE(A);
	WRITE_ONCE(B, 1);
	if (tmp)
		do_something();
	else
		do_something_else();

On x86, still no problem.  But weaker hardware could now reorder the
store to B before the load from A.  With volatile_if(), this reordering
would be prevented.

							Thanx, Paul
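
For comparison, here is a sketch of how the same ordering is commonly
obtained today without volatile_if(), along the lines of the
control-dependencies discussion in Documentation/memory-barriers.txt:
issue the store with smp_store_release() in each leg, so that the
ordering of the store to B after the load from A no longer depends on
the conditional branch surviving compilation (at the cost of a heavier
barrier on some architectures).  This is only an illustration of the
alternative, not part of the proposed patch:

	q = READ_ONCE(A);
	if (q) {
		/* Release semantics order this store after the load from A. */
		smp_store_release(&B, 1);
		do_something();
	} else {
		smp_store_release(&B, 1);
		do_something_else();
	}

The release store stays ordered after the READ_ONCE() even if the
compiler hoists it out of both legs, which is exactly the transformation
that defeats the plain-"if" control dependency above.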