On 11/04/2011 23:42, Ian Lance Taylor wrote:
> The definition of "memory barrier" is ambiguous when looking at code
> written in a high-level language.
> The statement "asm volatile ("" : : : "memory");" is a compiler
> scheduling barrier for all expressions that load from or store values to
> memory. That means something like a pointer dereference, an array
> index, or an access to a volatile variable. It may or may not include a
> reference to a local variable, as a local variable need not be in
> memory.
Is there a precise specification of what counts as "memory" here?
As gcc gets steadily smarter, it gets harder to be sure that
order-specific code really is correctly ordered, while letting the
compiler do its magic on the rest of the code. I work with embedded
systems - it's not uncommon to have to deal with things like
interrupt-disabled sections which should be as short and fast as
possible, but no shorter!
For example, if you have code like this:
    static int x;

    void test(void) {
        x = 1;
        asm volatile ("" : : : "memory");
        x = 2;
    }
The variable "x" is not volatile - can the compiler remove the
assignment "x = 1"? Perhaps with aggressive optimisation, the compiler
will figure out how and when x is used, and discover that it doesn't
need to store it in memory at all, but can keep it in a register
(perhaps all uses have ended up inlined inside the same function). Then
"x" is no longer in memory - will it still be affected by the memory
clobber?
Also, is there any way to specify a more limited clobber than just
"memory", so that the compiler has as much freedom as possible? Typical
examples are to specify "clobbers" for just certain variables, leaving
others unaffected, or to distinguish between reads and writes. For
example, you might want to say "all writes should be completed by this
point, but data read into registers will stay valid".
Some of this can be done with volatile accesses in different ways, but
not always optimally, and not always clearly.
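For instance, a single object can be accessed through a volatile-qualified lvalue, in the style of the Linux kernel's ACCESS_ONCE macro (STORE_ONCE below is a hypothetical name for this sketch). Only the accesses that go through the cast are ordered and non-elidable; everything else is left alone:

```c
/* Hypothetical macro: perform the store through a volatile lvalue,
 * so the compiler may neither elide it nor reorder it relative to
 * other volatile accesses.  Ordinary accesses are unaffected. */
#define STORE_ONCE(obj, val) (*(volatile int *)&(obj) = (val))

static int flag;

void publish(void)
{
    STORE_ONCE(flag, 1);   /* must be emitted, in this order... */
    STORE_ONCE(flag, 2);   /* ...even though 1 is overwritten   */
}
```

The drawback is exactly the one noted above: every such access is pessimised individually, whereas a barrier pessimises one point in the code.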