Good afternoon,

I have spent a while debugging an issue in an operating system I contribute to, and it comes down to the behaviour of optimized code. We have code which is compiled either for the native target or for a "hosted" environment (e.g. under Linux), and it does the following:

    void foo(struct mystruct1 *bar)
    {
        int somevar = 0;    /* variable is initialized */
        int anothervar = 0;

    #if (some clause to check if it is not hosted)
        asm volatile(
            <some asm block which sets "somevar" if a feature is
             enabled, otherwise leaves it in its initialized state>
            : "=m" (somevar)
        );
    #endif

        if (somevar) {
            /* use the feature */
            bar->param = 20;
            anothervar = 1;
        }
        if (!anothervar) {
            bar->param = 10;
        }
    }

Without optimization the code works as intended in both the native and hosted cases, but once optimization is enabled the following appears to happen:

(1) GCC concludes the asm always overwrites somevar, so it optimizes the initialization away, leaving the variable uninitialized.
(2) It does not now warn that the code is using an uninitialized variable, but silently compiles it as if all were well.
(3) At runtime somevar has random contents, so "if (somevar)" runs the use-the-feature case when it shouldn't.

Now, I can work around this by using "+" (even though the asm does not read the variable), or by initializing the value a second time inside the asm block, but should I have to? This sounds like the compiler is doing the wrong thing based on the assumption that "=" means the asm will definitely change the variable, and worse, after optimizing the code into a state where an uninitialized variable is now being used, it does not warn me that this is happening. Would it not be better to interpret "=" to mean the asm "may" change the value, which would prevent it incorrectly optimizing the initialization away, or failing that, to warn that the value may be used uninitialized?

Yours,
Nick Andrews.