Dear list,

I am using GCC to compile some modelling software (C++; http://code.google.com/p/openmalaria/). The code uses quite a lot of "double"s, and its results can be quite sensitive to the exact outcomes of floating-point operations (due to stochastic influences; this is only really a concern because we want to be able to reproduce results independently of the hardware/OS used).

Anyway, on 32-bit Linux (Ubuntu 12.04 with the default GCC 4.6.3), some of my test cases generate different results when optimisation is enabled. (This doesn't happen on 64-bit Linux with identical Ubuntu/GCC versions, nor in our Windows or Mac builds.) On 32-bit Linux the results are as expected with no optimisation but differ once -O1 (or -O2/-O3) is used. _However_, enabling all of the same optimisation flags individually (either as listed in the man page or as reported by `gcc -O -Q --help=optimizers`), without passing -O itself, still produces different results.

Is this a known issue? Is it the same with later GCC versions? Is there a way I can turn on most optimisations while still getting exactly the same results as without optimisation?

Cheers,
Diggory Hardy