Hi folks,

The behavior of a numerical code I'm working on varies drastically depending on whether I compile it with the -g or the -O flag. The code is much more stable under -g, and I'm wondering whether -O is exposing a bug that I need to fix. Are there any gcc flags I should try that might help me track down the problem? (I've already tried the obvious -Wall, which gives no warnings.)

In case it helps, I'm seeing similar behavior with both gcc 4.1.0 (SUSE Linux) and gcc 4.0.1 (Mac PPC).

Thanks!
--Michael