Dear gcc experts,

In my simulation tool I use somebody else's run-time library as a simulation kernel. This run-time library supports threads by allocating a local array called 'stackarea' and then doing all kinds of stack manipulation and long jumps that I don't quite understand. The library has been in use for a long time, it is considered stable, and any bugs are supposed to be my own fault. The relevant code is:

    /* Initialise the array, to fool compilers into thinking it is
     * being used, so they don't optimize it away.
     * BTW don't write to the entire array; the copy-on-write mechanism
     * makes that very slow. (!)
     */
    stackarea[0] = 0;
    // for(i=1; i<STACKSIZE-1; i++) stackarea[i]=stackarea[i+1];

The system works perfectly fine when compiled without any optimization. Compiling the above file with -O causes a segmentation fault very early in every execution. Enabling the commented-out for loop seems to solve the problem.

Now, here are my questions:

- I have the feeling that the statement stackarea[0]=0 may have fooled the gcc 3 series, but that my gcc 4.0.1 DOES optimize this strange array away. Is that correct?

- How can I easily check whether a variable has been optimized away, and how can I avoid it? (E.g. should I write stackarea[random(0,N)] = 0, or is there a neater way?)

Thanks in advance,
Jonne.

PS: I just noticed that stackarea[0] = stackarea[STACKSIZE-1] = 0 also does not "fool" my compiler (i.e. it still results in a segmentation fault).
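
PPS: To make the second question more concrete, here is a minimal sketch of the kind of workaround I am considering instead of the dummy stores. The empty inline asm is only my guess at a way of telling gcc that the array is genuinely used; the STACKSIZE value and the wrapper function name below are placeholders, not the library's real code, and I have not verified that this is the recommended approach:

    #define STACKSIZE 65536           /* placeholder; the real size comes
                                         from the library's headers */

    void setup_thread_stack(void)     /* hypothetical wrapper, for illustration */
    {
        char stackarea[STACKSIZE];

        /* Empty asm statement that takes the array's address as an input
         * operand and clobbers memory, so the optimizer has to assume the
         * array may be read or written and cannot remove it. */
        __asm__ __volatile__("" : : "r" (stackarea) : "memory");

        /* ... hand stackarea over to the simulation kernel here ... */
    }

If that is the wrong way to go about it, I would be happy to hear what the neat way is.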