> -----Original Message-----
> From: gcc-owner On Behalf Of Dave Korn
> Sent: 17 November 2004 11:58

  Just to enlarge slightly on my own response:

> > -----Original Message-----
> > From: Luca Benini
> > Sent: 17 November 2004 11:41

> > > Robert Dewar wrote:
> > > > not!

> > > Now I see the light.

> > > But in this case the asm produced are not the same.

>   If the compiler had to produce the exact same asm at -O0 and -O2, how
> on earth could one be optimised more than the other?

  More than that: we all agree that the compiler is entitled to assume
that the input code is valid, and that if those assumptions are wrong,
the output code the compiler produces will be wrong too.  Now, if you add
increasing amounts of optimisation to the compilation (as when you go
from -O0 to -O2), each of those extra optimisations makes further
assumptions about the validity of the code.  So when the code is invalid,
more assumptions are violated at -O2 than at -O0, and it is only to be
expected that the output code will be invalid in more and different ways.

  It might perhaps be possible to rewrite the optimisers so that the code
they generated always failed in the same way when the compiler was fed
bad input, but that would 1) cost a large number of man-hours of work,
and 2) mean not allowing the optimisers to make (as many of) those
assumptions, which would in turn lead to 3) far fewer identifiable
opportunities for optimisation when the compiler was fed good code.  So
it would comprehensively not be worth the effort to make gcc generate
code that produced the same results when fed this bad code.

  However, as I mentioned, the behaviour of unsigned integers in overflow
conditions IS well defined, so if the compiler produced code that behaved
differently at -O0 and -O2 when you took that example code and changed
"int" to "unsigned int" everywhere, that *would* be a genuine compiler
bug, and one that would need fixing.

    cheers,
      DaveK
--
Can't think of a witty .sigline today....
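
  For concreteness, here is a minimal sketch of the distinction described
above (not code from the original thread; the function names are made up,
and exactly how the signed test gets folded depends on the gcc version and
flags).  Because signed overflow is undefined, the optimisers may assume
that "a + 1 < a" can never be true for a signed a and fold the test to 0
at -O2, whereas unsigned arithmetic is defined to wrap modulo 2^N, so the
unsigned version must give the same answer at every optimisation level.

    /* Illustrative sketch only -- not code from the original thread. */
    #include <limits.h>
    #include <stdio.h>

    /* Undefined behaviour when a == INT_MAX: the optimisers may
       legitimately fold this test to 0, so -O0 and -O2 can disagree. */
    static int overflows_signed(int a)
    {
        return a + 1 < a;
    }

    /* Well defined: UINT_MAX + 1 wraps to 0, so this must return 1
       for UINT_MAX at every optimisation level.                      */
    static int overflows_unsigned(unsigned a)
    {
        return a + 1 < a;
    }

    int main(void)
    {
        printf("signed:   %d\n", overflows_signed(INT_MAX));
        printf("unsigned: %d\n", overflows_unsigned(UINT_MAX));
        return 0;
    }

  Compiling this once at -O0 and once at -O2 and comparing the output
shows the asymmetry: the signed result may change between the two builds,
but the unsigned result may not.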