On 13 December 2010 02:19, <richardcavell@xxxxxxxx> wrote:
> Greetings,
>
> I would like to ask why it is that optimizations are manually
> controllable to such a degree. The way I see it: Debug builds
> deliberately include a lot of error-checking code that slows the code
> down and bloats its size.

I don't think this is true. Often there are no additional checks, or
there may be nothing more than a few assert() checks, which don't have
a big impact. Many people leave assertions enabled in release builds
anyway.

> This is undesirable in the 'release' version. It stands to reason
> that the release version should run as fast as possible in most
> cases. So how come there are degrees of optimization? When would it
> be undesirable to fully optimize your code?
>
> * Some optimizations are undesirable because they only work, or only
>   work well, with a subset of the target architecture.
> * Optimizations may increase code size, which is undesirable in an
>   embedded environment with very limited memory space.
> * Developers of gcc and hobbyists may be interested in researching
>   the value of individual optimizations.
>
> Are there any others?

Different optimisations benefit different code in different ways.
Sometimes there is hardly any benefit to using -O3 over -O2, or even
-O1, but applying all the optimisations makes compilation slower.

I think "debug" vs "release" is often a false dichotomy. I want to be
able to debug core files from "release builds", so I include debug
symbols and might not use the most aggressive optimisations, because
they make it harder to debug. Does that make it a debug build even
though it's how I build a release?

There's a whole spectrum of use cases between what you call debug and
release builds, so being able to control the compiler's optimisations
lets users choose where they want to be in that spectrum.
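
To make the assert() point concrete, here's a minimal sketch
(safe_div is just a made-up illustration, not anything from gcc
itself):

    #include <assert.h>
    #include <stdio.h>

    /* Guard a division with assert().  A normal build keeps the
     * runtime check; compiling with -DNDEBUG removes it entirely. */
    static int safe_div(int num, int den)
    {
        assert(den != 0);   /* compiled away under -DNDEBUG */
        return num / den;
    }

    int main(void)
    {
        printf("%d\n", safe_div(10, 2));
        return 0;
    }

Whether that check survives into the binary is decided by -DNDEBUG,
not by the -O level, and -g can likewise be combined with any -O
level (e.g. "gcc -O2 -g" for an optimised build that can still be
debugged from a core file, or "gcc -O0 -g" when debuggability matters
most). That independence is exactly why the fine-grained control is
useful.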