Why are there optimization levels (Noob question alert)

Greetings,

I would like to ask why optimizations are manually controllable in such fine-grained steps. The way I see it, debug builds deliberately include a lot of error-checking code that slows the program down and bloats its size. This is undesirable in the 'release' version, which should, in most cases, run as fast as possible. So why are there degrees of optimization? When would it be undesirable to fully optimize your code?
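
For concreteness, the split I have in mind looks roughly like this (the flag choices below are only the usual conventions, not something gcc demands):

    gcc -O0 -g -o app_debug   app.c    # debug build: no optimization, debug info kept
    gcc -O2    -o app_release app.c    # the customary release level
    gcc -O3    -o app_fast    app.c    # still more aggressive optimization
    gcc -Os    -o app_small   app.c    # optimize for size rather than speed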

* Some optimizations are undesirable because they only work, or only work well, on a subset of the target architectures.
* Optimizations may increase code size, which is undesirable in an embedded environment with very limited memory (see the sketch after this list).
* Developers of gcc and hobbyists may be interested in researching the value of individual optimizations.
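
To make the code-size point concrete, here is a small sketch (the file name and function are invented for illustration; the exact size difference depends on the gcc version and target):

    /* size_demo.c -- compare the code generated at -Os and -O3:
     *
     *   gcc -Os -c size_demo.c && size size_demo.o
     *   gcc -O3 -c size_demo.c && size size_demo.o
     *
     * At -O3 gcc will usually unroll and vectorize the loop below,
     * emitting considerably more machine code than the compact -Os
     * version, even though both are "optimized" builds.
     */
    #include <stddef.h>

    void scale(float *dst, const float *src, float factor, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * factor;
    }

The architecture point is similar: adding something like -march=native on top of -O3 lets gcc use instructions (SSE/AVX variants, for example) that only some of the machines the binary is later deployed on may support.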

Are there any others?

Richard



