Authoritative answer wanted: "-g -O1" vs. "-O1"

If I compile my embedded program with the options "-g -O1", I get an ELF file with debug information. I can objcopy that ELF to a binary or hex file for loading into flash, effectively stripping out the debug information and leaving only the optimized code in ROM.
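For reference, the build and objcopy steps look roughly like this (the toolchain prefix and file names here are just placeholders for my actual project):

    arm-none-eabi-gcc -g -O1 -o app.elf main.c startup.c
    arm-none-eabi-objcopy -O binary app.elf app.bin
    arm-none-eabi-objcopy -O ihex   app.elf app.hex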

But if I rebuild with the same options except omitting -g, I obviously have no symbols in the ELF file, making debugging impossible or at least more difficult. However, when I objcopy this ELF to a binary or hex file, the result differs somewhat from the binary or hex produced with -g present. At least with GCC 4.7.3, the main difference, as seen with objdump, is in the prologue of certain functions, with only a few bytes of difference in total code length on a fairly large embedded ARM application. So it appears -g has some effect on the actual code produced.
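To compare the two builds I did something along these lines (again, the file names are just placeholders):

    # compare the flash images byte for byte
    cmp with_g.bin without_g.bin

    # compare the disassembly of the two ELF files
    arm-none-eabi-objdump -d with_g.elf    > with_g.dis
    arm-none-eabi-objdump -d without_g.elf > without_g.dis
    diff with_g.dis without_g.dis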

Is this difference expected? Should -g change the actual code generated, rather than just adding debug symbols to the ELF? Could it be related to the optimization level? I have not checked whether the results differ at levels higher or lower than -O1.

I have seen several opinions on this but no authoritative answer. The GCC manual does not really answer it either.


Thanks,
-gene