Authoritative answer wanted: "-g -O1" vs. "-O1"
- To: gcc-help@xxxxxxxxxxx
- Subject: Authoritative answer wanted: "-g -O1" vs. "-O1"
- From: Gene Smith <gds@xxxxxxxxxxxxx>
- Date: Tue, 18 Jun 2013 01:21:28 -0400
- User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130110 Thunderbird/17.0.2
If I compile my embedded program with options "-g -O1" I obtain an elf
file with debug information. I can objcopy the elf file to a binary or
hex file that can be loaded to flash, effectively stripping out the debug
information and leaving only the optimized code in ROM.
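For reference, this is roughly the workflow I mean (the arm-none-eabi-
prefix and the file names are placeholders for my actual toolchain and
project):

    # build with debug info and optimization
    arm-none-eabi-gcc -g -O1 -o firmware.elf main.c
    # strip the ELF down to a raw binary or Intel hex image for flashing
    arm-none-eabi-objcopy -O binary firmware.elf firmware.bin
    arm-none-eabi-objcopy -O ihex firmware.elf firmware.hex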
But if I rebuild with the same options except that I omit -g, obviously
I will have no symbols in the elf file, making debugging impossible or
at least more difficult. However, when I objcopy this elf to a binary or
hex file, the result differs somewhat from the binary or hex produced
with -g present. At least with 4.7.3 the main difference, as seen with
objdump, is in the prologue of certain functions, with only a few bytes
of difference in total code length on a fairly large embedded
application (arm). So it appears that -g has some effect on the actual
code produced.
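This is roughly how I compared the two builds (again, the toolchain
prefix and file names are placeholders for my actual setup):

    # build the same source twice, with and without -g
    arm-none-eabi-gcc -g -O1 -o with_g.elf main.c
    arm-none-eabi-gcc -O1 -o without_g.elf main.c
    # objcopy both to raw binaries and compare byte-for-byte
    arm-none-eabi-objcopy -O binary with_g.elf with_g.bin
    arm-none-eabi-objcopy -O binary without_g.elf without_g.bin
    cmp -l with_g.bin without_g.bin
    # disassemble both and diff to see where the code differs
    arm-none-eabi-objdump -d with_g.elf > with_g.dis
    arm-none-eabi-objdump -d without_g.elf > without_g.dis
    diff with_g.dis without_g.dis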
Is this difference expected? Should -g cause changes in the actual code
generated, and not just add debug symbols to the elf? Is it possibly
related to the optimization level? I have not checked whether the
results differ at levels higher or lower than -O1.
I have seen several opinions regarding this but no authoritative answer.
The gcc manual also does not really answer this.
Thanks,
-gene