Re: Removing unused code/variables

Hi,

On Fri, 2021-04-16 at 13:47 +0000, David Sherman wrote:
> We are using NXP's MCUXpresso to target ARM, and it uses gcc.  We have
> a large project with several different configurations.  The underlying
> library code gets compiled into libs and linked into the main
> application, and the main application has all the switches to enable or
> disable features.  Despite using -Os, many chunks of code are still
> present, as are static variables, even though they never get used.  We
> have tried turning on link time optimization, but it appears to have no
> effect.  Some static variables are optimized away, but many remain.  We
> have even tried making them part of their own section and using
> different linker directives that don't have that section specified, but
> they get linked in anyway.

I don’t know anything about MCUXpresso specifically, but let me expand
a bit on David Brown’s general remarks.

First of all, it’s not the job of -Os to remove unused code.  Unless
the source explicitly checks for the __OPTIMIZE_SIZE__ macro (rare, but
not unheard of), the compiler will emit more or less the same set of
functions and variables at any optimization level.  More optimization
might enable it to eliminate more code (e.g. if a C function declared
'static inline' ends up inlined everywhere), but the dependency
analysis is very conservative, and AFAIK setting -Os does not change
anything about it.  I don’t actually know if GCC is capable of
dropping unused static variables at all, by the way.  (This requires,
for example, proving that a pointer to the variable is never taken.)

Second, the compiler does not reason across compilation units, so it
can’t ever drop anything that’s not static or automatic.  The tool that
reasons across compilation units is the linker, and without LTO, the
linker thinks not in “objects” (functions and variables) but in
sections, and the GNU linker uses the linker script to decide what to
do with the sections.

The default behaviour is that every input section will end up
_somewhere_ in the output unless explicitly suppressed by /DISCARD/ in
the linker script.  If you want the GNU linker to do dependency
analysis on your sections, pass --gc-sections to it (thus
-Wl,--gc-sections when you are invoking it via GCC).  Then you can
either choose what goes into which section explicitly, by marking up
your code with __attribute__((section(...))) or [[gnu::section(...)]]
(and making sure that the linker script puts those sections where you
want them), or you can tell GCC to put every object in its own
section, by passing it -ffunction-sections -fdata-sections (and thus
_making_ the linker think in objects and not in generic sections like
.text, .data, etc.).

The latter option _will_ increase your linking times, though whether
you’ll notice depends on the size of the codebase and on whether
you’re using C++.  More importantly, --gc-sections is indiscriminate
and will drop
_everything_ it thinks is not referenced (except as suppressed by
KEEP(...) in the linker script).  There _can_ be fragile bits of
linker-relying magic that are broken by this, including things deep in
the guts of your embedded toolchain.  For example, if there is an
__attribute__((constructor)) function that prepares a device and is
usually pulled in because it’s in the same source file as other
functions using that device, but is otherwise unreferenced, it _may or
may not_ get dropped and cause arbitrary levels of breakage.
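This is what KEEP() is for.  A Cortex-M-style linker script typically
wraps exactly those fragile sections so --gc-sections leaves them
alone; the section and memory-region names below are the usual
conventions, not necessarily what your toolchain uses:

```
SECTIONS
{
  .text :
  {
    KEEP(*(.isr_vector))    /* vector table: referenced by hardware,
                               not by any symbol the linker can see */
    *(.text*)               /* ordinary code: fair game for GC */
    KEEP(*(.init_array*))   /* constructor pointers walked at startup */
  } > FLASH
}
```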

The only _general_ fix for this problem is to somehow explain to the
toolchain what is going on in more detail:  you wouldn’t want to pull
in an initialization function for a device you’re _not_ using, would
you?  In principle LTO could infer some of the needed information from
your source;  I don’t rightly know why it doesn’t eliminate dead code
in your case or if it’s even supposed to.

Anyway, the traditional and time-tested way to explain things to your
toolchain is to forget about --gc-sections and the rest of that fancy
stuff and put the optional pieces in one or more static libraries, each
piece in its own object file (thus usually in its own source file as
well);  this is why every statically-linkable libc (which Glibc is
_not_ but Newlib is) is a heap of twisty little source files, from
abort.c to wscanf.c.  The linker _does_ do dependency analysis on
static libraries.

Specifically, the linker links in all things from left to right as long
as they are objects, but each time it encounters a static library, it
pulls in only those object files from it that can resolve one of the
currently unresolved symbols (and all the dependencies of those object
files inside that static library, recursively).  Note that it never
backtracks---a library has its chance to resolve undefined symbols
once, then it’s gone;  that’s why you put your LDFLAGS at the beginning
but your LDLIBS at the end.  Every traditional linker does this, even
Microsoft’s LINK.EXE (but _not_ LLVM’s lld, which does its own thing
that is ostensibly more user-friendly even if it’s more difficult to
describe).  GNU ld can additionally be told to do several passes over a
part of its command line until the link stabilizes by enclosing it with
-( ... -) aka --start-group ... --end-group, but I haven’t actually
seen a problem that required that capability.
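Assuming gcc and binutils on PATH (and with file names invented for
the demo), the left-to-right rule is easy to see for yourself:

```shell
# A library member is pulled in only if, at the moment the linker
# scans the library, it resolves a currently undefined symbol.
cat > dep.c  <<'EOF'
int helper(void) { return 42; }
EOF
cat > main.c <<'EOF'
int helper(void);
int main(void) { return helper() == 42 ? 0 : 1; }
EOF
gcc -c dep.c main.c
ar rcs libdep.a dep.o

# Library *after* the object that needs it: helper is pulled in.
gcc -o ok main.o -L. -ldep && echo "link ok"

# Library *before* the object: nothing is undefined yet when the
# archive is scanned, so no member is taken and the link fails.
gcc -o bad -L. -ldep main.o 2>/dev/null || echo "undefined reference"
```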

So if you are writing C and have the freedom to reorganize your
codebase, put the library code in static libraries and just think of a
source file as a minimal unit of linking.  (Writing build scripts to
split large source files into small ones according to user-provided
directives, in case larger sources are more natural, is boring and left
as an exercise for the reader.)  If you are writing C++ ... I don’t
know, you can try, but any C++ toolchain needs to generate and link
loads and loads of things that don’t exactly belong to any object file
(inline functions, template instantiations, etc.), and I’ve no idea how
well this traditional approach will work.  If you can’t reorganize your
codebase, try --gc-sections and friends, as above, and check that your
IVTs, lists of initialization routines, etc. are getting constructed
correctly;  if they aren’t, massage your linker script until the link
works or you’re convinced reorganization is easier after all.

-- 
Cheers,
Alex
