Re: reduce compilation times?

NightStrike wrote:
On Nov 27, 2007 11:43 AM, Tom St Denis <tstdenis@xxxxxxxxxxxxxxxx> wrote:
This is why you should refactor your code so that each unit contains
only one exportable function (or as few as possible).

Just so I understand (and I realize that this would not be done), but
let's say that I have a machine that can compile extraordinarily
quickly, and compile time was not a factor.  Is there a difference in
the speed of the resulting program when everything is split into many
object files instead of being combined into a single main.c, or is the
resulting binary identical bit for bit?

The only time it would matter (legally) is if there were inlining. And really, you should be setting that up yourself with the "inline" keyword (or macros).

suppose you had something like

int myfunc(int x)
{
  return x * x + x * x;
}

and you only called it from main like

int main(void)
{
  int res;
  res = myfunc(0);
  return res;
}

Can the compiler special-case optimize it? Strictly, yes: the compiler could inline "myfunc" and then reduce it. But suppose "myfunc" is larger or more complicated and can't be inlined. If the compiler can determine the result at build time, it is legal to optimize the call out; if it can't, it won't, and it will call the function. So in all but the trivial cases [dead code, etc.], having everything in one unit, especially when your functions aren't static, won't reduce code size or improve speed.
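To illustrate the static case (a minimal sketch, not from the original example): if you mark "myfunc" static, the compiler knows this unit holds the only callers, so it's free to fold the call and drop the function body entirely. You can check with something like gcc -O2 -S and read the assembly.

/* Because myfunc is static, no other unit can call it; at -O2 the
 * compiler can inline the call, compute 0*0 + 0*0 == 0 at build
 * time, and emit no code for myfunc at all.  Without "static" it
 * must still keep an exportable copy even if it inlines the call. */
static int myfunc(int x)
{
  return x * x + x * x;
}

int main(void)
{
  return myfunc(0);  /* folds to "return 0;" under optimization */
}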

What you really should do is profile your code, then create "static inline" or macro copies of heavily used (and not overly large) pieces of code. Even then, inlining doesn't always help.
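For instance (a hypothetical sketch; the function and macro names are made up, standing in for whatever your profiler flags as hot):

#include <stdio.h>

/* "static inline" (C99) gives each unit its own inlinable copy of
 * the hot function without exporting a symbol from the object. */
static inline unsigned square_u32(unsigned x)
{
  return x * x;
}

/* Pre-C99 macro equivalent.  Note (x) is evaluated twice, so
 * SQUARE_U32(i++) is a bug the inline function avoids. */
#define SQUARE_U32(x) ((x) * (x))

int main(void)
{
  printf("%u %u\n", square_u32(7), SQUARE_U32(7));
  return 0;
}
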
Putting everything in one big file has several disadvantages though:

- It increases build time, every time you build it [which could be 1000s of times]
- It makes content control harder, since you have to lock larger portions of the project to work on it
- It makes editing harder, as you have more to scroll/look through
- It decreases [not always, though] the ability to use smart linking, which can increase image size [see the sketch after this list]
- It makes building on smaller machines [with less RAM, slower processors, etc.] harder
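
On the smart-linking point: a classic static linker pulls whole object files out of an archive, so one exportable function per file keeps unused code out of the image. A hypothetical sketch (file and function names made up):

/* sha_init.c -- one exportable function per unit */
void sha_init(void) { /* set up state */ }

/* sha_done.c -- compiled separately into its own object */
void sha_done(void) { /* finish and emit digest */ }

/* A program that calls only sha_init() links just sha_init.o out
 * of libsha.a.  If both functions lived in one sha.c, the single
 * sha.o would drag sha_done() into the image as well. */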

Ideally, though this isn't a hard-and-fast rule, you want to keep each source file under 200-300 lines (excluding tables). It's not a sin to violate that here or there where it makes sense; most of the time, though, it's a good idea to aim for it.

In both of my OSS projects, the average file has one function in it and runs ~150-200 lines. The exceptions are machine-generated code (e.g. unrolled multipliers) and lookup tables for hashes/ciphers.
Tom
