Re: Reducing compilation memory usage

Alejandro Pulver wrote:
On Thu, 17 Jan 2008 14:22:33 -0500
Tony Wetmore <tony.wetmore@xxxxxxxxxxxx> wrote:

Alejandro Pulver wrote:
 > As it fails with -O1 and all -fno-*, I tried parameters like
 > max-pending-list-length=1 without success.
 >
 > Do you know about an option or something that could help in this case?

Alejandro,


Hello.

Thank you for your reply.

I may have missed this earlier, but are the source files causing GCC to crash particularly large? Perhaps the problem could be avoided by splitting the code into more (smaller) files, so that GCC is compiling less of the code at once.


Yes, the source is 4 MB, consisting of one function with a jump table
and ~8000 labels (each with very little code), one per simulated machine
instruction.

GCC doesn't crash; it just outputs something like this and exits:

cc1: out of memory allocating 4072 bytes after a total of 1073158016 bytes

If I understand what you are doing, this code is program-generated, so you would have to modify the generator program to create multiple files. But you might be able to test this by manually splitting one file yourself.

Good luck!


I really don't know if I could split it, since everything is inside the
same function (all ~8000 labels with very short code) and it uses
computed gotos (like the ones a switch statement generates, with a
"jump table" so there is no comparison of the cases). That may not work
across different files unless at least 2 jump tables are used instead
of 1, in a dictionary-like setup (see below).

For example, I've seen a Motorola 68K emulator (included in Generator,
a SEGA Genesis emulator) that generates 16 C files, each containing the
code for instructions starting with 0 to F (in hexadecimal). It uses
switch and case statements for the jump table (not computed gotos),
which is more or less the same in practice. I haven't fully read the
source, but it seems it has 2 jump tables to avoid this problem.

I'll see if they can be split, but I was just surprised when I tried
with GCC 4.x and it actually compiled (without optimizations). So I
thought there might be a way to optimize it by giving GCC specific
information about how to do it (since optimizing each block
independently should work fine).

Yeah, but that's not what gcc does -- we hold an entire function as
trees and a data-flow graph, and then we optimize the whole thing. In
your case the behaviour of gcc is not at all surprising, and the
obvious way to solve your problem is to go out and buy more RAM! Of
course we could make gcc more economical and somewhat reduce memory
usage, but you're asking for something really hard.

Andrew.

