RE: memory consumption

The administrator also had to change something.
The system had 300+ MB of swap and 14 GB of RAM, but no "overcommit" or some such, so I guess it could only allocate up to the swap size. I guess that's the thing: an mmap against no file allocates out of paging space, regardless of available physical RAM? Depending on OS and configuration?
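
For illustration only (this is not gcc code, and the size is made up): a minimal sketch of the kind of allocation in question, an anonymous mmap on a typical Unix-like system. Whether it succeeds for sizes beyond swap/paging space depends on the overcommit policy in effect.

/* Anonymous mmap: memory not backed by a file, so it is charged against
   swap/paging space (plus RAM), subject to the OS overcommit policy. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = (size_t)512 * 1024 * 1024;   /* 512 MB, purely illustrative */

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        /* With strict accounting (no overcommit), this can fail even when
           plenty of physical RAM is free. */
        perror("mmap");
        return 1;
    }
    memset(p, 0, size);  /* touch the pages so they are actually committed */
    munmap(p, size);
    return 0;
}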
 
 
I finally noticed
 
http://gcc.gnu.org/install/specific.html#alpha-dec-osf51
 =>  http://gcc.gnu.org/ml/gcc/2002-08/msg00822.html
 
 
and I had/have experimented some with that, with no luck.
Given the system configuration change, I'm not sure I can go back.
 
 
I tried --with-gc=zone (the mail says --with-gc=simple, but that no longer works).
--with-gc=zone segfaulted at some point.
 
 
I'll see if that occurs under Linux.
 (i.e. since OSF doesn't matter much)
 
I wonder whether --with-gc=zone gets much testing?
  And whether any systems still lack mmap now? (or a suitable simulation on Windows)
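
For what it's worth, a hypothetical sketch (not gcc's actual code, and alloc_pages is a made-up name) of the kind of anonymous-memory shim a collector could use, with VirtualAlloc as the Windows "simulation":

#include <stddef.h>
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
/* Hypothetical helper: Windows stand-in for an anonymous mmap. */
static void *alloc_pages(size_t size)
{
    return VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
}
#else
#include <sys/mman.h>
/* Hypothetical helper: anonymous mapping on Unix-like systems. */
static void *alloc_pages(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}
#endif

int main(void)
{
    void *p = alloc_pages((size_t)1 << 20);   /* 1 MB block */
    printf("1 MB page block: %s\n", p ? "ok" : "failed");
    return 0;
}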
 
 
 - Jay


----------------------------------------
> From: jay.krell@xxxxxxxxxxx
> To: iant@xxxxxxxxxx
> CC: gcc-help@xxxxxxxxxxx
> Subject: RE: memory consumption
> Date: Sun, 6 Jun 2010 16:26:28 +0000
>
>
> short story: never mind, sorry, ulimit!
>
>
> long story:
>
> -O1 doesn't help.
> 41K lines, no optimization, no debugging, file-at-a-time compilation, function-at-a-time codegen; 64 MB still seems excessive?
>
>
> gcc cross compilation is nice, but it is hampered by having to get a "sysroot".
>
>
>
> But anyway I finally read the ulimit and gcc manuals...since address space should be essentially unlimited, I thought this all odd.
>
>
> bash-4.1$ ulimit -a
> core file size (blocks, -c) unlimited
> data seg size (kbytes, -d) 131072
> file size (blocks, -f) unlimited
> max memory size (kbytes, -m) 12338744
> open files (-n) 4096
> pipe size (512 bytes, -p) 8
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 64
> virtual memory (kbytes, -v) 4194304
>
>
> bash-4.1$ ulimit -d 1000000
> bash-4.1$ ulimit -a
> core file size (blocks, -c) unlimited
> data seg size (kbytes, -d) 1000000
> file size (blocks, -f) unlimited
> max memory size (kbytes, -m) 12338744
> open files (-n) 4096
> pipe size (512 bytes, -p) 8
> stack size (kbytes, -s) 8192
> cpu time (seconds, -t) unlimited
> max user processes (-u) 64
> virtual memory (kbytes, -v) 4194304
>
>
> and http://gcc.gnu.org/install/specific.html
>
>
> "Depending on
> the OS version used, you need a data segment size between 512 MB and
> 1 GB, so simply use ulimit -Sd unlimited.
>
> "
>
> Oops!
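>
> For reference, a minimal sketch (illustrative, not part of gcc) that reads
> the same limit programmatically; RLIMIT_DATA is the "data seg size" that
> ulimit -d reports, and brk/sbrk-based allocation fails once the data
> segment would exceed it:
>
> #include <stdio.h>
> #include <sys/resource.h>
>
> int main(void)
> {
>     struct rlimit rl;
>     if (getrlimit(RLIMIT_DATA, &rl) != 0) {
>         perror("getrlimit");
>         return 1;
>     }
>     /* rlim_cur is the soft limit (what ulimit -Sd adjusts),
>        rlim_max the hard limit; RLIM_INFINITY means unlimited. */
>     printf("soft: %llu  hard: %llu\n",
>            (unsigned long long)rl.rlim_cur,
>            (unsigned long long)rl.rlim_max);
>     return 0;
> }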
>
>
> I had run into this on AIX, so I should have known better.
> Besides that, that page documents a lot of nonobvious useful stuff (e.g. I've also bootstrapped on HP-UX, going via K&R 3.x).
> (On the other hand, at least a few years ago there were also some not-huge files in libjava that used excessive stack. Excessive stack seems worse than excessive heap.)
>
>
> Sorry, sorry, mostly never mind, move along...
> (It still seems excessive, but I also don't see the point in such OS limits: give me all the address space and let me thrash if the working set is high. gcc should be more concerned with working set than address space, and I have no data on working set here or there.)
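>
> Illustrative only, not gcc code: one way to get a rough working-set number
> (peak resident set size) for a process, as opposed to address space. The
> units of ru_maxrss vary by OS; on Linux they are kilobytes.
>
> #include <stdio.h>
> #include <sys/resource.h>
>
> int main(void)
> {
>     struct rusage ru;
>     if (getrusage(RUSAGE_SELF, &ru) != 0) {
>         perror("getrusage");
>         return 1;
>     }
>     /* Peak resident set size of this process so far. */
>     printf("peak RSS: %ld\n", (long)ru.ru_maxrss);
>     return 0;
> }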
>
>
> - Jay
>
> ----------------------------------------
>> To: jay.krell@xxxxxxxxxxx
>> CC: gcc-help@xxxxxxxxxxx
>> Subject: Re: memory consumption
>> From: iant@xxxxxxxxxx
>> Date: Sat, 5 Jun 2010 22:53:05 -0700
>>
>> Jay K writes:
>>
>>> I hit similar problems building gcc in virtual machines that I think had 256 MB. I increased them to 384 MB.
>>>
>>>
>>> Maybe gcc should monitor its maximum memory? And add a switch
>>> -Werror-max-memory=64MB, and use that when compiling itself, at
>>> least in bootstrap with optimizations and possibly debugging
>>> disabled? Or somesuch?
>>
>> A --param setting the amount of memory required is a good idea for
>> testing purposes. However, frankly, it would be very unlikely that we
>> would set it to a number as low as 64MB. New computers these days
>> routinely ship with 1G RAM. Naturally gcc should continue to run on
>> old computers, but gcc is always going to require virtual memory, and
>> on a virtual memory system I really don't think 512MB or 1G of virtual
>> memory is unreasonable these days.
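>>
>> As a purely hypothetical illustration (none of this is an existing gcc
>> feature or switch): a memory cap for testing can also be imposed from
>> outside the compiler by running it under a fixed address-space limit. A
>> sketch of that idea:
>>
>> #include <stdio.h>
>> #include <sys/resource.h>
>> #include <sys/wait.h>
>> #include <unistd.h>
>>
>> int main(int argc, char **argv)
>> {
>>     if (argc < 2) {
>>         fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
>>         return 2;
>>     }
>>     pid_t pid = fork();
>>     if (pid == 0) {
>>         /* Cap the child's virtual address space at 512 MB (arbitrary). */
>>         struct rlimit rl;
>>         rl.rlim_cur = rl.rlim_max = 512UL * 1024 * 1024;
>>         if (setrlimit(RLIMIT_AS, &rl) != 0)
>>             perror("setrlimit");
>>         execvp(argv[1], argv + 1);
>>         perror("execvp");
>>         _exit(127);
>>     }
>>     int status;
>>     waitpid(pid, &status, 0);
>>     return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
>> }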
>>
>> It would be folly to let old computers constrain gcc's ability to
>> optimize on modern computers. A better approach is to use gcc's
>> well-tested ability to cross-compile from a modern computer to your
>> old computer.
>>
>>
>>> I guess I can just make do with 4.3.5 built with host cc.
>>> Maybe I'll try splitting up some of the files. Is that viable to be applied for real?
>>
>> I think you will make better progress by using -O1 when you compile.
>>
>> Ian


