Python eats lots of memory on x86_64

On Mon, 2008-05-05 at 23:21 +0100, Richard W.M. Jones wrote:
> On Mon, May 05, 2008 at 09:25:24AM -0400, Neal Becker wrote:
> > Nice, but I think it would be nicer to implement this directly in python
> > (ducks...)
> 
> So we can get all the advantages of consuming huge amounts of memory,

 So I've actually had a look at this recently, mainly due to yum
resource usage on .i386 vs. x86_64, and this troll response to a troll
response is as good a place as any to put it, I think.

 So first I wrote a simplish program which just created new
yum.YumBase() objects and appended them to a list, reading the numbers
from /proc/self/status.
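
Roughly, it looked something like this (just a sketch of the idea; the
exact fields parsed and the output formatting may well have differed):

import yum

def vm_stats():
    # VmPeak/VmSize/VmRSS are reported in kB in /proc/self/status
    ret = {}
    for line in open("/proc/self/status"):
        key = line.split(":")[0]
        if key in ("VmPeak", "VmSize", "VmRSS"):
            ret[key] = int(line.split(":")[1].split()[0])
    return ret

objs = []
for i in xrange(90002):
    objs.append(yum.YumBase())
    if i in (0, 1, 90001):
        st = vm_stats()
        print "%6u peak %7.2fMB size %7.2fMB rss %7.2fMB" % \
              (i, st["VmPeak"] / 1024.0, st["VmSize"] / 1024.0,
               st["VmRSS"] / 1024.0)

...which gave me: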

.x86_64     0 peak 219.90MB size 219.90MB rss  13.30MB
.x86_64     1 peak 219.90MB size 219.90MB rss  13.33MB
.x86_64 90001 peak 610.46MB size 610.46MB rss 403.75MB

.i386       0 peak  20.65MB size  20.65MB rss   9.61MB
.i386       1 peak  20.65MB size  20.65MB rss   9.63MB
.i386   90001 peak 212.77MB size 212.77MB rss 201.82MB

...which is about what we've seen when profiling yum itself: 2x for RSS
and much more for VSZ (10x to start with above, which is nice). Then I
added a "pmap" call right at the end, the most interesting bit of which
shows:

0000000000601000 449696K rw---    [ anon ]
[...]
00002aaaaab5a000  76136K r----  /usr/lib/locale/locale-archive
[...]
00002aaaafa8d000     20K r-x--  /usr/lib64/python2.5/lib-dynload/stropmodule.so
00002aaaafa92000   2044K -----  /usr/lib64/python2.5/lib-dynload/stropmodule.so
00002aaaafc91000      8K rw---  /usr/lib64/python2.5/lib-dynload/stropmodule.so

...on .x86_64, and taking a single shared object as an example vs. .i386:

00c58000             16K r-x--  /usr/lib/python2.5/lib-dynload/stropmodule.so
00c5c000              8K rwx--  /usr/lib/python2.5/lib-dynload/stropmodule.so
[...]
09290000         222296K rwx--    [ anon ]
[...]
b7d23000           2048K r----  /usr/lib/locale/locale-archive

...the 38x locale archive resource usage is explained by these lines
from glibc/locale/loadarchive.c:

      /* Map an initial window probably large enough to cover the header
         and the first locale's data.  With a large address space, we can
         just map the whole file and be sure everything is covered.  */

      mapsize = (sizeof (void *) > 4 ? archive_stat.st_size
                 : MIN (archive_stat.st_size, ARCHIVE_MAPPING_WINDOW));

      result = __mmap64 (NULL, mapsize, PROT_READ, MAP_FILE|MAP_COPY, fd, 0);

...which means any locale-using C program gets an extra ~73MB of VSZ at
startup on .x86_64, and that any C program gets ~2MB of VSZ per shared
object it loads[1].
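
To see what that decision does on a given box, here's a quick sketch
that mirrors the quoted logic (the 2MB ARCHIVE_MAPPING_WINDOW value and
the locale-archive path are assumptions taken from the glibc/install I
looked at):

import os, struct

ARCHIVE_MAPPING_WINDOW = 2 * 1024 * 1024   # assumed, see loadarchive.c
archive_size = os.stat("/usr/lib/locale/locale-archive").st_size
ptr_size = struct.calcsize("P")            # sizeof(void *) for this interpreter

if ptr_size > 4:
    mapsize = archive_size                 # 64-bit: map the whole file up front
else:
    mapsize = min(archive_size, ARCHIVE_MAPPING_WINDOW)

print "ptr %d bytes, archive %.2fMB, initial mmap %.2fMB" % \
      (ptr_size, archive_size / 1048576.0, mapsize / 1048576.0)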

 The next interesting bit is that there are roughly 24 "anon" mappings
for .x86_64 and only 20 for .i386. A little investigation shows that
glibc is again the reason, as the default M_MMAP_THRESHOLD doesn't expand
with size_t/time_t/etc. ... and setting MALLOC_MMAP_MAX_=0 produces the
same number of anon mappings on x86_64.
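
If you want to count them without eyeballing pmap output, a quick check
against /proc (my own hack, not the tool used above) is:

import sys

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
anon = 0
for line in open("/proc/%s/maps" % pid):
    # mappings with no pathname column are what pmap prints as "[ anon ]"
    if len(line.split()) < 6:
        anon += 1
print "%d anonymous mappings" % anon

Running the target with MALLOC_MMAP_MAX_=0 in its environment and
pointing this at its pid shows the difference mentioned above.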

 At which point the numbers add up as "simple doubling" as you go from
a 4-byte size_t/time_t/intptr_t/etc. to 8 bytes for the same.
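
The doubling is easy enough to see directly from Python:

from ctypes import sizeof, c_void_p, c_size_t, c_long

for name, typ in (("void *", c_void_p), ("size_t", c_size_t),
                  ("long", c_long)):
    print "%-7s %d bytes" % (name, sizeof(typ))

...which prints 4s on .i386 and 8s on .x86_64, and every Python object
carries at least a refcount and a type pointer, both of which double.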

 HTH. HAND.


[1] I assume this dead space is for a reason (alignment?), and isn't
wasting any real memory (RSS implies this is true) ... although it is
far from obvious what is happening in both cases.
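
If anyone wants to dig further, one rough way to total the dead space
up (again my own quick check; the "---p" entries in /proc/<pid>/maps are
the no-permission holes pmap shows above):

import sys

pid = sys.argv[1] if len(sys.argv) > 1 else "self"
total = 0
for line in open("/proc/%s/maps" % pid):
    fields = line.split()
    # file-backed mappings with no permissions at all are the padding holes
    if len(fields) >= 6 and fields[1] == "---p" and ".so" in fields[5]:
        start, end = [int(x, 16) for x in fields[0].split("-")]
        total += end - start
print "%.2fMB of no-permission shared object padding" % (total / 1048576.0)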

-- 
James Antill -- <james@xxxxxxxxxxxxxxxxx>
"Please, no.  Let's not pull in a dependency for something as simple as a
string library." -- Kristian Høgsberg <krh@xxxxxxxxxx>

-- 
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list
