Re: [Gluster-users] Memory leak in GlusterFS FUSE client

I've applied the client_cbk_cache_invalidation leak patch, and here are the results.

Launch:

===
valgrind --leak-check=full --show-leak-kinds=all --log-file="valgrind_fuse.log" /usr/bin/glusterfs -N --volfile-server=server.example.com --volfile-id=somevolume /mnt/somevolume
find /mnt/somevolume -type d
===
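
(A note on method: "dropping the VFS cache" below refers to the standard kernel knob, done the same way as in my previous tests. A minimal sketch, assuming root privileges:)

===
# flush the page cache plus dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
===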

During the traversal, the RSS value of the glusterfs process went from 79M to 644M. Then I dropped the VFS cache (as in previous tests), but the RSS value was not affected. Then I took a statedump:

https://gist.github.com/11c7b11fc99ab123e6e2

Then I unmounted the volume and got the Valgrind log:

https://gist.github.com/99d2e3c5cb4ed50b091c

The leaks reported by Valgrind do not match the overall runtime memory consumption in size, so I believe that with the latest patch some cleanup is done better on exit (unmount), but at runtime there are still issues.
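
(For anyone reproducing the RSS measurement: a minimal sketch of how the value can be tracked, assuming a single glusterfs process on the machine:)

===
# sample the resident set size of the fuse client every 5 seconds
watch -n5 'grep VmRSS /proc/$(pidof glusterfs)/status'
===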

13.01.2016 12:56, Soumya Koduri wrote:
On 01/13/2016 04:08 PM, Soumya Koduri wrote:


On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
Just in case, here is the Valgrind output from the FUSE client with 3.7.6 +
the API-related patches we discussed before:

https://gist.github.com/cd6605ca19734c1496a4


Thanks for sharing the results. I made changes to fix one leak reported
there wrt 'client_cbk_cache_invalidation':
     - http://review.gluster.org/#/c/13232/

The other inode*-related memory reported as lost is probably because the
FUSE client process doesn't clean up its memory (doesn't call fini())
while exiting. Hence the majority of those allocations are listed as
lost. But most of the inodes should have been purged when we dropped the
VFS cache. Did you drop the VFS cache before exiting the process?
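
(One way to verify the purge from the statedump; a sketch, assuming the dump carries the fuse inode-table counters as in recent builds:)

===
# check active/lru inode counts in the fuse inode table after dropping caches
grep -A3 'xlator.mount.fuse.itable' /var/run/gluster/glusterdump.<pid>.dump.*
===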

I shall add some log statements and check that part.

Also, please take a statedump of the FUSE mount process (after dropping
the VFS cache) when you see high memory usage, by issuing the following
command:
	'kill -USR1 <pid-of-gluster-process>'

The statedump will be written to a 'glusterdump.<pid>.dump.timestamp'
file in /var/run/gluster or /usr/local/var/run/gluster.
Please refer to [1] for more information.
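
(Putting the steps together; a sketch, assuming a single glusterfs process and the default dump directory:)

===
pid=$(pidof glusterfs)      # assumes exactly one glusterfs process
kill -USR1 "$pid"           # trigger the statedump
ls -l /var/run/gluster/glusterdump."$pid".dump.*   # or /usr/local/var/run/gluster
===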

Thanks,
Soumya
[1] http://review.gluster.org/#/c/8288/1/doc/debugging/statedump.md


Thanks,
Soumya

12.01.2016 08:24, Soumya Koduri wrote:
For the FUSE client, I tried the VFS drop_caches approach as suggested by
Vijay in an earlier mail. Though all the inodes get purged, I still don't
see much of a drop in the memory footprint. We need to investigate what
else is consuming so much memory here.
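
(To see what holds the memory, one could rank the per-type allocation counters from a statedump; a sketch, assuming the usual 'size=' fields inside the memusage sections:)

===
# list the biggest per-type allocations recorded in a statedump
awk '/memusage\]$/ {sec=$0} /^size=/ {sub(/^size=/,""); print $0, sec}' \
    /var/run/gluster/glusterdump.<pid>.dump.* | sort -rn | head
===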



