Re: FUSE memory leaks, all versions

On 29/07/2016 at 18:39, Pranith Kumar Karampuri wrote:


On Fri, Jul 29, 2016 at 2:26 PM, Yannick Perret <yannick.perret@xxxxxxxxxxxxx> wrote:
OK, last try:
after investigating more versions I found that the FUSE client leaks memory in all of them.
I tested:
- 3.6.7 client on Debian 7 32-bit and on Debian 8 64-bit (with 3.6.7 servers on Debian 8 64-bit)
- 3.6.9 client on Debian 7 32-bit and on Debian 8 64-bit (with 3.6.7 servers on Debian 8 64-bit)
- 3.7.13 client on Debian 8 64-bit (with 3.8.1 servers on Debian 8 64-bit)
- 3.8.1 client on Debian 8 64-bit (with 3.8.1 servers on Debian 8 64-bit)
In all cases they were compiled from source, apart from 3.8.1 where .deb packages were used (due to a configure runtime error).
For 3.7 it was compiled with --disable-tiering. I also tried compiling with --disable-fusermount (no change).

In all of these cases the memory (resident and virtual) of the glusterfs process on the client grows on each activity, never reaches a maximum, and never shrinks.
"Activity" for these tests means cp -Rp and ls -lR.
The client I let grow the longest exceeded ~4 GB of RAM. On smaller machines it ends with the OOM killer killing the glusterfs process, or with glusterfs dying due to an allocation error.

In 3.6 memory seems to grow continuously, whereas in 3.8.1 it grows in "steps" (430400 KB → 629144 (~1 min) → 762324 (~1 min) → 827860…).
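For reference, a minimal sketch of one way to watch this growth (assuming a single glusterfs process on the client; the one-minute interval is arbitrary):

    # Print VSZ/RSS (in KB) of the FUSE client once a minute while it is alive.
    PID=$(pidof glusterfs)
    while kill -0 "$PID" 2>/dev/null; do
        ps -o vsz=,rss= -p "$PID"
        sleep 60
    done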

All tests were performed on a single test volume used only by my test client. The volume is a basic x2 replica. The only parameters I changed on this volume (without any effect) are diagnostics.client-log-level set to ERROR and network.inode-lru-limit set to 1024 (the corresponding commands are sketched below).
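A sketch of how such options are set with the standard gluster CLI on one of the servers (SHARE being the volume name used in the mount command further down):

    gluster volume set SHARE diagnostics.client-log-level ERROR
    gluster volume set SHARE network.inode-lru-limit 1024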

Could you attach statedumps of your runs?
The following link has the steps to capture them: https://gluster.readthedocs.io/en/latest/Troubleshooting/statedump/. We basically need to see which memory types are increasing. If you can help find the issue, we can send the fixes for your workload. There is a 3.8.2 release in around 10 days, I think; we can probably target this issue for that.
Here are statedumps.
Steps (a rough scripted version of the same sequence follows the list):
1. mount -t glusterfs ldap1.my.domain:SHARE /root/MNT/ (here VSZ and RSS are 381896 35828)
2. take a dump with kill -USR1 <pid-of-glusterfs-process> (file glusterdump.n1.dump.1470042769)
3. perform 'ls -lR /root/MNT | wc -l' (btw the result of wc -l is 518396 :)) and 'cp -Rp /usr/* /root/MNT/boo' (VSZ/RSS are 1301536/711992 at the end of these operations)
4. take a dump with kill -USR1 <pid-of-glusterfs-process> (file glusterdump.n2.dump.1470043929)
5. do 'cp -Rp * /root/MNT/toto/', i.e. into another directory (VSZ/RSS are 1432608/909968 at the end of this operation)
6. take a dump with kill -USR1 <pid-of-glusterfs-process> (file glusterdump.n3.dump.)
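The same sequence as a rough script (paths and volume name as above; where the dump files land depends on the configured statedump directory, typically /var/run/gluster):

    #!/bin/bash
    # Sketch of steps 1-6; VSZ/RSS are printed just before each statedump.
    mount -t glusterfs ldap1.my.domain:SHARE /root/MNT/
    PID=$(pidof glusterfs)
    dump() {
        ps -o vsz=,rss= -p "$PID"      # current memory usage
        kill -USR1 "$PID"              # trigger a statedump
        sleep 5                        # give it a moment to be written
    }
    dump                               # step 2: baseline
    ls -lR /root/MNT | wc -l           # step 3: metadata-heavy load
    cp -Rp /usr/* /root/MNT/boo
    dump                               # step 4
    cp -Rp * /root/MNT/toto/           # step 5: copy into another directory
    dump                               # step 6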

The dump files are gzipped because they are very large.
They are available here (too big for email):
http://wikisend.com/download/623430/glusterdump.n1.dump.1470042769.gz
http://wikisend.com/download/771220/glusterdump.n2.dump.1470043929.gz
http://wikisend.com/download/428752/glusterdump.n3.dump.1470045181.gz
(I'm keeping the files in case someone wants them in another format.)
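As a rough illustration of "which memory types are increasing": assuming the dumps use the usual "[xlator - usage-type <type> memusage]" sections with "num_allocs=" lines, as described in the statedump documentation, the per-type allocation counts of two of these dumps could be compared with something like:

    #!/bin/bash
    # Sum num_allocs per usage-type in a statedump, then list the types
    # whose allocation count grew between the first and second dump.
    alloc_counts() {
        awk '/usage-type .* memusage\]/ { type = $(NF-1) }
             /^num_allocs=/             { split($0, a, "="); sum[type] += a[2] }
             END { for (t in sum) print t, sum[t] }' "$1" | sort
    }
    join <(alloc_counts glusterdump.n1.dump.1470042769) \
         <(alloc_counts glusterdump.n2.dump.1470043929) \
      | awk '$3 > $2 { print $1 ": " $2 " -> " $3 }'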

Client and servers were installed from .deb files (glusterfs-client_3.8.1-1_amd64.deb and glusterfs-common_3.8.1-1_amd64.deb on the client side).
They are all Debian 8 64-bit. The servers are test machines that serve only one volume to this single client. The volume is a simple x2 replica. For the tests I only changed the network.inode-lru-limit value to 1024. The mount point /root/MNT is used only for these tests.

--
Y.



