Re: [Gluster-devel] Memory leak in GlusterFS FUSE client

If this message appears well before the volume is unmounted, can you try mounting the volume manually with this command and repeating the tests?

glusterfs --fopen-keep-cache=off --volfile-server=<server> --volfile-id=/<volume> <mount point>

This will prevent invalidation requests from being sent to the kernel, so there shouldn't be any memory leak even if the worker thread exits prematurely.

If that solves the problem, we could try to determine the cause of the premature exit and fix it.
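A quick way to verify would be to take two statedumps some time apart and compare the gf_fuse_mt_invalidate_node_t allocation counts: if the workaround is effective, num_allocs should stay flat instead of growing. Assuming the dumps are in the default location, something like:

grep -A5 gf_fuse_mt_invalidate_node_t /var/run/gluster/glusterdump.*

should make that easy to compare.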

Xavi


On 20/01/16 10:08, Oleksandr Natalenko wrote:
Yes, there are a couple of messages like this in my logs too (I guess one
message per remount):

===
[2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop] 0-glusterfs-fuse: kernel notifier loop terminated
===

On Wednesday, 20 January 2016, 09:51:23 EET Xavier Hernandez wrote:
I'm seeing a similar problem with 3.7.6.

This latest statedump contains a lot of gf_fuse_mt_invalidate_node_t
objects in fuse. Looking at the code, I see they are used to send
invalidations to kernel fuse; however, this is done in a separate thread
that writes a log message when it exits. On the system where I'm seeing
the memory leak, I can see that message in the log files:

[2016-01-18 23:04:55.384873] I [fuse-bridge.c:3875:notify_kernel_loop] 0-glusterfs-fuse: kernel notifier loop terminated

But the volume is still working at this point, so any future inode
invalidations will leak memory, because the thread that should release
them has already exited.

Can you check if you also see this message in the mount log?

It seems that this thread terminates if write() returns any error other
than ENOENT. I'm not sure what other error could be triggering this
here.
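To illustrate the failure mode, here is a simplified, self-contained sketch of the pattern described above. This is NOT the actual fuse-bridge.c code; the names, the queue layout and the error handling are illustrative assumptions. The point is that a single worker thread is the only place queued invalidation nodes get freed, so once it exits on an unexpected write() error, every node queued afterwards leaks:

/* leak-sketch.c: simplified illustration, not the real fuse-bridge.c.
 * Build with: cc -std=c99 -pthread leak-sketch.c */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct inval_node {
    char               buf[64];   /* invalidation payload */
    struct inval_node *next;
};

static struct inval_node *head;   /* queue of pending invalidations */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int fuse_fd = -1;          /* stand-in for /dev/fuse; -1 forces EBADF */

static void *notify_kernel_loop(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == NULL)
            pthread_cond_wait(&cond, &lock);
        struct inval_node *node = head;
        head = node->next;
        pthread_mutex_unlock(&lock);

        ssize_t rv = write(fuse_fd, node->buf, sizeof(node->buf));
        free(node);               /* only this thread ever frees nodes */

        /* ENOENT only means the kernel no longer knows the inode;
         * any other error ends the loop -- and with it, the only
         * consumer that releases queued nodes. */
        if (rv == -1 && errno != ENOENT)
            break;
    }
    fprintf(stderr, "kernel notifier loop terminated\n");
    return NULL;
}

static void queue_invalidation(void)
{
    struct inval_node *node = calloc(1, sizeof(*node));
    snprintf(node->buf, sizeof(node->buf), "invalidate inode");
    pthread_mutex_lock(&lock);
    node->next = head;            /* producers keep allocating... */
    head = node;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, notify_kernel_loop, NULL);
    for (int i = 0; i < 1000; i++)
        queue_invalidation();
    pthread_join(tid, NULL);      /* worker died on the first write();
                                   * the remaining nodes are leaked */
    return 0;
}

Running this under valgrind should report the still-queued nodes as definitely lost, which matches the steadily growing gf_fuse_mt_invalidate_node_t counts in the statedumps.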

Xavi

On 20/01/16 00:13, Oleksandr Natalenko wrote:
Here are more RAM usage stats and a statedump of a GlusterFS mount
approaching yet another OOM:

===
root     32495  1.4 88.3 4943868 1697316 ?   Ssl  Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
===

https://gist.github.com/86198201c79e927b46bd

1.6 GB of RAM just for an almost idle mount (we occasionally store Asterisk
recordings there). Three OOMs in 69 days of uptime.

Any thoughts?

On Wednesday, 13 January 2016, 16:26:59 EET Soumya Koduri wrote:
kill -USR1
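(For anyone reproducing this: sending SIGUSR1 makes the glusterfs process write a statedump, e.g.:

kill -USR1 <glusterfs pid>

The dumps should land under /var/run/gluster/ by default, unless the statedump path was changed.)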

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



