I'm seeing a similar problem with 3.7.6.
This latest statedump contains a lot of gf_fuse_mt_invalidate_node_t
objects in fuse. Looking at the code, I see they are used to send
invalidations to kernel fuse; however, this is done in a separate thread
that writes a log message when it exits. On the system where I'm seeing
the memory leak, I can see that message in the log files:
[2016-01-18 23:04:55.384873] I [fuse-bridge.c:3875:notify_kernel_loop]
0-glusterfs-fuse: kernel notifier loop terminated
But the volume is still working at this moment, so any future inode
invalidations will leak memory, because it was this thread that was
supposed to release them.
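To make the failure mode concrete, here is a minimal sketch of the
pattern I mean (the names invalidate_node, inval_head and
queue_invalidation are illustrative, not the actual fuse-bridge.c
identifiers): the invalidation path only allocates and queues a node,
and only the notifier thread ever dequeues and frees them, so once that
thread is gone every new invalidation stays allocated forever:
===
/* Simplified sketch of the pattern, not the actual fuse-bridge.c code;
 * all names (invalidate_node, inval_head, queue_invalidation) are
 * illustrative. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct invalidate_node {
        struct invalidate_node *next;
        char                    msg[4096]; /* serialized FUSE notify message */
        size_t                  len;
};

struct invalidate_node *inval_head;
pthread_mutex_t         inval_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t          inval_cond = PTHREAD_COND_INITIALIZER;

/* Producer side: called for every inode/entry invalidation.  It only
 * allocates and links the node; freeing is the notifier thread's job. */
void queue_invalidation(const void *msg, size_t len)
{
        struct invalidate_node *node = calloc(1, sizeof(*node));

        if (!node || len > sizeof(node->msg)) {
                free(node);
                return;
        }
        memcpy(node->msg, msg, len);
        node->len = len;

        pthread_mutex_lock(&inval_lock);
        node->next = inval_head;
        inval_head = node;
        pthread_cond_signal(&inval_cond);
        pthread_mutex_unlock(&inval_lock);
        /* If the notifier thread has already terminated, this node (and
         * every one queued after it) is never dequeued and never freed:
         * that is the leak. */
}
===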
Can you check whether you also see this message in the mount log?
It seems that this thread terminates if write() returns any error other
than ENOENT. I'm not sure which other error could be hitting this path
while the mount is still healthy.
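For reference, the write/exit logic I'm describing looks roughly like
this (again only a sketch reusing the illustrative names from the
snippet above, not the actual fuse-bridge.c code):
===
/* Sketch of the notifier thread; same illustrative names as above. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct invalidate_node {              /* same layout as in the sketch above */
        struct invalidate_node *next;
        char                    msg[4096];
        size_t                  len;
};

extern struct invalidate_node *inval_head;
extern pthread_mutex_t         inval_lock;
extern pthread_cond_t          inval_cond;

void *notify_kernel_loop(void *arg)
{
        int fuse_fd = *(int *)arg;    /* fd of the /dev/fuse connection */
        struct invalidate_node *node;
        ssize_t rv;

        for (;;) {
                pthread_mutex_lock(&inval_lock);
                while (!inval_head)
                        pthread_cond_wait(&inval_cond, &inval_lock);
                node = inval_head;
                inval_head = node->next;
                pthread_mutex_unlock(&inval_lock);

                rv = write(fuse_fd, node->msg, node->len);
                free(node);

                /* ENOENT only means the kernel no longer had that inode or
                 * entry in its cache, so the loop keeps going.  Any other
                 * error ends the loop for the remaining life of the mount;
                 * nothing restarts it and nothing drains the queue anymore. */
                if (rv < 0 && errno != ENOENT)
                        break;
        }

        fprintf(stderr, "kernel notifier loop terminated\n");
        return NULL;
}
===
So a single write failure other than ENOENT is enough to stop
invalidations from ever being released again for that mount.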
Xavi
On 20/01/16 00:13, Oleksandr Natalenko wrote:
Here are more RAM usage stats and a statedump of a GlusterFS mount
approaching yet another OOM:
===
root 32495 1.4 88.3 4943868 1697316 ? Ssl Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
===
https://gist.github.com/86198201c79e927b46bd
1.6G of RAM for an almost idle mount (we occasionally store Asterisk
recordings there). That makes three OOMs in 69 days of uptime.
Any thoughts?
On Wednesday, 13 January 2016 at 16:26:59 EET, Soumya Koduri wrote:
kill -USR1
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel