Re: [Gluster-devel] Memory leak in GlusterFS FUSE client

I perform the tests using 1) rsync (a massive copy of millions of files) and 
2) find (a simple tree traversal).
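
For completeness, the two workloads are roughly the following (paths are 
illustrative, not the exact ones I use):

===
rsync -a /source/tree/ /mnt/volume/tree/
find /mnt/volume > /dev/null
===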

To check whether the memory leak happens, I use the find tool. I've performed 
two traversals (with and without fopen-keep-cache=off) with a remount between 
them, but I didn't encounter the "kernel notifier loop terminated" message 
during either traversal, nor before unmounting the volume. Nevertheless, 
memory still leaks (at least up to 3 GiB in each case), so I believe 
invalidation requests are not the cause.

I've also checked the logs for the volume where I run rsync, and the "kernel 
notifier loop terminated" message appears somewhere in the middle of the 
rsync run, not right before unmounting. But memory starts leaking as soon as 
rsync starts, not only after the "kernel notifier loop terminated" message. 
So I believe "kernel notifier loop terminated" is not the cause either.

Also, I've implemented a quick-and-dirty GlusterFS FUSE client using the API 
(see https://github.com/pfactum/xglfs), and with the latest patches from this 
thread (http://review.gluster.org/#/c/13096/, 
http://review.gluster.org/#/c/13125/ and http://review.gluster.org/#/c/13232/) 
my FUSE client does not leak on tree traversal. So I believe the leak is 
specific to the GlusterFS FUSE implementation.
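
For comparison, the core of such an API-based traversal looks roughly like 
this. It is only a minimal sketch against libgfapi, not the actual xglfs 
code; the volume name and server are placeholders (taken from the ps output 
quoted below), and error handling is reduced to the bare minimum:

===
#include <stdio.h>
#include <dirent.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
        /* Connect to the volume over libgfapi directly,
         * bypassing the FUSE bridge entirely. */
        glfs_t *fs = glfs_new("volume");
        if (!fs)
                return 1;
        glfs_set_volfile_server(fs, "tcp", "server.example.com", 24007);
        if (glfs_init(fs) != 0)
                return 1;

        /* Non-recursive listing of the volume root; xglfs
         * implements the FUSE callbacks on top of calls
         * like these. */
        glfs_fd_t *dir = glfs_opendir(fs, "/");
        if (dir) {
                struct dirent *de;
                while ((de = glfs_readdir(dir)) != NULL)
                        printf("%s\n", de->d_name);
                glfs_closedir(dir);
        }

        glfs_fini(fs);
        return 0;
}
===

(Build with something like "gcc walk.c -o walk -lgfapi".)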

How could I debug the memory leak further?
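
For reference, the statedumps mentioned in this thread were generated with 
the SIGUSR1 approach Soumya suggested earlier; if I read the code correctly, 
the dumps land in /var/run/gluster by default:

===
kill -USR1 <pid-of-glusterfs-mount-process>
===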

On Thursday, 21 January 2016 at 10:32:32 EET, Xavier Hernandez wrote:
> If this message appears way before the volume is unmounted, can you try
> to mount the volume manually using this command and repeat the tests?
> 
> glusterfs --fopen-keep-cache=off --volfile-server=<server>
> --volfile-id=/<volume> <mount point>
> 
> This will prevent invalidation requests from being sent to the kernel,
> so there shouldn't be any memory leak even if the worker thread exits
> prematurely.
> 
> If that solves the problem, we could try to determine the cause of the
> premature exit and solve it.
> 
> Xavi
> 
> On 20/01/16 10:08, Oleksandr Natalenko wrote:
> > Yes, there are a couple of messages like this in my logs too (I guess
> > one message per remount):
> > 
> > ===
> > [2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop] 0-glusterfs-fuse: kernel notifier loop terminated
> > ===
> > 
> > On Wednesday, 20 January 2016 at 09:51:23 EET, Xavier Hernandez wrote:
> >> I'm seeing a similar problem with 3.7.6.
> >> 
> >> This latest statedump contains a lot of gf_fuse_mt_invalidate_node_t
> >> objects in fuse. Looking at the code, I see they are used to send
> >> invalidations to the kernel FUSE module; however, this is done in a
> >> separate thread that writes a log message when it exits. On the system
> >> where I'm seeing the memory leak, that message is present in the log
> >> files:
> >> 
> >> [2016-01-18 23:04:55.384873] I [fuse-bridge.c:3875:notify_kernel_loop] 0-glusterfs-fuse: kernel notifier loop terminated
> >> 
> >> But the volume is still working at that moment, so any further inode
> >> invalidations will leak memory, because this thread is the one that
> >> should have released it.
> >> 
> >> Can you check if you also see this message in the mount log?
> >> 
> >> It seems that this thread terminates if write() returns any error
> >> other than ENOENT. I'm not sure which other error could be causing
> >> this.
> >> 
> >> Xavi
> >> 
> >> On 20/01/16 00:13, Oleksandr Natalenko wrote:
> >>> Here are more RAM usage stats and a statedump of a GlusterFS mount
> >>> approaching yet another OOM:
> >>> 
> >>> ===
> >>> root     32495  1.4 88.3 4943868 1697316 ?     Ssl  Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
> >>> ===
> >>> 
> >>> https://gist.github.com/86198201c79e927b46bd
> >>> 
> >>> 1.6 GiB of RAM for an almost idle mount (we occasionally store
> >>> Asterisk recordings there), and three OOMs in 69 days of uptime.
> >>> 
> >>> Any thoughts?
> >>> 
> >>> On Wednesday, 13 January 2016 at 16:26:59 EET, Soumya Koduri wrote:
> >>>> kill -USR1
> >>> 


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



