Re: [Gluster-users] Memory leak in GlusterFS FUSE client

Here are the results of the "rsync" test. I have two volumes, source and target, and
I'm rsyncing multiple files from one volume to the other.
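
For clarity, the setup is roughly the following (the mount syntax is the standard
one; the rsync flags here are an illustration, not my exact command line):

===
# both volumes are FUSE-mounted from the same server
mount -t glusterfs glusterfs.example.com:/source /mnt/net/glusterfs/source
mount -t glusterfs glusterfs.example.com:/target /mnt/net/glusterfs/target

# bulk-copy many files between the volumes; any copy that touches
# millions of inodes shows the same memory pattern
rsync -av /mnt/net/glusterfs/source/ /mnt/net/glusterfs/target/
===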

Source volume:

===
root     22259  3.5  1.5 1204200 771004 ?      Ssl  Jan23 109:42 /usr/sbin/glusterfs --volfile-server=glusterfs.example.com --volfile-id=source /mnt/net/glusterfs/source
===

One may see that the memory consumption of the source volume is not as high as with
the "find" test. Here is the source volume client statedump:
https://gist.github.com/ef5b798859219e739aeb

Here is the source volume info: https://gist.github.com/3d2f32e7346df9333004
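
For anyone who wants to reproduce this: the client statedumps were taken the usual
way, by sending SIGUSR1 to the FUSE client process (by default the dump lands under
/var/run/gluster):

===
# make the source volume client (pid 22259 from the ps output above)
# write a statedump; it appears as glusterdump.<pid>.dump.<timestamp>
kill -USR1 22259
ls /var/run/gluster/
===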

Target volume:

===
root     22200 23.8  6.9 3983676 3456252 ?     Ssl  Jan23 734:57 /usr/sbin/glusterfs --volfile-server=glusterfs.example.com --volfile-id=target /mnt/net/glusterfs/target
===

Here is the target volume info: https://gist.github.com/c9de01168071575b109e

The target volume's RAM consumption is very high (more than 3 GiB). Here is its
client statedump: https://gist.github.com/31e43110eaa4da663435

I see huge DHT-related memory usage, e.g.:

===
[cluster/distribute.asterisk_records-dht - usage-type gf_common_mt_mem_pool memusage]
size=725575592
num_allocs=7552486
max_size=725575836
max_num_allocs=7552489
total_allocs=90843958

[cluster/distribute.asterisk_records-dht - usage-type gf_common_mt_char memusage]
size=586404954
num_allocs=7572836
max_size=586405157
max_num_allocs=7572839
total_allocs=80463096
===
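
For scale: 725575592 bytes is about 692 MiB spread over 7552486 live allocations,
i.e. roughly 96 bytes per allocation (and the gf_common_mt_char region averages ~77
bytes), so this looks like millions of small strings/pool objects rather than a few
large buffers. A quick way to rank the usage-types in a statedump by live size (a
sketch assuming the dump format shown above; the filename is hypothetical):

===
# print "size section" pairs and sort by live bytes, biggest first
awk '/^\[/ { sec = $0 }                     # remember current section header
     /^size=/ { split($0, a, "="); print a[2], sec }' \
  glusterdump.22200.dump.* | sort -rn | head
===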

Ideas?
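
P.S. The valgrind run mentioned in the quoted message below was done along these
lines (a sketch; the exact valgrind options may have differed, and -N keeps
glusterfs in the foreground so valgrind can follow it):

===
valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
  /usr/sbin/glusterfs -N --volfile-server=glusterfs.example.com \
  --volfile-id=source /mnt/net/glusterfs/source
===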

On Monday, 25 January 2016, 02:46:32 EET Oleksandr Natalenko wrote:
> Also, I've repeated the same "find" test again, but with glusterfs process
> launched under valgrind. And here is valgrind output:
> 
> https://gist.github.com/097afb01ebb2c5e9e78d
> 
> On Sunday, 24 January 2016, 09:33:00 EET Mathieu Chateau wrote:
> > Thanks for all your tests and time, it looks promising :)
> > 
> > 
> > Regards,
> > Mathieu CHATEAU
> > http://www.lotp.fr
> > 
> > 2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>:
> > > OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the
> > > following
> > > patches:
> > > 
> > > ===
> > > 
> > > Kaleb S KEITHLEY (1):
> > >       fuse: use-after-free fix in fuse-bridge, revisited
> > > 
> > > Pranith Kumar K (1):
> > >       mount/fuse: Fix use-after-free crash
> > > 
> > > Soumya Koduri (3):
> > >       gfapi: Fix inode nlookup counts
> > >       inode: Retire the inodes from the lru list in inode_table_destroy
> > >       upcall: free the xdr* allocations
> > > 
> > > ===
> > > 
> > > I run rsync from one GlusterFS volume to another. While memory usage
> > > started under 100 MiB, it stalled at around 600 MiB for the source
> > > volume and does not grow further. The target volume is at ~730 MiB,
> > > which is why I'm going to do several rsync rounds to see if it grows
> > > more (with no patches, bare 3.7.6 could consume more than 20 GiB).
> > > 
> > > No "kernel notifier loop terminated" message so far for both volumes.
> > > 
> > > Will report more in several days. I hope the current patches will be
> > > incorporated into 3.7.7.
> > > 
> > > On Friday, 22 January 2016, 12:53:36 EET Kaleb S. KEITHLEY wrote:
> > > > On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> > > > > On Friday, 22 January 2016, 12:32:01 EET Kaleb S. KEITHLEY wrote:
> > > > >> I presume by this you mean you're not seeing the "kernel notifier
> > > > >> loop terminated" error in your logs.
> > > > > 
> > > > > Correct, but only with simple traversing. Have to test under rsync.
> > > > 
> > > > Without the patch I'd get "kernel notifier loop terminated" within a
> > > > few minutes of starting I/O.  With the patch I haven't seen it in 24
> > > > hours of beating on it.
> > > > 
> > > > >> Hmmm.  My system is not leaking. Last 24 hours the RSZ and VSZ are
> > > > >> stable:
> > > > >> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out
> > > > > 
> > > > > What ops do you perform on the mounted volume? Read, write, stat?
> > > > > Is that 3.7.6 + patches?
> > > > 
> > > > I'm running an internally developed I/O load generator written by a
> > > > guy on our perf team.
> > > > 
> > > > It does create, write, read, rename, stat, delete, and more.


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



