hi Poornima,
I don't really have any advice on how you could reproduce this issue, and I don't have a core dump either (the process was killed after the OOM issue).
I will see what I can do.
I have applied the two settings you suggested.
Cheers,
tamas
On 08/04/2014 08:36 AM, Poornima Gurusiddaiah wrote:
Hi,
From the statedump it is evident that the iobufs are leaking.
Also, the hot count of pool-name=w-vol-io-cache:rbthash_entry_t is 10053, which implies the io-cache xlator could be the cause of the leak.
From the logs, it looks like the quick-read performance xlator is calling iobuf_free with NULL pointers, which implies quick-read could be leaking iobufs as well.
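To cross-check on your side, something along these lines should show the mem-pool counters in the statedump and any iobuf-related messages in the client log (the paths and file names here are just the usual defaults, adjust them for your setup):
$ grep -A1 'pool-name=' /var/run/gluster/glusterdump.<pid>.dump.<timestamp> #pool-name followed by its hot-count
$ grep -i iobuf /var/log/glusterfs/<mount-point>.log #iobuf warnings/errors from the client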
As a temporary solution, could you disable io-cache and/or quick-read and see if the leak still persists?
$ gluster volume set <volname> performance.io-cache off
$ gluster volume set <volname> performance.quick-read off
This may reduce performance to a certain extent.
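Once set, the options should show up under "Options Reconfigured" in the volume info output; for example (the volume name is a placeholder):
$ gluster volume info <volname> #look for performance.io-cache: off and performance.quick-read: off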
For further debugging, could you provide the core dump or steps to reproduce, if available?
Regards,
Poornima
----- Original Message -----
From: "Tamas Papp" <tompos@xxxxxxxxxxxxx>
To: "Poornima Gurusiddaiah" <pgurusid@xxxxxxxxxx>
Cc: Gluster-users@xxxxxxxxxxx
Sent: Sunday, August 3, 2014 10:33:17 PM
Subject: Re: high memory usage of mount
On 07/31/2014 09:17 AM, Tamas Papp wrote:
On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
Hi,
hi,
Can you provide the statedump of the process? It can be obtained as follows:
$ gluster --print-statedumpdir #create this directory if it doesn't exist.
$ kill -USR1 <pid-of-glusterfs-process> #generates state dump.
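The dump lands in that directory (typically /var/run/gluster, assuming the default), so the newest file can be picked up with something like:
$ ls -t /var/run/gluster/glusterdump.*.dump.* | head -1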
http://rtfm.co.hu/glusterdump.2464.dump.1406790562.zip
Also, exporting Gluster via the Samba VFS plugin method is preferred over the FUSE mount export. For more details refer to:
http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
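Roughly, the share section in smb.conf would look something like the following (the share name, volume name and volfile server are only placeholders; see the post above for the complete steps):
[gluster-share]
# name of the gluster volume to export and the host to fetch the volfile from
vfs objects = glusterfs
glusterfs:volume = <volname>
glusterfs:volfile_server = localhost
# path is relative to the root of the gluster volume
path = /
read only = no
kernel share modes = no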
When I tried it about half a year ago, it didn't work properly. Clients lost mounts, there were access errors, etc.
But I will give it a try, though the GlusterFS VFS module isn't included in Ubuntu's Samba packages, AFAIK.
Thank you,
tamas
P.S. I forgot to mention, I see this issue on only one node. The rest of the nodes are fine.
hi Poornima,
Do you have any idea what's going on here?
Thanks,
tamas
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users