Re: high memory usage of mount

Just an update: the settings below did not help for me.

Current settings:

Volume Name: w-vol
Type: Distribute
Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gl0:/mnt/brick1/export
Brick2: gl1:/mnt/brick1/export
Brick3: gl2:/mnt/brick1/export
Brick4: gl3:/mnt/brick1/export
Brick5: gl4:/mnt/brick1/export
Options Reconfigured:
nfs.mount-udp: on
nfs.addr-namelookup: off
nfs.ports-insecure: on
nfs.port: 2049
cluster.stripe-coalesce: on
nfs.disable: off
performance.flush-behind: on
performance.io-thread-count: 64
performance.quick-read: off
performance.stat-prefetch: on
performance.io-cache: off
performance.write-behind: on
performance.read-ahead: on
performance.write-behind-window-size: 4MB
performance.cache-refresh-timeout: 1
performance.cache-size: 4GB
network.frame-timeout: 60
performance.cache-max-file-size: 1GB
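
For reference, a listing like the one above is the output of 'gluster volume info w-vol'. Below is a minimal sketch of how the FUSE client's memory growth can be tracked over time; the pgrep pattern is an assumption and may need adjusting to match the actual mount:

$ pid=$(pgrep -of 'glusterfs.*w-vol')                              # assumed pattern: oldest glusterfs client process for w-vol
$ while true; do ps -o pid,rss,vsz,cmd -p "$pid"; sleep 60; done   # RSS/VSZ are reported in KiB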


Cheers,
tamas

On 08/04/2014 09:22 AM, Tamas Papp wrote:
hi Poornima,

I don't really have any advice on how you could reproduce this issue, and I don't have a core dump either (the process was killed after the OOM issue).

I will see what I can do.


I have applied the two settings you suggested.


Cheers,
tamas

On 08/04/2014 08:36 AM, Poornima Gurusiddaiah wrote:
Hi,

From the statedump it is evident that iobufs are leaking.
Also, the hot count of pool-name=w-vol-io-cache:rbthash_entry_t is 10053, which implies the io-cache xlator could be the cause of the leak. From the logs it looks like the quick-read performance xlator is calling iobuf_free with NULL pointers, which implies quick-read could be leaking iobufs as well.
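
The hot counts above come from the mem-pool section of the statedump. A rough sketch for pulling the relevant counters out of the dump file; the dump directory is an assumption, check it with 'gluster --print-statedumpdir':

$ grep -A 6 'pool-name=w-vol-io-cache:rbthash_entry_t' /var/run/gluster/glusterdump.*.dump.*   # hot-count/cold-count of the suspect pool
$ grep -i iobuf /var/log/glusterfs/*.log | tail                                                # rough check for the iobuf_free messages mentioned above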

As a temporary solution, could you disable io-cache and/or quick-read and see if the leak still persists?

$ gluster volume set w-vol performance.io-cache off
$ gluster volume set w-vol performance.quick-read off

This may reduce performance to a certain extent.
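
One way to confirm that both options actually took effect (a sketch, using the volume name from the listing above):

$ gluster volume info w-vol | grep -E 'quick-read|io-cache'   # both should now report: off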

For further debugging, could you provide the core dump or steps to reproduce, if available?

Regards,
Poornima

----- Original Message -----
From: "Tamas Papp" <tompos@xxxxxxxxxxxxx>
To: "Poornima Gurusiddaiah" <pgurusid@xxxxxxxxxx>
Cc: Gluster-users@xxxxxxxxxxx
Sent: Sunday, August 3, 2014 10:33:17 PM
Subject: Re:  high memory usage of mount


On 07/31/2014 09:17 AM, Tamas Papp wrote:
On 07/31/2014 09:02 AM, Poornima Gurusiddaiah wrote:
Hi,
hi,

Can you provide the statedump of the process? It can be obtained as follows:
$ gluster --print-statedumpdir   # create this directory if it doesn't exist
$ kill -USR1 <pid-of-glusterfs-process>   # generates the state dump
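
Put together, a minimal end-to-end sketch; the pgrep pattern is an assumption and may need adjusting:

$ statedir=$(gluster --print-statedumpdir)
$ mkdir -p "$statedir"                           # create the directory if it doesn't exist
$ kill -USR1 $(pgrep -of 'glusterfs.*w-vol')     # send USR1 to the FUSE client to generate the dump
$ ls "$statedir"/glusterdump.*.dump.*            # the new statedump shows up here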
http://rtfm.co.hu/glusterdump.2464.dump.1406790562.zip

Also, exporting Gluster via the Samba VFS plugin method is preferred over a FUSE mount export. For more details refer to:
http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
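
For anyone curious, a vfs_glusterfs share definition looks roughly like the snippet below. This is only a sketch: the share name, log path and volfile server are assumptions, and it requires a Samba build that ships the vfs_glusterfs module:

[w-vol-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = w-vol
    glusterfs:volfile_server = localhost
    glusterfs:logfile = /var/log/samba/glusterfs-w-vol.log
    kernel share modes = no
    read only = no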

When I tried it about half a year ago it didn't work properly: clients lost mounts, there were access errors, etc.

But I will give it a try, though it's not included in Ubuntu's Samba package AFAIK.


Thank you,
tamas

PS: I forgot to mention that I can see this issue on only one node. The rest of the nodes are fine.
hi Poornima,

Do you have any idea what's going on here?

Thanks,
tamas

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users




