Re: [Gluster-users] Memory leak in GlusterFS FUSE client

Here are two consecutive statedumps of the brick in question's memory usage [1] [2]. The glusterfs client process went from ~630 MB to ~1350 MB of memory usage in less than an hour.
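If it helps anyone reproduce the measurement, RSS growth like this can be tracked with something along these lines (a minimal sketch: it assumes a Linux /proc filesystem and that SIGUSR1 makes the glusterfs process write a statedump into its statedump directory; the PID below is a placeholder):

===
#!/usr/bin/env python
# Sample the resident memory of the glusterfs FUSE client and ask it
# for a statedump every 10 minutes. The PID is a placeholder; the dump
# usually lands in the statedump directory (e.g. /var/run/gluster).
import os, signal, time

CLIENT_PID = 12345  # placeholder: pidof glusterfs for this mount

def rss_mib(pid):
    # VmRSS is reported in kB in /proc/<pid>/status
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0
    return 0.0

while True:
    print("%s RSS: %.0f MiB" % (time.strftime("%H:%M:%S"), rss_mib(CLIENT_PID)))
    os.kill(CLIENT_PID, signal.SIGUSR1)  # trigger a statedump
    time.sleep(600)
===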

Volume options:

===
cluster.lookup-optimize: on
cluster.readdir-optimize: on
client.event-threads: 4
network.inode-lru-limit: 4096
server.event-threads: 8
performance.client-io-threads: on
storage.linux-aio: on
performance.write-behind-window-size: 4194304
performance.stat-prefetch: on
performance.quick-read: on
performance.read-ahead: on
performance.flush-behind: on
performance.write-behind: on
performance.io-thread-count: 4
performance.cache-max-file-size: 1048576
performance.cache-size: 33554432
performance.readdir-ahead: on
===

I observe such behavior on similar volumes where millions of files are stored. The volume in question holds ~11M small files (mail storage).

So, the memory leak persists. I had to switch to NFS temporarily :(.

Any idea?

[1] https://gist.github.com/46697b70ffe193fa797e
[2] https://gist.github.com/3a968ca909bfdeb31cca

On 28.09.2015 at 14:31, Raghavendra Bhat wrote:
Hi Oleksandr,

You are right. The description should have said that it is the limit on the
number of inodes in the LRU list of the inode cache. I have sent a
patch for that:
http://review.gluster.org/#/c/12242/ [3]

Regards,
Raghavendra Bhat

On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko
<oleksandr@xxxxxxxxxxxxxx> wrote:

I've checked the statedump of the volume in question and haven't found lots
of iobufs as mentioned in that bug report.

However, I've noticed that there are lots of LRU records like this:

===
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
fd-count=0
ref=0
ia_type=1
===
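(Such records can be counted with something as simple as this; a minimal sketch that only assumes the section-header format shown above, with a placeholder statedump path:)

===
# Count .lru.<N> sections in a glusterfs statedump.
# The statedump path is a placeholder; adjust to the real dump file.
import re

pattern = re.compile(r"\[conn\.\d+\.bound_xl\..*\.lru\.\d+\]")
count = 0
with open("/var/run/gluster/glusterdump.12345.dump") as f:
    for line in f:
        if pattern.match(line.strip()):
            count += 1
print(count)
===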

In fact, there are 16383 of them. I've checked "gluster volume set
help" to find something LRU-related and found this:

===
Option: network.inode-lru-limit
Default Value: 16384
Description: Specifies the maximum megabytes of memory to be used in
the inode cache.
===

Is there an error in the description stating "maximum megabytes of memory"?
Shouldn't it say "maximum number of LRU records"? If not, is it true
that the inode cache could grow up to 16 GiB for a client, and one
must lower the network.inode-lru-limit value?

Another thought: we've enabled write-behind, and the default
write-behind-window-size value is 1 MiB. So, with lots of small files
being written, could the write-behind buffer grow up to
inode-lru-limit × write-behind-window-size = 16 GiB? Could someone
explain this to me?
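(For clarity, the arithmetic behind that number is just the two configured limits multiplied together; a worst-case illustration only, since the write-behind window is accounted per open file and need not reach this bound:)

===
# Back-of-the-envelope upper bound implied above (illustration only).
inode_lru_limit = 16384                 # network.inode-lru-limit default
wb_window_bytes = 1 * 1024 * 1024       # write-behind-window-size default, 1 MiB
print("%.0f GiB" % (inode_lru_limit * wb_window_bytes / float(2 ** 30)))  # -> 16 GiB
===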

On 24.09.2015 at 10:42, Gabi C wrote:

Oh, my bad...
Could it be this one?

https://bugzilla.redhat.com/show_bug.cgi?id=1126831 [1]
Anyway, on oVirt + Gluster I experienced similar behavior...


Links:
------
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1126831
[2] http://www.gluster.org/mailman/listinfo/gluster-devel
[3] http://review.gluster.org/#/c/12242/
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel