Hi Oleksandr,
You are right. The description should have described it as the limit on the number of inodes in the LRU list of the inode cache. I have sent a patch to fix the description.
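For anyone who wants to lower that limit on a volume, a minimal sketch using the standard CLI (the volume name "myvol" and the value 8192 are only examples):
===
# cap the number of inodes kept in the LRU list of the inode table
gluster volume set myvol network.inode-lru-limit 8192
===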
Regards,
Raghavendra Bhat
On Thu, Sep 24, 2015 at 1:44 PM, Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx> wrote:
I've checked the statedump of the volume in question and haven't found lots of iobufs as mentioned in that bug report.
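(For anyone wanting to reproduce this, a statedump can be requested via the gluster CLI; the volume name below is only a placeholder, and the dump typically lands under /var/run/gluster:)
===
# ask the brick processes of the volume to dump their state
gluster volume statedump myvol
===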
However, I've noticed that there are lots of LRU records like this:
===
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
fd-count=0
ref=0
ia_type=1
===
In fact, there are 16383 of them. I've checked "gluster volume set help" looking for something LRU-related and found this:
===
Option: network.inode-lru-limit
Default Value: 16384
Description: Specifies the maximum megabytes of memory to be used in the inode cache.
===
Is there an error in the description stating "maximum megabytes of memory"? Shouldn't it mean the "maximum number of LRU records"? If not, is it true that the inode cache could grow up to 16 GiB per client, and that one must lower the network.inode-lru-limit value?
Another thought: we've enabled write-behind, and the default write-behind-window-size value is 1 MiB. So, should one conclude that, with lots of small files being written, the write-behind buffers could grow up to inode-lru-limit × write-behind-window-size = 16 GiB? Could someone explain this to me?
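If that math holds, I guess the mitigation would be to lower one or both of those values; a rough sketch (the volume name and numbers are only examples, not a recommendation):
===
# keep fewer inodes in the LRU list and shrink the per-file write-behind window
gluster volume set myvol network.inode-lru-limit 4096
gluster volume set myvol performance.write-behind-window-size 512KB
===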
24.09.2015 10:42, Gabi C wrote:
Oh, my bad...
Could it be this one?
https://bugzilla.redhat.com/show_bug.cgi?id=1126831
Anyway, on oVirt + Gluster I experienced similar behavior...