Re: xlator.mount.fuse.itable.lru_limit=0 at client fuse process

On Thu, Oct 18, 2018 at 2:30 PM Yanfei Wang <backyes@xxxxxxxxx> wrote:
Dear Developers,


After much tuning and benchmarking on different Gluster releases (3.12.15, 4.1,
3.11), the client fuse process eats hundreds of GB of RAM on a 256 GB system,
and is eventually OOM-killed.

Despite consulting many Google searches, FUSE-related papers, benchmarks, and
tests, we still cannot determine why the memory grows larger and larger. We
suspect that

xlator.mount.fuse.itable.lru_limit=0

at the client fuse process could give us some clues.


There is no 'lru_limit' implemented on the client side as of now! We are trying to get that feature done for glusterfs-6. Until then, try pruning the inode table by forcing forgets (by dropping caches):

echo 3 | sudo tee /proc/sys/vm/drop_caches
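
To check whether those forgets actually shrink the fuse inode table, you can compare statedumps of the client taken before and after. A rough sketch, assuming the default statedump directory /var/run/gluster, a single glusterfs client process on the host, and counter names as they appear in recent releases:

# trigger a statedump from the fuse client process
kill -USR1 $(pgrep -x glusterfs)
# compare the inode-table counters across dumps taken before/after dropping caches
grep -E 'itable\.(active_size|lru_size)' /var/run/gluster/glusterdump.*.dump.*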

Meantime, some questions on the workload: do you have 100s of millions of files, or is it fewer files with bigger sizes?

 
My guess is that the gluster fuse process caches file inodes on the client side
and never evicts old inodes. However, I do not know whether this is a design
issue, some tradeoff, or a bug.
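
For reference, a rough sketch of how we watch the growth (it assumes the fuse client process is named glusterfs; RSS is reported in KB):

# sample the fuse client's resident memory once a minute (stop with Ctrl-C)
while sleep 60; do date; ps -o pid=,rss= -p "$(pgrep -d, -x glusterfs)"; done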

My configuration (a sketch of how such options are applied follows the list):

Options Reconfigured:
performance.write-behind-window-size: 256MB
performance.write-behind: on
cluster.lookup-optimize: on
transport.listen-backlog: 1024
performance.io-thread-count: 6
performance.cache-size: 10GB
performance.quick-read: on
performance.parallel-readdir: on
network.inode-lru-limit: 50000
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
cluster.server-quorum-type: server
client.event-threads: 4
performance.stat-prefetch: on
performance.md-cache-timeout: 600
cluster.min-free-disk: 5%
performance.flush-behind: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.server-quorum-ratio: 51%
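
In case it helps, the list above is the 'Options Reconfigured' section printed by 'gluster volume info'. A sketch of how we apply and revert a single option, with <VOLNAME> standing in for the actual volume name:

gluster volume set <VOLNAME> network.inode-lru-limit 50000
# revert an option to its default if needed
gluster volume reset <VOLNAME> network.inode-lru-limit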

We very much hope for some replies from the community. Even telling us that
this trouble cannot be resolved for design reasons would be GREAT for us.

Thanks a lot.

- Fei



--
Amar Tumballi (amarts)
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel
