Hmm, the data set is complicated: there are many big files as well as small ones. There is about 50T of data on the gluster servers, so I do not know exactly how many files are in the data set. Can the inode cache really consume that much memory? How can I limit the inode cache?
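Since counting files directly on ~50T is impractical, one rough way to estimate (untested here, and the count also includes GlusterFS's internal .glusterfs entries, so treat it as an upper bound) is to look at inode usage on the brick filesystems and sum one brick from each of the 19 replica sets:

$ df -i /glusterfs_brick/brick1    # on a brick server; IUsed approximates the files + directories stored on that brick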
PS:
$ grep itable glusterdump.109182.dump.1533730324 | grep lru | wc -l
191728
When I took this dump, the fuse process was consuming about 30G of memory.
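One possible mitigation, sketched below and not verified here: the fuse client can only free an inode after the kernel sends a FORGET for it, so making the kernel reclaim its cached dentries/inodes should let the client trim its lru list. The sysctl value 200 is only an illustrative number; newer GlusterFS releases also expose an lru-limit option for the fuse mount, but I am not certain it exists in 3.13.2.

# sysctl -w vm.vfs_cache_pressure=200          # bias the kernel toward reclaiming dentry/inode caches
# sync; echo 2 > /proc/sys/vm/drop_caches      # 2 = drop reclaimable dentries and inodes now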
On 08/9/2018 13:13, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:
On Thu, Aug 9, 2018 at 10:36 AM, huting3 <huting3@xxxxxxxxxxxxxxxx> wrote:

Grepping for count outputs nothing, so I grepped for size instead; the results are:

$ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
xlator.mount.fuse.itable.lru_size=191726

The kernel is holding too many inodes in its cache. What is the data set like? Do you have too many directories? How many files do you have?

$ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
xlator.mount.fuse.itable.active_size=17
huting3@xxxxxxxxxxxxxxxx
On 08/9/2018 12:36, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:

Can you get the output of the following commands?

# grep itable <statedump> | grep active | grep count
# grep itable <statedump> | grep lru | grep count

On Thu, Aug 9, 2018 at 9:25 AM, huting3 <huting3@xxxxxxxxxxxxxxxx> wrote:

Yes, I got the dump file and found many huge num_allocs, like the following. The memory usage of these four allocation types is extremely large:

[protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
size=47202352
num_allocs=2030212
max_size=47203074
max_num_allocs=2030235
total_allocs=26892201

[protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
size=24362448
num_allocs=2030204
max_size=24367560
max_num_allocs=2030226
total_allocs=17830860

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=2497947552
num_allocs=4578229
max_size=2459135680
max_num_allocs=7123206
total_allocs=41635232

[mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
size=4038730976
num_allocs=1
max_size=4294962264
max_num_allocs=37
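As an aside, the largest allocation types can be ranked directly from the dump rather than eyeballed; a minimal sketch, assuming the same dump file name as above:

$ awk -F= '/usage-type/ {sec = $0} /^size=/ {print $2, sec}' glusterdump.109182.dump.1533730324 | sort -nr | head

This pairs each size= line with the usage-type header above it and prints the ten biggest consumers.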
huting3@xxxxxxxxxxxxxxxx
On 08/9/2018 11:36, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:

On Thu, Aug 9, 2018 at 8:55 AM, huting3 <huting3@xxxxxxxxxxxxxxxx> wrote:

Hi expert:

I have run into a problem when using glusterfs: the fuse client consumes huge memory when writing a lot of files (more than a million) to the volume, and is eventually killed by the OS OOM killer. The memory the fuse process consumes can grow up to 100G! I wonder whether there is a memory leak in the gluster fuse process, or some other cause.

Can you get a statedump of the fuse process consuming huge memory? (See the sketch after the volume info below.)

My gluster version is 3.13.2; the gluster volume info is listed below:

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
Status: Started
Snapshot Count: 0
Number of Bricks: 19 x 3 = 57
Transport-type: tcp
Bricks:
Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
Brick44: dl33.dg.163.org:/glusterfs_brick/brick3/gv0
Brick45: dl34.dg.163.org:/glusterfs_brick/brick3/gv0
Brick46: dl0.dg.163.org:/glusterfs_brick/brick1/gv0
Brick47: dl1.dg.163.org:/glusterfs_brick/brick1/gv0
Brick48: dl2.dg.163.org:/glusterfs_brick/brick1/gv0
Brick49: dl3.dg.163.org:/glusterfs_brick/brick1/gv0
Brick50: dl5.dg.163.org:/glusterfs_brick/brick1/gv0
Brick51: dl6.dg.163.org:/glusterfs_brick/brick1/gv0
Brick52: dl9.dg.163.org:/glusterfs_brick/brick1/gv0
Brick53: dl10.dg.163.org:/glusterfs_brick/brick1/gv0
Brick54: dl11.dg.163.org:/glusterfs_brick/brick1/gv0
Brick55: dl12.dg.163.org:/glusterfs_brick/brick1/gv0
Brick56: dl13.dg.163.org:/glusterfs_brick/brick1/gv0
Brick57: dl14.dg.163.org:/glusterfs_brick/brick1/gv0
Options Reconfigured:
performance.cache-size: 10GB
performance.parallel-readdir: on
performance.readdir-ahead: on
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
features.inode-quota: off
features.quota: off
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.server-quorum-ratio: 51%
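The statedump asked for above can be requested by sending SIGUSR1 to the fuse mount process; this is only a rough sketch (the pgrep pattern is an assumed example for this mount, and the dump directory is typically /var/run/gluster):

# kill -USR1 $(pgrep -f 'glusterfs.*gv0')     # ask the fuse client to write a statedump
# ls /var/run/gluster/glusterdump.*.dump.*    # files are named glusterdump.<pid>.dump.<timestamp>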
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel