Re: gluster fuse consumes huge memory

Hmm, the data set is complicated. There are many big files as well as small files. There is about 50T of data on the gluster servers, so I do not know exactly how many files are in the dataset. Can the inode cache consume such a huge amount of memory? How can I limit the inode cache?

PS:
$ grep itable glusterdump.109182.dump.1533730324 | grep lru | wc -l
191728

When I took the dump, the fuse process was consuming about 30G of memory.
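
One workaround that might be worth trying, assuming the growth really is the kernel pinning inodes: ask the kernel on the client to drop its reclaimable dentry/inode caches, which should cause FUSE to send FORGETs and let the client-side inode table (and its lru list) shrink. A minimal sketch, run as root on the client; the only cost is that later accesses need fresh lookups:

# echo 2 > /proc/sys/vm/drop_caches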



On 08/9/2018 13:13, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Thu, Aug 9, 2018 at 10:36 AM, huting3 <huting3@xxxxxxxxxxxxxxxx> wrote:
grep count outputs nothing, so I grepped size instead; the results are:

$ grep itable glusterdump.109182.dump.1533730324 | grep lru | grep size
xlator.mount.fuse.itable.lru_size=191726

The kernel is holding too many inodes in its cache. What does the data set look like? Do you have many directories? How many files do you have?
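
If it helps answer this, one rough way to estimate the counts is to run find against one of the brick paths from the volume info below, pruning the internal .glusterfs directory; this only counts what landed on that one brick, so it would have to be summed across one brick per replica set:

$ find /glusterfs_brick/brick1/gv0 -path '*/.glusterfs' -prune -o -type f -print | wc -l
$ find /glusterfs_brick/brick1/gv0 -path '*/.glusterfs' -prune -o -type d -print | wc -l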


$ grep itable glusterdump.109182.dump.1533730324 | grep active | grep size
xlator.mount.fuse.itable.active_size=17


huting3
huting3@xxxxxxxxxxxxxxxx
Signature customized by NetEase Mail Master

On 08/9/2018 12:36, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:
Can you get the output of following cmds?

# grep itable <statedump> | grep lru | grep count

# grep itable <statedump> | grep active | grep count

On Thu, Aug 9, 2018 at 9:25 AM, huting3 <huting3@xxxxxxxxxxxxxxxx> wrote:
Yes, I got the dump file and found many entries with huge num_allocs, like the following:

The memusage of 4 allocation types is extremely high.

 [protocol/client.gv0-client-0 - usage-type gf_common_mt_char memusage]
size=47202352
num_allocs=2030212
max_size=47203074
max_num_allocs=2030235
total_allocs=26892201

[protocol/client.gv0-client-0 - usage-type gf_common_mt_memdup memusage]
size=24362448
num_allocs=2030204
max_size=24367560
max_num_allocs=2030226
total_allocs=17830860

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=2497947552
num_allocs=4578229
max_size=2459135680
max_num_allocs=7123206
total_allocs=41635232

[mount/fuse.fuse - usage-type gf_fuse_mt_iov_base memusage]
size=4038730976
num_allocs=1
max_size=4294962264
max_num_allocs=37
total_allocs=150049981
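
As an aside, a quick way to rank every allocation type in the statedump by its current size is something like the following one-liner (using the same dump file name as above):

$ awk -F= '/usage-type/ {t=$0} /^size=/ {print $2, t}' glusterdump.109182.dump.1533730324 | sort -rn | head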



huting3
huting3@xxxxxxxxxxxxxxxx
Signature customized by NetEase Mail Master

On 08/9/2018 11:36, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Thu, Aug 9, 2018 at 8:55 AM, huting3 <huting3@xxxxxxxxxxxxxxxx> wrote:
Hi experts,

I have run into a problem when using glusterfs. The fuse client consumes a huge amount of memory when writing a lot of files (more than a million) to the volume, and it eventually gets killed by the OS OOM killer. The memory the fuse process consumes can reach 100G! I wonder whether there is a memory leak in the gluster fuse process, or some other cause.

Can you get a statedump of the fuse process that is consuming huge memory?
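
For anyone following along: a statedump of the fuse client can be triggered by sending SIGUSR1 to the glusterfs client process; the dump is written to the statedump directory (/var/run/gluster by default) as glusterdump.<pid>.dump.<timestamp>. A rough sketch, assuming a single glusterfs client process for this mount and that its command line contains the volume name:

# kill -USR1 $(pgrep -f 'glusterfs.*gv0')
# ls /var/run/gluster/glusterdump.*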


My gluster version is 3.13.2; the gluster volume info is as follows:

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 4a6f96f8-b3fb-4550-bd19-e1a5dffad4d0
Status: Started
Snapshot Count: 0
Number of Bricks: 19 x 3 = 57
Transport-type: tcp
Bricks:
Brick1: dl20.dg.163.org:/glusterfs_brick/brick1/gv0
Brick2: dl21.dg.163.org:/glusterfs_brick/brick1/gv0
Brick3: dl22.dg.163.org:/glusterfs_brick/brick1/gv0
Brick4: dl20.dg.163.org:/glusterfs_brick/brick2/gv0
Brick5: dl21.dg.163.org:/glusterfs_brick/brick2/gv0
Brick6: dl22.dg.163.org:/glusterfs_brick/brick2/gv0
Brick7: dl20.dg.163.org:/glusterfs_brick/brick3/gv0
Brick8: dl21.dg.163.org:/glusterfs_brick/brick3/gv0
Brick9: dl22.dg.163.org:/glusterfs_brick/brick3/gv0
Brick10: dl23.dg.163.org:/glusterfs_brick/brick1/gv0
Brick11: dl24.dg.163.org:/glusterfs_brick/brick1/gv0
Brick12: dl25.dg.163.org:/glusterfs_brick/brick1/gv0
Brick13: dl26.dg.163.org:/glusterfs_brick/brick1/gv0
Brick14: dl27.dg.163.org:/glusterfs_brick/brick1/gv0
Brick15: dl28.dg.163.org:/glusterfs_brick/brick1/gv0
Brick16: dl29.dg.163.org:/glusterfs_brick/brick1/gv0
Brick17: dl30.dg.163.org:/glusterfs_brick/brick1/gv0
Brick18: dl31.dg.163.org:/glusterfs_brick/brick1/gv0
Brick19: dl32.dg.163.org:/glusterfs_brick/brick1/gv0
Brick20: dl33.dg.163.org:/glusterfs_brick/brick1/gv0
Brick21: dl34.dg.163.org:/glusterfs_brick/brick1/gv0
Brick22: dl23.dg.163.org:/glusterfs_brick/brick2/gv0
Brick23: dl24.dg.163.org:/glusterfs_brick/brick2/gv0
Brick24: dl25.dg.163.org:/glusterfs_brick/brick2/gv0
Brick25: dl26.dg.163.org:/glusterfs_brick/brick2/gv0
Brick26: dl27.dg.163.org:/glusterfs_brick/brick2/gv0
Brick27: dl28.dg.163.org:/glusterfs_brick/brick2/gv0
Brick28: dl29.dg.163.org:/glusterfs_brick/brick2/gv0
Brick29: dl30.dg.163.org:/glusterfs_brick/brick2/gv0
Brick30: dl31.dg.163.org:/glusterfs_brick/brick2/gv0
Brick31: dl32.dg.163.org:/glusterfs_brick/brick2/gv0
Brick32: dl33.dg.163.org:/glusterfs_brick/brick2/gv0
Brick33: dl34.dg.163.org:/glusterfs_brick/brick2/gv0
Brick34: dl23.dg.163.org:/glusterfs_brick/brick3/gv0
Brick35: dl24.dg.163.org:/glusterfs_brick/brick3/gv0
Brick36: dl25.dg.163.org:/glusterfs_brick/brick3/gv0
Brick37: dl26.dg.163.org:/glusterfs_brick/brick3/gv0
Brick38: dl27.dg.163.org:/glusterfs_brick/brick3/gv0
Brick39: dl28.dg.163.org:/glusterfs_brick/brick3/gv0
Brick40: dl29.dg.163.org:/glusterfs_brick/brick3/gv0
Brick41: dl30.dg.163.org:/glusterfs_brick/brick3/gv0
Brick42: dl31.dg.163.org:/glusterfs_brick/brick3/gv0
Brick43: dl32.dg.163.org:/glusterfs_brick/brick3/gv0
Brick44: dl33.dg.163.org:/glusterfs_brick/brick3/gv0
Brick45: dl34.dg.163.org:/glusterfs_brick/brick3/gv0
Brick46: dl0.dg.163.org:/glusterfs_brick/brick1/gv0
Brick47: dl1.dg.163.org:/glusterfs_brick/brick1/gv0
Brick48: dl2.dg.163.org:/glusterfs_brick/brick1/gv0
Brick49: dl3.dg.163.org:/glusterfs_brick/brick1/gv0
Brick50: dl5.dg.163.org:/glusterfs_brick/brick1/gv0
Brick51: dl6.dg.163.org:/glusterfs_brick/brick1/gv0
Brick52: dl9.dg.163.org:/glusterfs_brick/brick1/gv0
Brick53: dl10.dg.163.org:/glusterfs_brick/brick1/gv0
Brick54: dl11.dg.163.org:/glusterfs_brick/brick1/gv0
Brick55: dl12.dg.163.org:/glusterfs_brick/brick1/gv0
Brick56: dl13.dg.163.org:/glusterfs_brick/brick1/gv0
Brick57: dl14.dg.163.org:/glusterfs_brick/brick1/gv0
Options Reconfigured:
performance.cache-size: 10GB
performance.parallel-readdir: on
performance.readdir-ahead: on
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
features.inode-quota: off
features.quota: off
cluster.quorum-reads: on
cluster.quorum-count: 2
cluster.quorum-type: fixed
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.server-quorum-ratio: 51%




_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel



