Re: [Gluster-devel] GlusterFS FUSE client leaks summary — part I

Here is the report on the DHT-related leaks patch ("rsync" test).

RAM usage before drop_caches: [1]
Statedump before drop_caches: [2]
RAM usage after drop_caches: [3]
Statedump after drop_caches: [4]
Statedumps diff: [5]
Valgrind output: [6]

[1] https://gist.github.com/ca8d56834c14c4bfa98e
[2] https://gist.github.com/06dc910d7261750d486c
[3] https://gist.github.com/c482b170848a21b6e5f3
[4] https://gist.github.com/ed7f56336b4cbf39f7e8
[5] https://gist.github.com/f8597f34b56d949f7dcb
[6] https://gist.github.com/102fc2d2dfa2d2d179fa
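
For reference, numbers like the ones above could be collected with something like the following sketch. The way the client PID is located, the dump directory and the statedump trigger (SIGUSR1) reflect GlusterFS defaults as I understand them; they are assumptions, not commands taken from this report:

===
# sketch only: collect RAM usage and statedumps before/after drop_caches
# for the FUSE client of a volume (PID selection pattern is an assumption)
CLIENT_PID=$(pgrep -f 'glusterfs.*volume')

# RAM usage before drop_caches
ps -o pid,rss,vsz,cmd -p "$CLIENT_PID"

# ask the client for a statedump (written to /var/run/gluster by default),
# then drop the kernel caches
kill -USR1 "$CLIENT_PID"
sync
echo 3 > /proc/sys/vm/drop_caches

# RAM usage and a second statedump after drop_caches
ps -o pid,rss,vsz,cmd -p "$CLIENT_PID"
kill -USR1 "$CLIENT_PID"

# diff the two dumps (file names include the PID and a timestamp)
diff -u /var/run/gluster/glusterdump."$CLIENT_PID".dump.*
===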

I guess the patch works.

On 29.01.2016 23:11, Vijay Bellur wrote:
On 01/29/2016 01:09 PM, Oleksandr Natalenko wrote:
Here is an intermediate summary of the current FUSE client memory leak
investigation.

I use the GlusterFS v3.7.6 release with the following patches:

===
Kaleb S KEITHLEY (1):
       fuse: use-after-free fix in fuse-bridge, revisited

Pranith Kumar K (1):
       mount/fuse: Fix use-after-free crash

Soumya Koduri (3):
       gfapi: Fix inode nlookup counts
       inode: Retire the inodes from the lru list in inode_table_destroy
       upcall: free the xdr* allocations
===

With those patches the API leaks are fixed (I hope; brief tests suggest so) and the "kernel notifier loop terminated" message is gone. Nevertheless, the FUSE client still leaks.

I have several test volumes with several million small files (100K…2M on
average). I do two types of FUSE client testing:

1) find /mnt/volume -type d
2) rsync -av -H /mnt/source_volume/* /mnt/target_volume/
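
The Valgrind logs referenced below were taken from the FUSE client. A minimal sketch of one way to run the client in the foreground under valgrind for such a test; the server name, volume name, mount point and log path are placeholders, not values from this report:

===
# run the FUSE client in the foreground (-N) under valgrind
valgrind --leak-check=full --log-file=/tmp/glusterfs-valgrind.log \
    glusterfs --volfile-server=server1 --volfile-id=volume -N /mnt/volume &

# once the mount is up, run the workload against it, e.g.:
find /mnt/volume -type d > /dev/null

# unmount so the client exits and valgrind prints its leak summary
umount /mnt/volume
===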

The most up-to-date results are shown below:

=== find /mnt/volume -type d ===

Memory consumption: ~4G
Statedump: https://gist.github.com/10cde83c63f1b4f1dd7a
Valgrind: https://gist.github.com/097afb01ebb2c5e9e78d

I guess this is fuse-bridge/fuse-resolve related.

=== rsync -av -H /mnt/source_volume/* /mnt/target_volume/ ===

Memory consumption: ~3.3...4G
Statedump (target volume): https://gist.github.com/31e43110eaa4da663435
Valgrind (target volume): https://gist.github.com/f8e0151a6878cacc9b1a

I guess this is DHT-related.

Give me more patches to test :).

Thank you as ever for your detailed reports!

This patch should help with the DHT leaks observed as part of
dht_do_rename() in the valgrind logs of the target volume.

http://review.gluster.org/#/c/13322/

Can you please verify if this indeed helps?

Regards,
Vijay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



