Re: [Gluster-devel] GlusterFS v3.7.8 client leaks summary — part II

On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote:
And "API" test.

I used a custom API app [1] and performed brief file manipulations through it
(create/remove/stat).
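For reference, the same create/remove/stat churn can be reproduced against any mounted volume (xglfs itself talks to gfapi directly; this is a rough equivalent through the VFS). A minimal sketch, with a hypothetical mount path:

```python
import os

def churn(dirpath, iterations=100):
    """Create, stat, and remove files to exercise inode allocation/release."""
    for i in range(iterations):
        path = os.path.join(dirpath, "leaktest-%d" % i)
        with open(path, "w") as f:   # create
            f.write("x")
        os.stat(path)                # stat
        os.remove(path)              # remove

# Point this at a directory on the mounted GlusterFS volume, e.g.:
# churn("/mnt/gluster-test")
```

After the churn, dropping VFS caches and taking a statedump/Valgrind log (as below) shows whether the client actually released the inodes.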

Then I performed drop_caches, finished API [2] and got the following
Valgrind log [3].

I believe there are still some leaks occurring in glfs_lresolve() call
chain.

glfs_fini() should ideally have destroyed all the inodes in the inode table. I shall try your app and check whether anything has been missed.

Thanks,
Soumya


Soumya?

[1] https://github.com/pfactum/xglfs
[2] https://github.com/pfactum/xglfs/blob/master/xglfs_destroy.c#L30
[3] https://gist.github.com/aec72b6164a695cf2d44

On 11.02.2016 10:12, Oleksandr Natalenko wrote:
And here are the "rsync" test results (v3.7.8 + two patches by Soumya).

Two volumes are involved: source and target.

=== Common indicators ===

slabtop before drop_caches: [1]
slabtop after drop_caches: [2]

=== Source volume (less interesting part) ===

RAM usage before drop_caches: [3]
statedump before drop_caches: [4]
RAM usage after drop_caches: [5]
statedump after drop_caches: [6]

=== Target volume (most interesting part) ===

RAM usage before drop_caches: [7]
statedump before drop_caches: [8]
RAM usage after drop_caches: [9]
statedump after drop_caches: [10]
Valgrind output: [11]
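Comparing the before/after statedumps mostly comes down to diffing the per-type allocation counters. A rough sketch of such a diff, assuming the usual "usage-type ... memusage" section layout of a glusterfs statedump (field names may differ between builds; the sample data below is illustrative, not taken from the dumps above):

```python
import re

def parse_memusage(text):
    """Return {usage-type name: num_allocs} from a statedump's memusage sections."""
    allocs = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"\[.*usage-type (\S+) memusage\]", line)
        if m:
            current = m.group(1)
        elif current and line.startswith("num_allocs="):
            allocs[current] = int(line.split("=", 1)[1])
            current = None
    return allocs

def leak_suspects(before, after):
    """Usage types whose live allocation count grew between the two dumps."""
    return {t: (before.get(t, 0), n)
            for t, n in after.items() if n > before.get(t, 0)}

sample_before = """\
[global.glusterfs - usage-type gf_common_mt_strdup memusage]
size=1024
num_allocs=10
"""
sample_after = """\
[global.glusterfs - usage-type gf_common_mt_strdup memusage]
size=4096
num_allocs=40
"""
print(leak_suspects(parse_memusage(sample_before),
                    parse_memusage(sample_after)))
# A type that keeps growing after drop_caches is a leak candidate.
```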

=== Conclusion ===

Again, I see no obvious leaks.

[1] https://gist.github.com/e72fd30a4198dd630299
[2] https://gist.github.com/78ef9eae3dc16fd79c1b
[3] https://gist.github.com/4ed75e8d6cb40a1369d8
[4] https://gist.github.com/20a75d32db76795b90d4
[5] https://gist.github.com/0772959834610dfdaf2d
[6] https://gist.github.com/a71684bd3745c77c41eb
[7] https://gist.github.com/2c9be083cfe3bffe6cec
[8] https://gist.github.com/0102a16c94d3d8eb82e3
[9] https://gist.github.com/23f057dc8e4b2902bba1
[10] https://gist.github.com/385bbb95ca910ec9766f
[11] https://gist.github.com/685c4d3e13d31f597722

On 10.02.2016 15:37, Oleksandr Natalenko wrote:
Hi, folks.

Here go new test results regarding client memory leak.

I use v3.7.8 with the following patches:

===
Soumya Koduri (2):
      inode: Retire the inodes from the lru list in inode_table_destroy
      gfapi: Use inode_forget in case of handle objects
===

Those are the only two patches not yet merged.

So far, I've performed only the "find" test, and here are the results:

RAM usage before drop_caches: [1]
statedump before drop_caches: [2]
slabtop before drop_caches: [3]
RAM usage after drop_caches: [4]
statedump after drop_caches: [5]
slabtop after drop_caches: [6]
Valgrind output: [7]

Neither statedump nor Valgrind shows any leaks. However, the statedump
statistics still suffer from integer overflow.
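The overflow is consistent with a cumulative counter being kept in a 32-bit field and wrapping once it passes 2^32, which makes the reported stats look implausibly small. A tiny illustration (not GlusterFS code, just the arithmetic):

```python
UINT32_MAX = 2**32 - 1

def u32(value):
    """Truncate a counter to 32 bits, as storing it in a uint32_t would."""
    return value & UINT32_MAX

total_allocs = UINT32_MAX - 5      # counter near the 32-bit limit
total_allocs = u32(total_allocs + 10)
print(total_allocs)                # -> 4: the counter has wrapped around
```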

Next steps I'm going to take:

1) "rsync" test;
2) API test.

[1] https://gist.github.com/88d2fa95c28baeb2543f
[2] https://gist.github.com/4f3e93ff2db6e3cf4081
[3] https://gist.github.com/62791a2c4258041ba821
[4] https://gist.github.com/1d3ce95a493d054bbac2
[5] https://gist.github.com/fa855a2752d3691365a7
[6] https://gist.github.com/84e9e27d2a2e5ff5dc33
[7] https://gist.github.com/f35bd32a5159d3571d3a
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



