Hi Oleksandr,

On 01/02/16 09:09, Oleksandr Natalenko wrote:
Wait. It seems to be my bad. Before unmounting I do drop_caches (2), and glusterfs process CPU usage goes to 100% for a while.
That's the expected behavior after applying the nlookup count patch. As it's configured now, gluster won't release memory until the kernel requests it. Forcing a drop of caches makes the kernel send all of those requests at once, which consumes a lot of CPU. Under normal circumstances, when memory is low, the kernel will start releasing cached entries, and that will include requests to gluster to release the memory associated with those inodes incrementally, as it's needed.
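For anyone who wants to watch this happening, a minimal sketch (assuming a single glusterfs client process on the box; the statedump directory is typically /var/run/gluster, but check your statedump-path setting if nothing shows up there):

    # note the client's resident memory before forcing the kernel to drop
    # reclaimable slab objects (dentries and inodes); run as root
    grep VmRSS /proc/$(pidof glusterfs)/status
    echo 2 > /proc/sys/vm/drop_caches   # glusterfs CPU spikes while it
                                        # processes the resulting forgets
    grep VmRSS /proc/$(pidof glusterfs)/status

    # a statedump of the client can be taken at any point with SIGUSR1
    kill -USR1 $(pidof glusterfs)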
I hadn't waited for it to drop to 0% and instead performed the unmount. It seems glusterfs is purging inodes, and that's why it uses 100% of CPU. I've re-tested it, waiting for the CPU usage to become normal first, and got no leaks.
I've run the same experiment and ended up with only 4 inodes still in use (probably the root directory and some other special entries) after having had several tens of thousands.
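To get those numbers the same way, the counters already quoted further down in this thread are the ones to look at; a quick sketch for pulling them out of a set of client statedumps (assuming they were written to /var/run/gluster with the default glusterdump.* naming, adjust to taste):

    # inode contexts still allocated, and active dentries in the fuse pool
    for d in /var/run/gluster/glusterdump.*; do
        echo "== $d"
        grep -A3 'gf_common_mt_inode_ctx memusage' "$d" | grep '^num_allocs='
        grep -A2 'pool-name=fuse:dentry_t' "$d" | grep '^hot-count='
    done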
Xavi
Will verify this once again and report more. BTW, if that works, how could I limit the inode cache for the FUSE client? I do not want it to go beyond 1G, for example, even if I have 48G of RAM on my server.

On 01.02.2016 09:54, Soumya Koduri wrote:

On 01/31/2016 03:05 PM, Oleksandr Natalenko wrote:

Unfortunately, this patch doesn't help. RAM usage when "find" finishes is ~9G.

Here is the statedump before drop_caches: https://gist.github.com/fc1647de0982ab447e20

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=706766688
num_allocs=2454051

And after drop_caches: https://gist.github.com/5eab63bc13f78787ed19

[mount/fuse.fuse - usage-type gf_common_mt_inode_ctx memusage]
size=550996416
num_allocs=1913182

There isn't much of a significant drop in inode contexts. One of the reasons could be dentries holding a refcount on the inodes, which results in inodes not getting purged even after fuse_forget.

pool-name=fuse:dentry_t
hot-count=32761

If '32761' is the current active dentry count, it still doesn't seem to match up to the inode count.

Thanks,
Soumya

And here is the Valgrind output: https://gist.github.com/2490aeac448320d98596

On Saturday, 30 January 2016, 22:56:37 EET Xavier Hernandez wrote:

There's another inode leak caused by an incorrect counting of lookups on directory reads. Here's a patch that solves the problem for 3.7: http://review.gluster.org/13324

Hopefully with this patch the memory leaks should disappear.

Xavi

On 29.01.2016 19:09, Oleksandr Natalenko wrote:

Here is an intermediate summary of the current memory-leak investigation in the FUSE client. I use the GlusterFS v3.7.6 release with the following patches:

===
Kaleb S KEITHLEY (1):
      fuse: use-after-free fix in fuse-bridge, revisited

Pranith Kumar K (1):
      mount/fuse: Fix use-after-free crash

Soumya Koduri (3):
      gfapi: Fix inode nlookup counts
      inode: Retire the inodes from the lru list in inode_table_destroy
      upcall: free the xdr* allocations
===

With those patches we got the API leaks fixed (I hope; brief tests show that) and got rid of the "kernel notifier loop terminated" message. Nevertheless, the FUSE client still leaks.

I have several test volumes with several million small files (100K…2M on average). I do 2 types of FUSE client testing:

1) find /mnt/volume -type d
2) rsync -av -H /mnt/source_volume/* /mnt/target_volume/

And the most up-to-date results are shown below:

=== find /mnt/volume -type d ===

Memory consumption: ~4G
Statedump: https://gist.github.com/10cde83c63f1b4f1dd7a
Valgrind: https://gist.github.com/097afb01ebb2c5e9e78d

I guess it's fuse-bridge/fuse-resolve related.

=== rsync -av -H /mnt/source_volume/* /mnt/target_volume/ ===

Memory consumption: ~3.3…4G
Statedump (target volume): https://gist.github.com/31e43110eaa4da663435
Valgrind (target volume): https://gist.github.com/f8e0151a6878cacc9b1a

I guess it's DHT-related.

Give me more patches to test :).
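On the question quoted above about keeping the FUSE client's inode cache from growing past a certain size: since the cache is only trimmed when the kernel forgets inodes, one standard kernel-side knob that affects how eagerly that happens is vm.vfs_cache_pressure. It is system-wide and not an exact byte limit, so treat this as an approximation rather than a hard 1G cap (the sysctl.d file name below is just illustrative):

    # values above the default of 100 make the kernel prefer reclaiming
    # dentry/inode caches over page cache when memory gets tight
    sysctl vm.vfs_cache_pressure          # show the current value
    sysctl -w vm.vfs_cache_pressure=200   # apply immediately, system-wide

    # persist across reboots (file name is arbitrary/illustrative)
    echo 'vm.vfs_cache_pressure = 200' > /etc/sysctl.d/90-vfs-cache.conf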
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users