Mike Terzo wrote:
I have a very simple GlusterFS setup, with two TCP hosts configured as:
volume LABVOL
  type cluster/replicate
  option block-size 4MB
  subvolumes gfs1 gfs2
end-volume
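For context, the gfs1 and gfs2 subvolumes referenced above would normally be protocol/client volumes defined earlier in the same client volfile. A minimal sketch of how that surrounding volfile might look (host names and remote-subvolume names here are illustrative placeholders, not taken from the original setup):

```
volume gfs1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume gfs2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume LABVOL
  type cluster/replicate
  subvolumes gfs1 gfs2
end-volume
```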
I have io-cache set to 16MB (I know it's low, but I'm debugging
memory usage). I'm using io-threads as well, set to 2, again for
memory debugging.
I'm copying hundreds of thousands of files and directories to my
GlusterFS mount point on a client box. After about two days, the
glusterfs process on the client had allocated 10GB of memory.
I recompiled GlusterFS (3.0.2) with -O0 -g and was able to reproduce the
memory growth. I ran the glusterfs mount process inside Valgrind to get
an idea of where all the memory was going. I found that in fuse-bridge.c
the inode table was being created with an lru_limit of 0; that value
determines how many inodes inode_table_prune keeps in the LRU list, and
it is what's blowing up memory on my installation. Is there a reason
lru_limit is hard-coded to 0 when creating the inode table for the
client?
That's because the kernel FUSE module is supposed to tell GlusterFS when
to free the inodes, and in this case it does not ask for them to be
freed. We know about this problem, and work is underway to change this
behaviour in the FUSE module.
To force the kernel to free up inodes, try this as root:
$ echo 2 > /proc/sys/vm/drop_caches
If that doesn't work, try:
$ sync
$ echo 2 > /proc/sys/vm/drop_caches
Beware that sync forces pending kernel buffers to disk, so it may
interfere with ongoing disk I/O.
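One way to check whether dropping caches actually shrinks the client process is to compare its VmRSS in /proc before and after. A small sketch (the `pidof glusterfs` pipeline in the comment is an assumption about your environment; the helper below just parses a /proc status line, fed a sample value here so the sketch is self-contained):

```shell
# parse_vmrss: pull the resident-set size (in kB) out of a VmRSS line,
# as found in /proc/<pid>/status (see proc(5)).
parse_vmrss() { awk '/^VmRSS:/ {print $2}'; }

# In practice you would run something like:
#   parse_vmrss < /proc/$(pidof glusterfs)/status
# before and after the drop_caches write. Sample input for illustration:
echo "VmRSS:	10485760 kB" | parse_vmrss
```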
BTW, how much memory does the system have on which GlusterFS consumed
10GB?
-Shehjar
thanks
--mike terzo
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel