inode list memory usage

I have a very simple glusterfs setup, with two TCP hosts configured as:

volume LABVOL
	type cluster/replicate
	option block-size 4MB
	subvolumes gfs1 gfs2
end-volume

I have io-cache set to 16 MB (I know it's low, but I'm debugging memory
usage).  I'm using io-threads as well, with the thread count set to 2,
again for debugging memory usage.
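Roughly, that part of my client volfile looks like the following (the
volume names here are just placeholders, and I'm reproducing the option
names and stacking order from memory, so the exact spelling may be
slightly off):

volume iocache
	type performance/io-cache
	option cache-size 16MB
	subvolumes LABVOL
end-volume

volume iothreads
	type performance/io-threads
	option thread-count 2
	subvolumes iocache
end-volume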

I'm copying hundreds of thousands of files and directories to my
glusterfs mount point on a client box.  After about two days, the
glusterfs process on the client had allocated 10 GB of memory.

I recompiled gluster (3.0.2) with -O0 -g and was able to reproduce the
memory growth.  I ran the gluster mount inside valgrind to get an idea
of where all the memory was going.  I found that in fuse-bridge.c the
inode table was being created with a value of 0 for lru_limit, which is
what inode_table_prune() uses to decide how many inodes to keep on the
LRU list.  This is what's blowing out the memory on my installation.
Is there a reason lru_limit is hard-coded to 0 when creating the
inode_table for the client?
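To illustrate what I think is happening, here is a small self-contained
sketch of the pattern (not the actual GlusterFS code; the struct and
function names are made up): if the prune step treats lru_limit == 0 as
"never evict", every inode that gets looked up stays on the list
forever.

/* Standalone sketch, compile with: cc -std=c99 sketch.c */
#include <stdio.h>
#include <stdlib.h>

struct table {
	size_t lru_limit;   /* 0 here appears to mean "unlimited" */
	size_t lru_size;    /* number of inodes currently on the LRU list */
};

/* Hypothetical prune: only evicts while lru_size exceeds a non-zero limit. */
static void table_prune(struct table *t)
{
	if (t->lru_limit == 0)
		return;                 /* nothing ever gets evicted */

	while (t->lru_size > t->lru_limit)
		t->lru_size--;          /* stand-in for freeing an inode */
}

int main(void)
{
	struct table t = { .lru_limit = 0, .lru_size = 0 };

	/* Simulate looking up hundreds of thousands of files. */
	for (size_t i = 0; i < 500000; i++) {
		t.lru_size++;           /* each lookup caches an inode */
		table_prune(&t);        /* no-op because lru_limit is 0 */
	}

	printf("inodes retained: %zu\n", t.lru_size);  /* prints 500000 */
	return 0;
}

With a non-zero lru_limit the loop above would hold the list at the
limit; with 0 it just grows, which matches what valgrind showed me.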

thanks
--mike terzo



