Re: [cephfs][ceph-fuse] cache size or memory leak?

The output of the status command of the fuse daemon:
"dentry_count": 128966,
"dentry_pinned_count": 128965,
"inode_count": 409696,
I saw that the pinned dentry count is nearly the same as the total dentry count.
So I enabled the debug log (debug client = 20/20) and skimmed the Client.cc source code. I found that a dentry will not be trimmed while it is pinned.
But how can I unpin dentries?
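One workaround I'm considering, on the assumption that the pins come from the
kernel still holding references to those dentries, is to ask the kernel to drop
its reclaimable dentry/inode caches and see whether ceph-fuse can trim them
afterwards:

    sync
    echo 2 | sudo tee /proc/sys/vm/drop_caches    # 2 = free reclaimable dentries and inodes

I'm not sure whether this actually lets the fuse client unpin them.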

On Wed, Apr 29, 2015 at 12:19 PM Dexter Xiong <dxtxiong@xxxxxxxxx> wrote:
I tried setting client cache size = 100, but it didn't solve the problem.
I tested ceph-fuse with kernel versions 3.13.0-24, 3.13.0-49 and 3.16.0-34.
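For reference, this is roughly how I applied it -- in the [client] section of
ceph.conf on the client host, followed by a remount:

    [client]
        client cache size = 100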



On Tue, Apr 28, 2015 at 7:39 PM John Spray <john.spray@xxxxxxxxxx> wrote:


On 28/04/2015 06:55, Dexter Xiong wrote:
> Hi,
>     I've deployed a small hammer cluster (0.94.1), and I mount it via
> ceph-fuse on Ubuntu 14.04. After several hours I found that the
> ceph-fuse process had crashed. The crash log from
> /var/log/ceph/ceph-client.admin.log is at the end of this message. The
> memory cost of the ceph-fuse process was huge (more than 4GB) when it
> crashed.
>     Then I did some tests and found that these actions increase the
> memory cost of ceph-fuse rapidly, and the memory cost never seems to
> decrease:
>
>   * rsync command to sync small files (rsync -a /mnt/some_small /srv/ceph)
>   * chown command / chmod command (chmod 775 /srv/ceph -R)
>
> But chown/chmod commands on already-accessed files do not increase the
> memory cost.
> It seems that ceph-fuse caches the file nodes but never releases them.
> I don't know if there is an option to control the cache size. I set the
> mds cache size = 2147483647 option to improve the performance of the
> MDS, and I tried to set mds cache size = 1000 on the client side, but it
> doesn't affect the result.

The setting for the client-side cache limit is "client cache size"; the default
is 16384.
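You can check what value the running client actually picked up via its admin
socket, e.g. (the socket path may differ on your host):

    ceph daemon /var/run/ceph/ceph-client.admin.asok config show | grep client_cache_size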

What kernel version are you using on the client?  There have been some
issues with cache trimming vs. fuse in recent kernels, but we thought we
had workarounds in place...

Cheers,
John

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
