Re: [cephfs][ceph-fuse] cache size or memory leak?

On 28/04/2015 06:55, Dexter Xiong wrote:
Hi,
I've deployed a small Hammer cluster (0.94.1) and mounted it via ceph-fuse on Ubuntu 14.04. After several hours I found that the ceph-fuse process had crashed; the crash log from /var/log/ceph/ceph-client.admin.log is at the end of this mail. The ceph-fuse process was using a huge amount of memory (more than 4 GB) when it crashed. I then did some tests and found that the following actions increase the memory usage of ceph-fuse rapidly, and the usage never seems to decrease:

  * an rsync command syncing many small files (rsync -a /mnt/some_small /srv/ceph)
  * recursive chown/chmod commands (chmod 775 /srv/ceph -R)

However, running chown/chmod on files that have already been accessed does not increase memory usage.
It seems that ceph-fuse caches the file inodes but never releases them.
I don't know if there is an option to control the cache size. I set mds cache size = 2147483647 to improve the performance of the MDS, and I also tried setting mds cache size = 1000 on the client side, but it doesn't affect the result.

The setting for the client-side cache limit is "client cache size"; the default is 16384.
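
For example, a minimal sketch of how that could be set in ceph.conf on the client host (the option name and default are as above; putting it in a [client] section is my assumption, adjust to however your conf is laid out):

    [client]
        # cap the number of inodes the ceph-fuse client keeps cached
        # (16384 is the default; lower it if client memory is a concern)
        client cache size = 16384

ceph-fuse reads this at startup, so you'd likely need to remount for a change to take effect.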

What kernel version are you using on the client? There have been some issues with cache trimming vs. fuse in recent kernels, but we thought we had workarounds in place...
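
A quick way to report that here, for example:

    uname -r    # prints the running kernel release on the client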

Cheers,
John
