I wrote a test case in Python:
'''
import os

for i in range(200):
    dir_name = '/srv/ceph_fs/test/d%s' % i
    os.mkdir(dir_name)
    for j in range(3):
        with open('%s/%s' % (dir_name, j), 'w') as f:
            f.write('0')
'''
The output of the status command after running the test:
{
    "metadata": {
        "ceph_sha1": "e4bfad3a3c51054df7e537a724c8d0bf9be972ff",
        "ceph_version": "ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)",
        "entity_id": "admin",
        "hostname": "local-share-server",
        "mount_point": "\/srv\/ceph_fs"
    },
    "dentry_count": 204,
    "dentry_pinned_count": 201,
    "inode_count": 802,
    "mds_epoch": 25,
    "osd_epoch": 177,
    "osd_epoch_barrier": 176
}
It seems that all of the pinned dentries are directories.
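For reference, the status report above comes from the ceph-fuse admin socket. The sketch below shows roughly how I grab it and pull out the cache counters; the socket path is an assumption, so check the "admin socket" setting in your ceph.conf for the real location:
'''
import json
import subprocess

# Assumed admin socket path of the ceph-fuse client -- adjust if your
# "admin socket" setting points elsewhere.
ASOK = '/var/run/ceph/ceph-client.admin.asok'

# Ask the running client for its status report (the same JSON as shown above).
out = subprocess.check_output(['ceph', '--admin-daemon', ASOK, 'status'])
status = json.loads(out.decode('utf-8'))

print('dentries: %(dentry_count)s  pinned: %(dentry_pinned_count)s  '
      'inodes: %(inode_count)s' % status)
'''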
Attached is a package containing the debug log and the dumped cache contents.
On Wed, Apr 29, 2015 at 10:07 PM Gregory Farnum <greg@xxxxxxxxxxx> wrote:
On Wed, Apr 29, 2015 at 1:33 AM, Dexter Xiong <dxtxiong@xxxxxxxxx> wrote:
> The output of the status command of the fuse daemon:
>     "dentry_count": 128966,
>     "dentry_pinned_count": 128965,
>     "inode_count": 409696,
> I see that the pinned dentry count is nearly the same as the total dentry count.
> So I enabled the debug log (debug client = 20/20) and read through the
> Client.cc source code in general. I found that an entry will not be trimmed
> if it is pinned. But how can I unpin dentries?
How much of the workload does that log cover? The most common (only?)
way to pin a dentry is by holding the file open, which makes me think
your system isn't closing its files for some reason. That doesn't make
a ton of sense for rsync; there could maybe be some kind of bug we
haven't noticed... :/
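If you want to rule that out, one rough way to check is to look for processes that still hold files open under the mount point. A minimal sketch (it assumes the mount is at /srv/ceph_fs and that you run it as root so every fd table in /proc is readable):
'''
import os

MOUNT = '/srv/ceph_fs'   # assumed mount point -- adjust to yours

# Walk /proc and report every process that still has a file descriptor
# pointing somewhere underneath the ceph-fuse mount.
for pid in filter(str.isdigit, os.listdir('/proc')):
    fd_dir = '/proc/%s/fd' % pid
    try:
        fds = os.listdir(fd_dir)
    except OSError:
        continue                      # process exited or fd table unreadable
    for fd in fds:
        try:
            target = os.readlink('%s/%s' % (fd_dir, fd))
        except OSError:
            continue
        if target == MOUNT or target.startswith(MOUNT + '/'):
            with open('/proc/%s/comm' % pid) as f:
                comm = f.read().strip()
            print('%s (pid %s) still has %s open' % (comm, pid, target))
'''
If nothing shows up there while the pinned count keeps climbing, the pins are probably coming from somewhere else.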
-Greg
>
> On Wed, Apr 29, 2015 at 12:19 PM Dexter Xiong <dxtxiong@xxxxxxxxx> wrote:
>>
>> I tried setting client cache size = 100, but it doesn't solve the problem.
>> I tested ceph-fuse with kernel versions 3.13.0-24, 3.13.0-49 and 3.16.0-34.
>>
>> On Tue, Apr 28, 2015 at 7:39 PM John Spray <john.spray@xxxxxxxxxx> wrote:
>>>
>>> On 28/04/2015 06:55, Dexter Xiong wrote:
>>> > Hi,
>>> > I've deployed a small hammer cluster (0.94.1) and mounted it via
>>> > ceph-fuse on Ubuntu 14.04. After several hours I found that the
>>> > ceph-fuse process had crashed. At the end of this mail is the crash log
>>> > from /var/log/ceph/ceph-client.admin.log. The memory usage of the
>>> > ceph-fuse process was huge (more than 4 GB) when it crashed.
>>> > Then I did some tests and found that these actions increase the memory
>>> > usage of ceph-fuse rapidly, and the memory usage never seems to decrease:
>>> >
>>> >     * an rsync command syncing small files (rsync -a /mnt/some_small
>>> >       /srv/ceph)
>>> >     * a chown or chmod command (chmod 775 /srv/ceph -R)
>>> >
>>> > However, chown/chmod on files that have already been accessed does not
>>> > increase the memory usage.
>>> > It seems that ceph-fuse caches the file nodes but never releases them.
>>> > I don't know if there is an option to control the cache size. I set the
>>> > mds cache size = 2147483647 option to improve the performance of the
>>> > MDS, and I tried setting mds cache size = 1000 on the client side, but
>>> > it doesn't affect the result.
>>>
>>> The setting for the client-side cache limit is "client cache size"; the
>>> default is 16384.
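If it helps, one way to double-check what the running ceph-fuse actually picked up is to dump its config over the admin socket; a minimal sketch, assuming the default socket path:
'''
import json
import subprocess

# Assumed admin socket path of the ceph-fuse client.
asok = '/var/run/ceph/ceph-client.admin.asok'

# "config show" dumps the live configuration of the running client as JSON.
out = subprocess.check_output(['ceph', '--admin-daemon', asok, 'config', 'show'])
conf = json.loads(out.decode('utf-8'))
print('client_cache_size = %s' % conf.get('client_cache_size'))
'''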
>>>
>>> What kernel version are you using on the client? There have been some
>>> issues with cache trimming vs. fuse in recent kernels, but we thought we
>>> had workarounds in place...
>>>
>>> Cheers,
>>> John
>>>
>
Attachment: ceph.debug.log.tar.gz (application/gzip)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com