Re: CephFS/ceph-fuse performance

On 06/06/2018 12:22 PM, Andras Pataki wrote:
> Hi Greg,
>
> The docs say that client_cache_size is the number of inodes that are 
> cached, not bytes of data.  Is that incorrect?

Oh whoops, you're correct of course. Sorry about that!
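To make the distinction concrete: client_cache_size bounds the *number* of cached inodes, not the bytes they hold. A minimal sketch of a count-bounded LRU cache, analogous in spirit to that option (this is an illustrative model in Python, not Ceph's actual client code; the class and field names are invented for illustration):

```python
from collections import OrderedDict

class InodeCache:
    """Illustrative LRU cache bounded by entry count, not bytes --
    the same kind of limit client_cache_size imposes on cached inodes."""

    def __init__(self, max_inodes):
        self.max_inodes = max_inodes
        self.inodes = OrderedDict()  # ino -> metadata, oldest first

    def get(self, ino):
        if ino in self.inodes:
            self.inodes.move_to_end(ino)  # mark as recently used
            return self.inodes[ino]
        return None

    def insert(self, ino, metadata):
        self.inodes[ino] = metadata
        self.inodes.move_to_end(ino)
        # Trim purely by count: how large each entry is never matters.
        while len(self.inodes) > self.max_inodes:
            self.inodes.popitem(last=False)  # evict least recently used
```

Note that eviction here never looks at entry sizes, which is why tuning this option changes memory use only indirectly, through the number of inodes kept.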

On Wed, Jun 6, 2018 at 12:33 PM Andras Pataki <apataki@xxxxxxxxxxxxxxxxxxxxx> wrote:
Staring at the logs a bit more, it seems the following lines might be
the clue:

2018-06-06 08:14:17.615359 7fffefa45700 10 objectcacher trim  start:
bytes: max 2147483640  clean 2145935360, objects: max 8192 current 8192
2018-06-06 08:14:17.615361 7fffefa45700 10 objectcacher trim finish: 
max 2147483640  clean 2145935360, objects: max 8192 current 8192

Perhaps the object cacher could not free objects to make room for new
ones (it was caching 8192 objects, which is the configured maximum)?
I'm not sure why that would be, though.  Unfortunately the job has
since terminated, so I can no longer inspect the client's caches.

Yeah, that's got to be why. I don't *think* there's any reason to set a reachable limit on the number of objects. It may not be able to free them if they're still dirty and haven't been flushed; that ought to be the only reason. Or maybe you've discovered some bug in the caching code, but... well, that's not very likely.
-Greg 
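The failure mode Greg describes, where a trim pass runs but cannot evict anything because everything left is dirty, can be sketched as follows. This is an illustrative Python model of the idea, not the actual ObjectCacher code; the function signature and the `dirty`/`size` fields are assumptions for illustration:

```python
def trim(objects, max_objects, max_bytes, clean_bytes):
    """Illustrative model of a cache trim pass: only clean objects can
    be evicted, so a cache full of dirty (unflushed) objects stays at
    its limit even after trimming -- matching the log lines above,
    where 'current' equals 'max' both at trim start and trim finish."""
    evicted = []
    for obj in list(objects):
        if len(objects) <= max_objects and clean_bytes <= max_bytes:
            break  # under both limits; trim is done
        if obj["dirty"]:
            continue  # dirty data must be flushed first; cannot evict
        objects.remove(obj)
        clean_bytes -= obj["size"]
        evicted.append(obj)
    return evicted, clean_bytes
```

If every cached object is dirty, the pass walks the whole list without evicting anything, and the "trim finish" counts come out identical to "trim start", exactly the pattern in the quoted log.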
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
