Re: After Luminous upgrade: ceph-fuse clients failing to respond to cache pressure

Hi,


On 01/16/2018 09:50 PM, Andras Pataki wrote:
> Dear Cephers,
>
> *snipsnap*



> We are running with a larger MDS cache than usual; we have mds_cache_size set to 4 million. All other MDS configs are the defaults.

AFAIK the MDS cache management in Luminous has changed: the cache is now limited by memory size instead of by the number of inodes/caps.

We had to replace mds_cache_size with mds_cache_memory_limit to get the MDS cache working as expected again. This may also be the cause of your issue, since the default configuration uses quite a small cache. You can check this with 'ceph daemonperf mds.XYZ' on the MDS host.
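
For reference, a minimal sketch of the change in ceph.conf, assuming an MDS named mds.XYZ and a 4 GB cache target (both are only examples, adjust to your hardware):

  [mds]
  # memory-based cache limit in bytes (~4 GB); used instead of the old mds_cache_size
  mds_cache_memory_limit = 4294967296

Once the MDS has picked up the new value, the daemonperf output should show the cache growing towards that limit instead of staying at the small default.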

If you change the memory limit, you also need to account for a certain overhead from memory allocation. There was a thread about this on the mailing list some weeks ago; you should expect at least 50% overhead. As with previous releases this is not a hard limit, and the process may consume more memory in certain situations. Also keep in mind that BlueStore OSDs no longer use the kernel page cache but their own in-memory cache, so you need to plan the memory consumption of all Ceph daemons on a host.
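
When planning that budget, it can help to query what each daemon is actually configured with; a quick sketch via the admin socket, with daemon names that are just examples:

  # configured cache limits of the local daemons (names are examples)
  ceph daemon mds.XYZ config get mds_cache_memory_limit
  ceph daemon osd.0 config get bluestore_cache_size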

As an example, our MDS is configured with mds_cache_memory_limit = 8000000000 and is consuming about 12 GB of RSS.
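
On the MDS host you can check the resident size yourself, e.g. with plain ps:

  # resident memory (RSS, in kilobytes) of the local ceph-mds process
  ps -o rss,comm -C ceph-mds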

Regards,
Burkhard



