Burkhard,
Thanks very much for the info - I'll try the MDS with a 16GB
mds_cache_memory_limit (which leaves some buffer for extra memory
consumption on the machine), and report back if any issues remain.
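For reference, roughly what I intend to run (a sketch only - mds.0 is a
placeholder for our daemon id, and 17179869184 is 16 GiB in bytes;
adjust to your deployment):

    # live change via the admin socket on the MDS host
    ceph daemon mds.0 config set mds_cache_memory_limit 17179869184
    # and persisted in ceph.conf under [mds] so it survives a restart:
    #   mds_cache_memory_limit = 17179869184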
Andras
On 01/17/2018 02:40 AM, Burkhard Linke wrote:
Hi,
On 01/16/2018 09:50 PM, Andras Pataki wrote:
Dear Cephers,
*snipsnap*
We are running with a larger MDS cache than usual; we have
mds_cache_size set to 4 million. All other MDS settings are the
defaults.
AFAIK the MDS cache management in luminous has changed, focusing on
memory size instead of the number of inodes/caps/whatever.
We had to replace mds_cache_size with mds_cache_memory_limit to get
the MDS cache working as expected again. This may also be the cause of
your issue, since the default configuration uses quite a small cache.
You can check this with 'ceph daemonperf mds.XYZ' on the mds host.
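For example (mds.XYZ is a placeholder for your daemon id; run this on
the MDS host so the admin socket is reachable):

    # current limit and a live per-second view of cache/memory counters
    ceph daemon mds.XYZ config get mds_cache_memory_limit
    ceph daemonperf mds.XYZ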
If you change the memory limit you also need to account for a certain
overhead from memory allocation. There was a thread about this on
the mailing list some weeks ago; you should expect at least 50%
overhead. As with previous releases this is not a hard limit, and the
process may consume more memory in certain situations. Given that
bluestore OSDs no longer use the kernel page cache but maintain their
own memory cache, you need to plan the memory consumption of all ceph
daemons on the host. As an example, our MDS is configured with
mds_cache_memory_limit = 8000000000 and is consuming about 12 GB of
memory (RSS).
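If you want to cross-check the actual footprint against the configured
limit, something like this on the MDS host works (assuming the daemon
runs as a ceph-mds process):

    # resident set size of the ceph-mds process, in kilobytes
    ps -C ceph-mds -o rss,cmd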
Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com