Re: Minimal MDS for CephFS on OSD hosts

Just co-locate them with your OSDs. You can control how much RAM the MDSs use with the "mds cache memory limit" option (default: 1 GB).
Note that the cache should be large enough to keep the active working set in the MDS cache, but 1 million files is not really a lot.
As a rule of thumb: ~1 GB of MDS cache per ~100k files.
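
For example, a 4 GB cache would look like this in ceph.conf (the value is in bytes; the size here is only an illustration of the rule of thumb, not a recommendation for your cluster):

    [mds]
    # ~4 GiB of metadata cache, roughly 400k files by the rule of thumb above
    mds cache memory limit = 4294967296

The same value can also be applied at runtime without restarting the daemons:

    ceph tell mds.* injectargs '--mds_cache_memory_limit=4294967296'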

64GB of RAM for 12 OSDs and an MDS is enough in most cases.
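
A rough budget, assuming each BlueStore OSD is allowed its usual default of roughly 3-4 GB of memory (actual usage varies with recovery and cache pressure):

    12 OSDs x ~3-4 GB                    ~ 36-48 GB
    MDS cache + daemon overhead          ~  2-6 GB
    OS, networking, page cache headroom  ~ remainder

which fits within 64 GB with room to spare.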

Paul

2018-06-19 15:34 GMT+02:00 Denny Fuchs <linuxmail@xxxxxxxx>:
Hi,

On 19.06.2018 15:14, Stefan Kooman wrote:

Storage doesn't matter for the MDS, as it won't use it to store Ceph data
(it stores metadata in the metadata pool instead).
I would not colocate the MDS daemons with the OSDs, but instead create a
couple of VMs (active / standby) and give them as much RAM as you
possibly can.

Thanks a lot. I think we would start with roughly 8 GB and see what happens.

cu denny




--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
