But it does work well. We've been running a few clusters with MON, MDS, and OSDs sharing the same hosts for a couple of years now.

Keep in mind that the MDS is CPU-bound, so during heavy workloads it will consume a lot of CPU; the OSD daemons can therefore affect, or be affected by, the MDS daemon.
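If that contention ever becomes a problem, one possible mitigation (a sketch only, not something we claim is required; the core list below is a placeholder, adjust it to your host's topology) is to pin the MDS to dedicated cores with a systemd drop-in:

    # pin the MDS daemon to cores 0-3 (placeholder range)
    mkdir -p /etc/systemd/system/ceph-mds@.service.d
    cat > /etc/systemd/system/ceph-mds@.service.d/override.conf <<EOF
    [Service]
    CPUAffinity=0 1 2 3
    EOF
    systemctl daemon-reload
    systemctl restart ceph-mds.target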
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
On Tue, Jun 19, 2018 at 11:03 AM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
Just co-locate them with your OSDs. You can control how much RAM the MDSs use with the "mds cache memory limit" option (default: 1 GB). Note that the cache should be large enough to keep the active working set in memory, but 1 million files is not really a lot. As a rule of thumb: ~1 GB of MDS cache per ~100k files. 64 GB of RAM for 12 OSDs and an MDS is enough in most cases.

Paul
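For example, to raise the limit to 4 GB (the value is in bytes; 4 GB is only an illustrative size, tune it to your working set), you could set it persistently in ceph.conf or inject it at runtime:

    # ceph.conf on the MDS hosts (persistent):
    [mds]
    mds cache memory limit = 4294967296

    # or at runtime (not persistent across restarts):
    ceph tell mds.* injectargs '--mds_cache_memory_limit 4294967296'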
2018-06-19 15:34 GMT+02:00 Denny Fuchs <linuxmail@xxxxxxxx>:

Hi,

On 19.06.2018 15:14, Stefan Kooman wrote:
Storage doesn't matter for the MDSs, as they don't store Ceph data locally
(they store metadata in the metadata pool instead).
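You can see that split when a filesystem is created: metadata and data go into separate RADOS pools (pool names and PG counts below are placeholders):

    ceph osd pool create cephfs_data 64
    ceph osd pool create cephfs_metadata 16
    ceph fs new cephfs cephfs_metadata cephfs_data

Putting the metadata pool on fast OSDs is a common way to exploit that separation.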
I would not co-locate the MDS daemons with the OSDs, but instead create a
couple of VMs (active/standby) and give them as much RAM as you
possibly can.
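A minimal sketch of such an active/standby pair (daemon names mds.a / mds.b are placeholders; any extra ceph-mds daemon becomes a standby automatically, standby-replay just speeds up failover):

    # ceph.conf
    [mds.a]
    # first daemon to register becomes the active MDS

    [mds.b]
    mds standby for rank = 0
    mds standby replay = true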
Thanks a lot. I think we would start with roughly 8 GB and see what happens.
cu denny
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90