Upper limit of MONs and MDSs in a Cluster

How much testing has been done on, and what are the implications of, running a large number of monitor and metadata server daemons in a single cluster?

Thus far I have deployed all of our Ceph clusters with a single service type per physical machine, but I am interested in a use case where we deploy dozens (or hundreds?) of boxes, each of which is a mon, mds, mgr, osd, and rgw all in one, all in a single cluster. I do realize it is somewhat trivial (with config management and all) to dedicate a couple of lean boxes as MDSs and MONs and expand only at the OSD level, but I'm still curious.

The use case I have in mind is backup targets, where pools span the entire cluster. I am looking to streamline the process for possible rack-and-stack situations where boxes can simply be added in place and booted up, and they auto-join the cluster as a mon/mds/mgr/osd/rgw, along the lines of the sketch below.
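To make that concrete, here is roughly the per-box bootstrap I am picturing. This is only a sketch using ceph-deploy commands; the hostname node42 and the device /dev/sdb are hypothetical, and the osd create syntax assumes ceph-deploy 2.x:

    # run from the admin node once the new box is racked and on the network
    ceph-deploy install node42
    ceph-deploy mon add node42        # grow the monitor quorum by one
    ceph-deploy mgr create node42
    ceph-deploy mds create node42     # comes up as a standby MDS
    ceph-deploy osd create --data /dev/sdb node42
    ceph-deploy rgw create node42

In practice the same sequence would be driven by config management at first boot rather than run by hand.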

So, does anyone run clusters with dozens of MONs and/or MDSs, or know of any testing with very high numbers of each? At the MDS level I would just be looking for 1 active, 1 standby-replay, and X standbys until multiple active MDSs are production-ready; roughly the following is what I have in mind.
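For concreteness, this is how I would expect to pin that MDS layout down, assuming the pre-Mimic style standby-replay settings in ceph.conf (the filesystem name cephfs and the daemon name mds.b are hypothetical):

    # keep a single active rank
    ceph fs set cephfs max_mds 1

    # ceph.conf section for the standby-replay daemon
    [mds.b]
        mds standby replay = true
        mds standby for rank = 0

    # any further MDS daemons started without standby settings
    # register as plain standbys (the "X standby" pool)

Thanks!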

--
Respectfully,

Wes Dillingham
Research Computing | Infrastructure Engineer
Harvard University | 38 Oxford Street, Cambridge, MA 02138 | Room 102

