On 12/11/2015 08:12 PM, Alex Gorbachev wrote:
> This is a proactive message to summarize best practices and options for
> working with monitors, especially in a larger production environment
> (larger for me is > 3 racks).
>
> I know MONs do not require a lot of resources, but they benefit from
> running on SSDs for response time. You also need an odd number, since a
> simple majority must be present to form quorum. MONs use leveldb, and
> that data is constantly changing, so traditional backups are not
> particularly relevant or useful.
>
> There has been uncertainty about whether more than 3 MONs will cause any
> performance issues in a cluster. To that end, may I ask both the Ceph
> development community and the excellent power-user contributors on this
> list:
>
> - Is there any performance impact to running > 3 MONs?
>

No, but as your cluster grows larger, with >100k PGs, you might need
additional monitors to handle all the PG stats.

> - Is anyone running > 3 MONs in production, and what are your experiences?
>

Yes, running 5 in multiple setups with >1000 OSDs. Works just fine.

> - Has anyone had a need to back up their MONs, and any recovery
>   experience, such as
>   http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors ?
>

I do it sometimes, just for disaster purposes. I have never needed them to
recover a cluster.

What you could do every day is:

$ ceph osd dump -o osdmap
$ ceph pg dump -o pgmap
$ ceph mon getmap -o monmap
$ ceph osd getcrushmap -o crushmap

That will give you some metadata.

> Our cluster has 8 racks right now, and I would love to place a MON at
> the top of the rack (maybe on SDN switches in the future - why not?).
> Thank you for helping answer these questions.
>
> --
> Alex Gorbachev
> Storcium

--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
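
A quick way to see the quorum behaviour Alex describes (an odd number of
MONs so that a simple majority is always reachable) is to query the
monitors directly. These are standard commands; only the output layout
varies a bit between releases:

$ ceph mon stat
$ ceph quorum_status --format json-pretty

The first prints the monmap epoch and which monitors are in quorum; the
second shows the quorum set, the elected leader, and the full monmap.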
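
The four map dumps Wido lists can be wrapped in a small script and run from
cron so a dated copy is always on hand. This is only a sketch under a few
assumptions: a /var/backups/ceph-maps directory, a host with a working
client.admin keyring, and crushtool installed for the optional decompile
step:

#!/bin/sh
# Dump the cluster maps served by the monitors as point-in-time metadata.
# This does not replace the monitor stores themselves; it only preserves
# the osd/pg/mon/crush maps mentioned above.
set -e

DEST=/var/backups/ceph-maps/$(date +%Y-%m-%d)   # assumed backup location
mkdir -p "$DEST"

ceph osd dump -o "$DEST/osdmap"                 # current OSD map
ceph pg dump -o "$DEST/pgmap"                   # PG states and statistics
ceph mon getmap -o "$DEST/monmap"               # monitor map
ceph osd getcrushmap -o "$DEST/crushmap"        # binary CRUSH map

# Optional: keep a human-readable CRUSH map next to the binary one.
crushtool -d "$DEST/crushmap" -o "$DEST/crushmap.txt"

Run it daily from cron, e.g. "0 2 * * * root /usr/local/sbin/ceph-map-backup.sh"
in /etc/cron.d/ceph-map-backup (path and schedule are just examples).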
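
If you also want a copy of an actual monitor store (the kind of backup the
linked blog post is about), the usual pattern is to stop a single monitor,
archive its data directory, and start it again; the remaining monitors keep
quorum while it is down. A rough sketch, assuming mon.a on a systemd-managed
host and the default /var/lib/ceph/mon/<cluster>-<id> layout:

$ sudo systemctl stop ceph-mon@a
$ sudo tar czf /var/backups/ceph-mon-a-$(date +%F).tar.gz /var/lib/ceph/mon/ceph-a
$ sudo systemctl start ceph-mon@a

Only take one monitor down at a time, and check with "ceph mon stat" that it
has rejoined the quorum before touching the next one.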