> We have an application cluster and Ceph as the storage solution. The cluster
> consists of six servers, so we've installed a monitor on every one of them,
> to keep the Ceph cluster sane (quorum) if a server or two of them goes down.

You want an odd number for sure, to avoid the classic split-brain problem:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/

Quorum needs a strict majority of the monitors to be up, so six mons tolerate
the same two failures as five; the extra even member adds load without adding
fault tolerance. I think the bit re: diminishing returns with >5 mons was told
to me by a consultant, but I don't have a reference. The more mons you have,
the more traffic they have to exchange among themselves; I suspect that's not
a huge deal until N gets a lot bigger.

> or is it not necessary/recommended to have mon on node with osds?

I've read multiple documents recommending against an all-in-one (AIO) config,
IIRC e.g. so that heavy backfilling or client operations to the OSDs don't
starve the mons. Best to search around a bit; the size/density/number/workload
of your OSDs is likely a significant factor.

On a small cluster I can see the appeal of an AIO strategy, unless you perhaps
have hypervisors on the appropriate network and might consider running mons as
VMs with resource reservations.

—aad
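
For concreteness, here is the majority arithmetic behind the odd-number advice
as a small Python sketch; the helper name is illustrative only, not anything
from Ceph itself.

    # Quorum arithmetic for a majority-voting monitor cluster: a strict
    # majority of monitors must be reachable for quorum to form.
    # mon_failure_tolerance is an illustrative name, not a Ceph API.

    def mon_failure_tolerance(n_mons: int) -> int:
        """How many monitors can be down while quorum can still form."""
        majority = n_mons // 2 + 1   # smallest strict majority of n_mons voters
        return n_mons - majority     # failures that still leave a majority up

    for n in range(3, 8):
        print(f"{n} mons: need {n // 2 + 1} up, tolerate {mon_failure_tolerance(n)} down")

    # 3 mons: need 2 up, tolerate 1 down
    # 4 mons: need 3 up, tolerate 1 down
    # 5 mons: need 3 up, tolerate 2 down
    # 6 mons: need 4 up, tolerate 2 down   <- same tolerance as 5, one more voter to feed
    # 7 mons: need 4 up, tolerate 3 down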