Monitors are in charge of the CRUSH map. Whenever there is a change (an OSD
goes down, a new OSD is added, the number of PGs is increased, etc.), the
monitor(s) build a new CRUSH map and distribute it to all clients and OSDs.
Once a client has the CRUSH map, it does not need to contact a monitor for
placement or retrieval of an object, because any object's location can be
computed by the client itself. [1][2]

Having your monitors on a 1 Gb link may be just fine; it depends on how many
OSDs you have and what the traffic looks like when you are doing backfills.
It is suggested that the monitors have very fast disks, since a monitor
commits each new map to disk before sending it out to clients/OSDs.

[1] http://ceph.com/docs/master/rados/operations/crush-map/
[2] http://ceph.com/docs/master/architecture/#scalability-and-high-availability

On Tue, Jan 6, 2015 at 1:37 PM, Logan Barfield <lbarfield@xxxxxxxxxxxxx> wrote:
> Do monitors have any impact on read/write latencies? Everything I've read
> says no, but since a client needs to talk to a monitor before reading or
> writing to OSDs it would seem like that would introduce some overhead.
>
> I ask for two reasons:
>
> 1) We are currently using SSD-based OSD nodes for our RBD pools. These
> nodes are connected to our hypervisors over 10Gbit links for VM block
> devices. The rest of the cluster is on 1Gbit links, so the RBD nodes
> contact the monitors across 1Gbit instead of 10Gbit. I'm not sure if this
> would degrade performance at all.
>
> 2) In a multi-datacenter cluster a client may end up contacting a monitor
> located in a remote location (e.g., over a high-latency WAN link). I would
> think the client would have to wait for a response from the monitor before
> beginning read/write operations on the local OSDs.
>
> I'm not sure exactly what the monitor interactions are. Do clients only
> pull the cluster map from the monitors (and then ping them occasionally for
> updates), or do clients talk to the monitors every time they write a new
> object, to determine which placement group / OSDs to write to or read from?
>
> Thank You,
>
> Logan Barfield
> Tranquil Hosting
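
To make the last question above concrete: a client talks to a monitor when
it connects, to authenticate and to pull the cluster maps, and afterwards
only to keep its maps up to date; per-object reads and writes never involve
a monitor. Here is a rough python-rados sketch of that (untested; the
conffile path and the 'rbd' pool name are just placeholders for whatever
your setup uses):

    import rados

    # Connecting is the point where the client talks to a monitor:
    # it authenticates and pulls down the current cluster maps
    # (including the CRUSH map).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Placeholder pool name; use one that exists in your cluster.
        ioctx = cluster.open_ioctx('rbd')
        try:
            # No monitor round trip here: the client hashes the object
            # name to a PG, runs CRUSH locally to find the OSDs, and
            # then talks to the primary OSD for that PG directly.
            ioctx.write_full('test-object', b'hello ceph')
            print(ioctx.read('test-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

You can see the same client-side mapping from the CLI with
"ceph osd map <pool> <object>", which prints the PG and the acting OSD set
for an object name without writing anything.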
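
And in case it helps to see why no per-object monitor lookup is needed:
placement is a pure function of the object name, the pool, and the maps the
client already holds, so any client with the same maps computes the same
answer. The toy sketch below is NOT the real algorithm (Ceph uses the
rjenkins hash and CRUSH against the bucket hierarchy, not md5 and a sorted
list); it only shows the shape of the computation:

    import hashlib

    def toy_placement(object_name, pool_id, pg_num, osd_ids, replicas=3):
        """Toy illustration only, not Ceph's actual hashing or CRUSH.
        Everything is derived from the object name plus the maps the
        client already holds, so no lookup service is involved."""
        # 1) Hash the object name onto a placement group in the pool.
        h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
        pg = h % pg_num
        # 2) Deterministically rank the OSDs for this PG (a stand-in
        #    for CRUSH walking the crushmap hierarchy).
        ranked = sorted(osd_ids, key=lambda osd: hashlib.md5(
            ("%d.%d.%d" % (pool_id, pg, osd)).encode()).hexdigest())
        return pg, ranked[:replicas]

    print(toy_placement("test-object", pool_id=2, pg_num=128,
                        osd_ids=range(12)))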