Hi,

To summarize, my main question is: in a Ceph cluster, is it possible to have, among the monitors, one monitor that is not necessarily very powerful and that may suffer network latency, and still avoid a negative effect on the cluster?

Let me explain the context of my question, because it matters. Suppose I have two datacenters, dc1 and dc2, and suppose we can consider the network connection between dc1 and dc2 a real LAN, i.e. no latency problems between the two sites. I'm thinking about how to distribute the monitors between dc1 and dc2.

Suppose I have 5 monitors. I can put 2 monitors in dc1 and 3 monitors in dc2. If the connection between dc1 and dc2 is cut, the cluster in dc2 will continue to work, because the monitor quorum is still reached in dc2, but in dc1 the cluster will stop (no quorum).

Now, what happens if I do this instead: I put 2 monitors in dc1, 2 monitors in dc2, and the 5th monitor out on the WAN, for instance in a VM connected to the cluster network through VPN tunnels (one VPN tunnel between mon.5 and dc1, and one VPN tunnel between mon.5 and dc2). In this case, if the connection between dc1 and dc2 is cut (but the WAN connection and the VPN tunnels to dc1 and dc2 are OK), in theory the cluster will continue to work in both dc1 and dc2, because the quorum is reached on each side (mon.1, mon.2 and mon.5 for dc1; mon.3, mon.4 and mon.5 for dc2). Is that correct?

But in that case, how would it work? If a client in dc1 writes data to the OSDs of dc1, the data will not be present in the OSDs of dc2. That seems like a big problem to me, unless in fact the cluster simply does not work under the conditions I've described... And if mon.5 is not very powerful and has network latency, is that a problem for the Ceph clients?
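To make the counting in my two layouts explicit, here is a small sketch of the quorum arithmetic. This is pure majority counting (floor(n/2) + 1, which is how a Ceph monitor quorum is sized), not a claim about what Paxos actually does during a split:

```python
# Sanity check of the quorum arithmetic behind the two monitor layouts.
# A Ceph monitor quorum needs a strict majority: floor(n/2) + 1 monitors.

def quorum_size(n_monitors: int) -> int:
    """Minimum number of monitors needed to form a quorum (strict majority)."""
    return n_monitors // 2 + 1

TOTAL = 5
NEEDED = quorum_size(TOTAL)  # 3 of 5

# Layout 1: 2 monitors in dc1, 3 in dc2; the dc1<->dc2 link is cut.
dc1_alone = 2 >= NEEDED   # False: dc1 loses quorum and stops
dc2_alone = 3 >= NEEDED   # True:  dc2 keeps a quorum and keeps running

# Layout 2: 2 monitors in dc1, 2 in dc2, mon.5 on the WAN; the
# dc1<->dc2 link is cut, but both sites still reach mon.5 over VPN.
# Counting only, each side "sees" 3 monitors:
dc1_with_mon5 = 2 + 1 >= NEEDED   # True
dc2_with_mon5 = 2 + 1 >= NEEDED   # True
# NOTE: this is just counting; mon.5 can only be a member of one
# quorum at a time, which is exactly what my question is about.

print(NEEDED, dc1_alone, dc2_alone, dc1_with_mon5, dc2_with_mon5)
```

This prints `3 False True True True`, i.e. the counting alone suggests both sides could keep a quorum in layout 2, which is why I'm asking whether that is what actually happens.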
If I list only the IP addresses of mon.1 and mon.2 in the ceph.conf of the clients in dc1, and only the IP addresses of mon.3 and mon.4 in the ceph.conf of the clients in dc2, can I hope to avoid the slowness that mon.5 on the WAN could introduce?

Thanks in advance for your help.

--
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com