Re: How to dispatch monitors in a multi-site cluster (ie in 2 datacenters)

Hi Francois,
Interesting scenario :-). IMHO, I don't think the cluster will be in a healthy state here if the connection between dc1 and dc2 is cut. The reasons are the following.

1. Only mon.5 can talk to the OSDs of both data centers; the 2 monitors in each data center cannot. So, they can't reach an agreement (and form a quorum) about the state of the OSDs.

2. The OSDs in dc1 and dc2 will not be able to talk to each other; with replicas spread across the data centers, the cluster will be broken (a few commands to check both points are sketched below).
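
For what it's worth, a quick way to check both points from a node that still has a working admin keyring would be something like this (standard Ceph CLI commands; I can't say exactly what they would report in this split scenario):

    # which monitors are currently in quorum, and who is the leader?
    ceph quorum_status --format json-pretty
    ceph mon stat

    # which OSDs does the quorum consider up/in?
    ceph osd stat
    ceph osd tree

    # overall health, including degraded/stuck PGs if replication is broken
    ceph health detail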


But I am not 100% sure here; let's wait for responses from the Ceph gurus.

Thanks & Regards
Somnath


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Francois Lafont
Sent: Sunday, April 12, 2015 10:04 AM
To: ceph-users@xxxxxxxx
Subject:  How to dispatch monitors in a multi-site cluster (ie in 2 datacenters)

Hi,

To summarize, my main question is: in a Ceph cluster, is it possible to have, among the monitors, one monitor that is not very powerful and that has potentially high network latency, and still avoid a negative effect on the cluster?

Let me explain the context of my question, because it's important. Suppose I have 2 datacenters: dc1 and dc2. And suppose that the network connection between dc1 and dc2 can be considered a real LAN, i.e. no latency problems between dc1 and dc2, etc. I'm thinking about how to distribute the monitors between dc1 and dc2. Suppose I have 5 monitors. I can put 2 monitors in dc1 and 3 monitors in dc2. If the connection between dc1 and dc2 is cut, then the cluster in dc2 will continue to work because the monitor quorum is reached in dc2, but in dc1 the cluster will be stopped (no quorum).
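
For illustration, with hypothetical host names and addresses (10.0.1.x for dc1, 10.0.2.x for dc2), this 2+3 layout could look roughly like this in ceph.conf; with 5 monitors, a quorum needs at least 3 of them:

    [global]
        mon initial members = mon1, mon2, mon3, mon4, mon5

    [mon.mon1]
        host = mon1-dc1
        mon addr = 10.0.1.11:6789
    [mon.mon2]
        host = mon2-dc1
        mon addr = 10.0.1.12:6789
    [mon.mon3]
        host = mon3-dc2
        mon addr = 10.0.2.11:6789
    [mon.mon4]
        host = mon4-dc2
        mon addr = 10.0.2.12:6789
    [mon.mon5]
        host = mon5-dc2
        mon addr = 10.0.2.13:6789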

Now, what happens if I do this instead: I put 2 monitors in dc1, 2 monitors in dc2, and the 5th monitor in the WAN, for instance in a VM linked to the cluster network by a VPN tunnel (one VPN tunnel between mon.5 and dc1, and one VPN tunnel between mon.5 and dc2). In this case, if the connection between dc1 and dc2 is cut (but the WAN connection and the VPN tunnels to dc1 and dc2 are OK), in theory the cluster will continue to work in dc1 and in dc2 because the quorum is reached (mon.1, mon.2 and mon.5 in dc1, and mon.3, mon.4 and mon.5 in dc2). Is that correct?
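
To make the 2+2+1 layout concrete, it could be sketched with monmaptool (the addresses are made up: 10.0.1.x for dc1, 10.0.2.x for dc2, and 203.0.113.5 for the VM on the WAN):

    monmaptool --create --clobber \
        --add mon1 10.0.1.11:6789 --add mon2 10.0.1.12:6789 \
        --add mon3 10.0.2.11:6789 --add mon4 10.0.2.12:6789 \
        --add mon5 203.0.113.5:6789 \
        /tmp/monmap
    monmaptool --print /tmp/monmap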

But in this case, how does it work? If a client in dc1 writes data to the OSDs of dc1, the data will not be present in the OSDs of dc2. It seems to me that this is a big problem, unless in fact the cluster does not work at all under the conditions I've described...
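
(To make my assumption explicit: this problem only arises if the CRUSH rule really spreads the replicas across the two datacenters, i.e. something like the sketch below, which assumes a CRUSH map that declares "datacenter" buckets for dc1 and dc2.)

    rule replicated_across_dc {
        ruleset 1
        type replicated
        min_size 2
        max_size 10
        step take default
        step chooseleaf firstn 0 type datacenter
        step emit
    }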

And if mon.5 is not very powerful and has high network latency, is it a problem for the Ceph clients? If I list only the IP addresses of mon.1 and mon.2 in the ceph.conf file of the clients in dc1, and only the IP addresses of mon.3 and mon.4 for the clients in dc2, can I hope to avoid the slowness that mon.5 in the WAN could introduce?
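
For example, I imagine the ceph.conf of a client in dc1 would then simply contain something like this (hypothetical addresses again):

    [global]
        # only the two dc1 monitors; mon.5 on the WAN is deliberately left out
        mon host = 10.0.1.11:6789, 10.0.1.12:6789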

Thanks in advance for your help.

--
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






