Re: How to dispatch monitors in a multi-site cluster (ie in 2 datacenters)


 



Somnath Roy wrote:

> Interesting scenario :-).. IMHO, I don't think the cluster will be in a healthy state here if the connection between dc1 and dc2 is cut. The reason is the following.
> 
> 1. only osd.5 can talk to the OSDs in both data centers, and the other 2 mons will not be able to. So they can't reach an agreement (and form a quorum) about the state of the OSDs.
> 
> 2. The OSDs in dc1 and dc2 will not be able to talk to each other, so with replicas spread across the data centers, the cluster will be broken.

Yes. In fact, after some thought, I have a first question below.

If: (clearer with a diagram in one's head ;))

    1. mon.1 and mon.2 can talk together (in dc1) and can talk with mon.5 (via the VPN)
       but can't talk with mon.3 and mon.4 (in dc2)
    2. mon.3 and mon.4 can talk together (in dc2) and can talk with mon.5 (via the VPN)
       but can't talk with mon.1 and mon.2 (in dc1)
    3. mon.5 can talk with mon.1, mon.2, mon.3 and mon.4

is quorum reached? If so, which monitors form the quorum?

In fact, it seems very strange to me that there could be 2 different quorums
in the same cluster. Maybe there will be no quorum and that will be the end
of the story. ;)
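To make the ambiguity concrete, here is a small Python sketch (not Ceph code; the monitor names and groupings are just the scenario above) that checks which side of the partition holds a monmap majority:

```python
# Hypothetical sketch of the partition scenario above (not Ceph code).
# Quorum in Ceph requires a strict majority of the monitors in the monmap.

MONITORS = {"mon.1", "mon.2", "mon.3", "mon.4", "mon.5"}
MAJORITY = len(MONITORS) // 2 + 1  # 3 of 5

# Groups whose members can all talk to each other during the partition:
dc1_side = {"mon.1", "mon.2", "mon.5"}  # dc1 mons + mon.5 via the VPN
dc2_side = {"mon.3", "mon.4", "mon.5"}  # dc2 mons + mon.5 via the VPN

for side in (dc1_side, dc2_side):
    print(sorted(side), "has majority:", len(side) >= MAJORITY)
```

Both groups contain 3 of 5 monitors, so each could in principle reach a majority; but since mon.5 can only take part in one leader election at a time, at most one quorum should actually form rather than two.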

> But I am not 100% sure here; let's wait for responses from the Ceph gurus..

Yes, good idea. ;)

-- 
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
