How are split-brain situations handled in Ceph?

Hi

I was discussing a potential Ceph setup, and a question arose about how Ceph handles a split-brain situation. I have my own ideas about how that would be handled, but I want to consult the wider knowledge base here to verify my understanding.

So, let's say we have two data centres. Replication is configured so there are 3 replicas, with at least one copy in each data centre. Also, there is an odd number of MONs in the cluster.
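For concreteness, roughly what I have in mind is a CRUSH rule along these lines (rule and bucket names are just examples, and I am assuming a CRUSH hierarchy with a datacenter level; please correct me if this is not how you would express "at least one copy per DC"):

    rule replicated_two_dc {
        id 1
        type replicated
        step take default
        # pick both datacenters, then up to two hosts in each, so a
        # size=3 pool gets 2 copies in one DC and 1 in the other
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

with the pool set to three replicas, e.g.

    ceph osd pool set mypool size 3
    ceph osd pool set mypool crush_rule replicated_two_dc

(the pool name is made up, and I know the option for assigning the rule is spelled differently on older releases).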

If we now get a network split, we end up with 2 replicas in one data centre (A) and 1 in the other (B). In theory we should be fine, as no data is lost, and if there is more than one OSD in B it will re-balance.

But what happens now when it comes to writes? If we write to both sides of the split, we've lost.
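
My own guess is that min_size is part of the answer here: with size 3 and min_size 2, a PG that can only reach its single copy in B should go inactive and block I/O, i.e. something like

    ceph osd pool set mypool min_size 2

(pool name made up again). But I would rather have that confirmed than assume it.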

If the one MON in B were to keep running with quorum within itself, while in A the MON cluster votes and reaches quorum again, we would end up with two clusters, both accepting writes to the same objects on their 2 and 1 replicas.
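
I realise the MONs are supposed to need a strict majority of the monmap (floor(N/2) + 1), so with for example 3 MONs a lone MON in B should never be able to form quorum on its own, but I would like that confirmed. Also, is checking

    ceph quorum_status --format json-pretty
    ceph mon stat

on each side during such a split the right way to see which side, if any, still holds a working quorum?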

So, how do Paxos, CRUSH and the other protocols make sure that both sides of the split are not active at the same time?

Pointers to the documentation are appreciated, as well as other explanations.

/andreas

--
"economics is a pseudoscience; the astrology of our time"
Kim Stanley Robinson