Re: ceph replication and data redundancy

On 01/21/2013 02:08 PM, Joao Eduardo Luis wrote:
On 01/21/2013 08:14 AM, Ulysse 31 wrote:
Hi everybody,

In fact, while searching the docs I found, in the "adding/removing a
monitor" section, information about the Paxos system used to establish
quorum. Following the documentation, in a catastrophe scenario I would
need to remove the monitors configured in the other buildings.
For better efficiency, I think I'll keep 1 monitor per building, and,
if the two other buildings fail, I will delete those two monitors from
the configuration in order to access the data again.
I'll simulate that and see if it goes well.
Thanks for your help and advice.
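A quick way to see why the manual removal step is needed: the monitors need a strict majority of the *configured* monitor set to form a quorum, so two dead monitors out of three block the cluster until they are removed from the map. A small illustrative sketch of the arithmetic (not Ceph code, just the majority rule):

```python
def quorum_size(n_configured: int) -> int:
    """Smallest strict majority of n configured monitors (Paxos quorum)."""
    return n_configured // 2 + 1

def has_quorum(n_configured: int, n_alive: int) -> bool:
    """True if the surviving monitors can still form a quorum."""
    return n_alive >= quorum_size(n_configured)

# 3 monitors, one per building: losing 2 buildings leaves 1 of 3 alive.
print(has_quorum(3, 1))  # False -> cluster unavailable until the two dead
                         # monitors are removed from the configuration
# After removing the 2 failed monitors, 1 of 1 remains.
print(has_quorum(1, 1))  # True -> quorum restored
```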

If you are set on that approach, you could just as well add a third
monitor in one of the buildings (whichever you feel is more
resilient), and cut down the chance of an unavailable cluster if the
other fails.

It doesn't solve your problem, but if the building with just one monitor
fails, your cluster will still be available; if it's the other way
around, you can still do the manual recovery just the same.
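The asymmetry of that 2+1 layout can be checked with the same majority arithmetic; a sketch, assuming a hypothetical placement of two monitors in building A and one in building B:

```python
def quorum_size(n: int) -> int:
    """Strict majority of n configured monitors."""
    return n // 2 + 1

def survives(placement: dict, failed_site: str) -> bool:
    """Does the cluster keep quorum when one whole site fails?
    placement maps site name -> number of monitors there (hypothetical)."""
    total = sum(placement.values())
    alive = total - placement[failed_site]
    return alive >= quorum_size(total)

layout = {"A": 2, "B": 1}          # 2 monitors in one building, 1 in the other
print(survives(layout, "B"))       # True: 2 of 3 survive, quorum holds
print(survives(layout, "A"))       # False: 1 of 3 survives, manual recovery needed
```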


Another approach: if possible, try to add a third monitor in a "neutral" place.

I don't know what your network looks like, but you might be able to put up a monitor in an external datacenter and connect it over a VPN?

That assumes both buildings have their own external internet connection.
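With one monitor per location, including a neutral third site, any single location can fail without losing quorum. A sketch of that arithmetic, with hypothetical site names:

```python
def quorum_size(n: int) -> int:
    """Strict majority of n configured monitors."""
    return n // 2 + 1

# One monitor each: building A, building B, and a neutral datacenter C.
placement = {"A": 1, "B": 1, "C": 1}
total = sum(placement.values())

for site in placement:
    alive = total - placement[site]
    # Every single-site failure leaves 2 of 3 monitors: quorum holds.
    print(site, alive >= quorum_size(total))
```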

Wido

   -Joao


