Re: Syncing cephfs from Ceph to Ceph

On 2020-09-09 15:51, Eugen Block wrote:
> Hi Simon,
> 
>> What about the idea of creating the cluster over two data centers?
>> Would it be possible to modify the crush map, so one pool gets
>> replicated over those two data centers and if one fails, the other one
>> would still be functional?
> 
> A stretched cluster is a valid approach, but you have to consider
> several things like MON quorum (you'll need a third MON independent of
> the two DCs) and failure domains and resiliency. The crush map and rules
> can be easily adjusted to reflect two DCs.
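
For reference, a minimal sketch of such a rule, assuming the hosts of
both DCs are grouped under "datacenter" buckets in the crush map (the
rule name and id below are just illustrative):

    rule replicated_two_dcs {
        id 1
        type replicated
        # pick both DC buckets, then two hosts within each
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

Combined with pool size 4 and min_size 2 this keeps two copies in each
DC, so a single DC failure still leaves enough replicas to serve I/O.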

There is a PR open to _explicitly_ support stretch clusters in Ceph [1].
For an explanation, you can watch the presentation Gregory Farnum gave
at FOSDEM 2020 [2]. Fortunately you can pause the video, as he goes
quite fast ;-).
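
If that PR lands in the form presented there, enabling it would look
roughly like the following (the exact syntax may still change before it
merges; MON names, DC names and the rule name here are made up):

    ceph mon set election_strategy connectivity
    ceph mon set_location a datacenter=dc1
    ceph mon set_location b datacenter=dc1
    ceph mon set_location c datacenter=dc2
    ceph mon set_location d datacenter=dc2
    ceph mon set_location e datacenter=arbiter
    ceph mon enable_stretch_mode e stretch_rule datacenter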

Better than two DCs is three DCs: with one replica (and one MON) per DC,
losing a single DC still leaves a MON quorum and enough replicas to keep
serving I/O. And even better than three is, of course, four DCs, so you
can fully recover from a complete DC failure.

Gr. Stefan

[1]: https://github.com/ceph/ceph/pull/32336
[2]:
https://archive.fosdem.org/2020/schedule/event/sds_ceph_stretch_clusters/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


