________________________________
> Date: Fri, 24 Apr 2015 17:29:40 +0530
> From: vumrao@xxxxxxxxxx
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: rgw geo-replication
>
> On 04/24/2015 05:17 PM, GuangYang wrote:
> > Hi cephers,
> > Recently I have been investigating the geo-replication of rgw. From the example at [1], it looks like if we want to do geo data replication between US East and US West, we need to build *one* (super) RADOS cluster spanning US East and West, and only deploy two different radosgw instances. Is my understanding correct here?
>
> You can do that, but it is not recommended. I think the doc says it would be much better to have two clusters with different radosgw servers.
> https://ceph.com/docs/master/radosgw/federated-config/#background
>
> 1. You may deploy a single Ceph Storage Cluster with a federated
> architecture if you have low latency network connections (this isn’t
> recommended).
>
> 2. You may also deploy one Ceph Storage Cluster per region with a
> separate set of pools for each zone (typical).

This confuses me. In the typical recommendation, we would need to deploy a single cluster whose mons/OSDs span US East and US West (say, using CRUSH to create two sets of pools corresponding to the two zones). From an availability point of view, an outage of that one cluster takes everything down, whereas if we have two clusters and replicate data between them, an outage of one cluster lets us redirect all traffic to the other.

> 3. You may also deploy a separate Ceph Storage Cluster for each zone if
> your requirements and resources warrant this level of redundancy.

I think this makes more sense: one region (the region is only logical), two zones within that region, each zone physically mapping to a standalone cluster, with data replicated between those zones/clusters. Is that supported?

Thanks,
Guang

> Regards,
> Vikhyat
>
> > If that is the case, is there any reason preventing us from deploying two completely isolated clusters (not only rgw, but also mon and osd) and replicating data between them?
> >
> > [1] https://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication
> >
> > Thanks,
> > Guang

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
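To make the one-region/two-zone layout concrete, here is a minimal sketch, assuming two radosgw endpoints (rgw-us-east.example.com for the master zone and rgw-us-west.example.com for the secondary zone, each backed by its own cluster) and a user whose access/secret keys exist in both zones; the hostnames, keys, and bucket name are illustrative placeholders, not values from this thread. It writes an object through the master zone with boto and then polls the secondary zone to see whether the object has been replicated, which is roughly how one would verify that cross-zone data sync is working.

```python
# Sketch: check object replication between two zones (assumed endpoints and keys).
import time

import boto
import boto.s3.connection


def connect(host, access_key, secret_key):
    """Open an S3 connection to a radosgw endpoint (plain HTTP for brevity)."""
    return boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host=host,
        port=80,
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )


# Hypothetical endpoints: one radosgw per zone, each zone in its own cluster.
east = connect('rgw-us-east.example.com', 'EAST_ACCESS_KEY', 'EAST_SECRET_KEY')
west = connect('rgw-us-west.example.com', 'WEST_ACCESS_KEY', 'WEST_SECRET_KEY')

# Write through the master (us-east) zone.
bucket = east.create_bucket('replication-test')
bucket.new_key('hello.txt').set_contents_from_string('hello from us-east')

# Poll the secondary (read-only) zone until the sync agent has copied the object.
# validate=False skips the HEAD check in case bucket metadata has not synced yet.
west_bucket = west.get_bucket('replication-test', validate=False)
for _ in range(30):
    key = west_bucket.get_key('hello.txt')
    if key is not None:
        print('replicated:', key.get_contents_as_string())
        break
    time.sleep(10)
else:
    print('object not visible in us-west yet')
```

In the federated setup described in the linked docs, the secondary zone is read-only and radosgw-agent carries metadata and data from the master zone to it, which is why the sketch only writes to us-east and reads from us-west.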