Inter-region data replication through radosgw

Hi, Lewis!
With your approach, there will be a contradiction because of the restrictions on a secondary zone:
in a secondary zone, one can't perform any object operations.
Let me give an example. First, I'll define the symbols.


The instances of cluster 1:
M1: master zone of cluster 1
S2: slave zone for M2 of cluster 2; the objects of cluster 2 are synced from M2 to S2
I13: a third gateway instance in cluster 1 (M1 and S2 are instances too)


The instances of cluster 2:
M2: master zone of cluster 2
S1: slave zone for M1 of cluster 1; the objects of cluster 1 are synced from M1 to S1
I23: a third gateway instance in cluster 2 (M2 and S1 are instances too)


cluster 1:  M1  S2  I13
cluster 2:  M2  S1  I23


Questions:
1. If I upload objects through I13 of cluster 1, are they synced to cluster 2 via M1?
2. In cluster 1, can I operate on the objects synced from cluster 2, through M1 or I13?
3. If I upload an object to cluster 1, its metadata is synced to cluster 2 before the object data. Suppose the metadata has been synced but the data has not, and cluster 1 then goes down, so the object has not been fully synced. If I now upload the same object to cluster 2, can it succeed?
I think it will fail: cluster 2 already has the object's metadata, so it will consider the object to exist in cluster 2, and since that object was synced from cluster 1, I have no permission to operate on it.
Am I right?


Because of the restrictions on object operations in a slave zone, I think there will be some contradiction.


Looking forward to your reply.
Thanks!







At 2014-05-22 07:12:17, "Craig Lewis" <clewis at centraldesktop.com> wrote:

On 5/21/14 09:02, Fabrizio G. Ventola wrote:

Hi everybody,

I'm reading the docs on replication through radosgw. They only talk
about inter-region METAdata replication, nothing about data
replication.

My question is: is it possible to have everything geo-replicated
through radosgw? We currently have two geographically separated Ceph
clusters, and we want to use radosgw to keep replicas across our
two clusters.

Is it possible to read/write on both replicas (one placed in the
primary region and one in the secondary) through radosgw? I'm asking
because the docs suggest writing only to a master zone and avoiding
writes to secondary zones. Is it the same for primary/secondary
regions?


Cheers,
Fabrizio

The federated setup will replicate both data and metadata.  You can do just metadata if you want, but it's not the default.
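
For reference, the replication itself is driven by the separate radosgw-agent tool. Below is a rough sketch of a metadata-only run, wrapped in Python purely for illustration; the endpoints and keys are placeholders, and the exact flag names can differ by agent version, so treat them as assumptions and check radosgw-agent --help:

    import subprocess

    # Sync only users and buckets (metadata) from the source zone to a peer.
    # All hosts and credentials below are placeholders.
    subprocess.check_call([
        'radosgw-agent',
        '--src-access-key', 'SRC_ACCESS_KEY',    # system user on the source zone
        '--src-secret-key', 'SRC_SECRET_KEY',
        '--dest-access-key', 'DEST_ACCESS_KEY',  # system user on the destination
        '--dest-secret-key', 'DEST_SECRET_KEY',
        '--src-zone', 'us-west-1',
        '--metadata-only',                       # drop this flag to sync data too
        'http://us-east-1.rgw.example.com:80',   # destination zone endpoint
    ])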

You can have all of the RadosGW data geo-replicated.  Geo-replication of raw RADOS isn't possible yet, and RBD replication is under development.

You can read from both the master and slave, but you don't want to write to a slave.  The master and slave have different URLs, so it's up to you to use the appropriate URL.
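
For example, with boto (the endpoints, credentials, and bucket name here are made-up placeholders, not anything specific to your setup):

    import boto
    import boto.s3.connection

    def zone_conn(host):
        # Both zones accept the same user/keys because user and bucket
        # metadata is replicated between them. Placeholder credentials.
        return boto.connect_s3(
            aws_access_key_id='ACCESS_KEY',
            aws_secret_access_key='SECRET_KEY',
            host=host,
            is_secure=False,
            calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

    master = zone_conn('rgw-master.example.com')  # master zone URL: reads and writes
    slave = zone_conn('rgw-slave.example.com')    # slave zone URL: reads only

    bucket = master.create_bucket('demo-bucket')
    bucket.new_key('hello.txt').set_contents_from_string('written at the master')

    # Once the sync agent has replicated the object, it can be read
    # back through the slave zone's URL.
    print(slave.get_bucket('demo-bucket').get_key('hello.txt').get_contents_as_string())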

You can run multiple zones in each cluster, as long as each zone has its own URL.  If you do this, you can either share apache/radosgw/OSDs across all the zones, or dedicate them to specific zones.  It's entirely possible for multiple zones in one cluster to share everything, or to share just the monitors.

If you really want both clusters to handle writes, this is how you'd do it (a client-side sketch follows below):
ClusterWest1 contains us-west-1 (master) and us-west-2 (slave for us-east-2).
ClusterEast1 contains us-east-1 (slave for us-west-1) and us-east-2 (master).
If users and buckets need to be globally unique across all zones, set up metadata (not data) replication between the two zones.
Write to us-west-1 or us-east-2; it's up to you.
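
To make that concrete, a minimal boto sketch of the client side (the endpoint hostnames and keys are invented placeholders):

    import boto
    import boto.s3.connection

    def zone_conn(host):
        # Placeholder credentials; each zone is served at its own URL.
        return boto.connect_s3(
            aws_access_key_id='ACCESS_KEY',
            aws_secret_access_key='SECRET_KEY',
            host=host,
            is_secure=False,
            calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

    # Each application writes to the master zone in its local cluster.
    west = zone_conn('us-west-1.rgw.example.com')  # master in ClusterWest1
    east = zone_conn('us-east-2.rgw.example.com')  # master in ClusterEast1

    # An object written in the west lands in us-west-1; with data
    # replication enabled it syncs to its slave us-east-1 in ClusterEast1,
    # and the mirror-image path applies to writes in the east.
    west.create_bucket('west-data').new_key('obj').set_contents_from_string('west payload')
    east.create_bucket('east-data').new_key('obj').set_contents_from_string('east payload')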


This replication setup makes more sense when you have three or more data centers and you set them up in a ring.


Does that help?


--


Craig Lewis
Senior Systems Engineer
Office +1.714.602.1309
Email clewis at centraldesktop.com


