On 5/21/14 22:55, wsnote wrote:
> Hi, Lewis!
> With your way, there will be a contradiction because of the limits of a
> secondary zone. In a secondary zone, one can't do any file operations.
> Let me give an example. I'll define the symbols first.
>
> The instances of cluster 1:
> M1: master zone of cluster 1
> S2: slave zone for M2 of cluster 2; the files of cluster 2 will be
> synced from M2 to S2
> I13: the third instance of cluster 1 (M1 and S2 are both instances too)
>
> The instances of cluster 2:
> M2: master zone of cluster 2
> S1: slave zone for M1 of cluster 1; the files of cluster 1 will be
> synced from M1 to S1
> I23: the third instance of cluster 1 (M2 and S1 are both instances too)
>
> cluster 1: M1 S2 I13
> cluster 2: M2 S1 I23
>
> Questions:
> 1. If I upload objects from I13 of cluster 1, are they synced to
> cluster 2 from M1?

I'm assuming that I23's description should be "the third instance of cluster 2", not "the third instance of cluster 1". If so, the answer is no: you haven't configured I13 to replicate to I23.

Replication happens between zones. In this example, you'll have two replication agents running: one in cluster 2, copying data from M1 to S1, and one in cluster 1, copying data from M2 to S2. There's no reason you couldn't set up replication from I13 to I23 if you wanted to, but I don't see why you wouldn't just use M1 in that case.

> 2. In cluster 1, can I do some operations on the files synced from
> cluster 2 through M1 or I13?

In cluster 1, all operations you perform on M1 will be replicated to S1 in cluster 2: uploads, overwrites, and deletes of objects in M1 will all be applied to S1.

> 3. If I upload an object in cluster 1, the metadata will be synced to
> cluster 2 before the file data. If the metadata has been synced but the
> file data has not, and then cluster 1 goes down, the object hasn't been
> fully synced yet. If I then upload the same object in cluster 2, can it
> succeed?

Metadata is synced at pretty much the same time as data.
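For what it's worth, each of those replication agents is driven by its own small config file naming a source zone and a destination zone. This is only a sketch from memory, not your actual setup; all zone names, endpoints, keys, and paths below are placeholders:

```
# /etc/ceph/cluster1-to-cluster2.conf -- hypothetical radosgw-agent config
# for the M1 -> S1 direction (placeholder values throughout)
src_zone: m1
source: http://m1.cluster1.example.com:80
src_access_key: {source-access-key}
src_secret_key: {source-secret-key}
dest_zone: s1
destination: http://s1.cluster2.example.com:80
dest_access_key: {destination-access-key}
dest_secret_key: {destination-secret-key}
log_file: /var/log/radosgw/radosgw-sync-m1-s1.log
```

You'd run one agent per direction (e.g. `radosgw-agent -c /etc/ceph/cluster1-to-cluster2.conf`), so the M1->S1 and M2->S2 flows above are two separate agents with two separate config files.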
I tested replication by deliberately importing into the master zone faster than replication could handle. It will take the slave another two weeks to finish catching up; it has ~50% of the objects right now. If an object hasn't been replicated yet, the slave zone doesn't know it exists. Here's an object that I just created in the master zone:

clewis at clewis ~ (-) $ s3prod.master ls s3://live-23/17c23967ca275cf606f3cd5151b03d393eed836d754e670a00b878a4fe9abc73
2014-05-29 02:26   1354k  91decb5e8bc658079f030517937ff6b8  s3://live-23/17c23967ca275cf606f3cd5151b03d393eed836d754e670a00b878a4fe9abc73

clewis at clewis ~ (-) $ s3prod.slave ls s3://live-23/17c23967ca275cf606f3cd5151b03d393eed836d754e670a00b878a4fe9abc73

The slave has no record that this object exists (yet).

> I think it will fail. Cluster 2 has the metadata of the object and will
> consider the object to be in cluster 2, and this object was synced from
> cluster 1, so I have no permission to operate on it.
> Am I right?

As for whether or not you can upload that object to the slave zone: I replied with a lot of guesses in your other question, titled "Questions about zone and disaster recovery". My intuition is that once you do this, you're going to break replication. At that point, the slave becomes the new master, and you need to delete the old master and replicate back. This is pretty common in replication scenarios; I have to do this when my PostgreSQL servers fail over from master to secondary.

> Because of the limits on file operations in the slave zone, I think
> there will be some contradiction.
>
> Looking forward to your reply.
> Thanks!

--
Craig Lewis
Senior Systems Engineer
Office +1.714.602.1309
Email clewis at centraldesktop.com
Central Desktop.
Work together in ways you never thought possible.