radosgw multi-site: different period

Hi,

I've set up a Ceph multi-site configuration with two clusters, each
running one radosgw.
The multi-site setup was in sync, so I tried a failover: cluster A went
down, and I promoted zone b on cluster B to be the new master zone.
That worked fine (the commands are sketched below).

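For reference, the failover on cluster B followed the howto roughly
like this (the systemd unit name is my guess; yours may differ):

root@ceph-b-1:~# radosgw-admin zone modify --rgw-zone=b --master --default
root@ceph-b-1:~# radosgw-admin period update --commit
root@ceph-b-1:~# systemctl restart ceph-radosgw@rgw.ceph-b-1
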
Now I've started cluster A again and tried to switch the master zone
back to A. Cluster A believes it is the master and cluster B is the
secondary, but the secondary is on a different period, and the bucket
delta is not synced to the new master zone:

root@ceph-a-1:~# radosgw-admin sync status
          realm 833e65be-268f-42c2-8f3c-9bab83ebbff2 (myrealm)
      zonegroup 15550dc6-a761-473f-81e8-0dc6cc5106bd (ceph)
           zone 51019cee-86fb-4b39-b6ba-282171c459c6 (a)
  metadata sync no sync (zone is master)
      data sync source: 082cd970-bd25-4cbc-a5fd-20f3b3f9dbd2 (b)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

root@ceph-b-1:~# radosgw-admin sync status
          realm 833e65be-268f-42c2-8f3c-9bab83ebbff2 (myrealm)
      zonegroup 15550dc6-a761-473f-81e8-0dc6cc5106bd (ceph)
           zone 082cd970-bd25-4cbc-a5fd-20f3b3f9dbd2 (b)
  metadata sync syncing
                full sync: 0/64 shards
                master is on a different period: master_period=b7392c41-9cbe-4d92-ad03-db607dd7d569 local_period=d306a847-77a6-4306-87c9-0bb4fa16cdc4
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 51019cee-86fb-4b39-b6ba-282171c459c6 (a)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

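The two periods can be inspected on either cluster with the period
subcommands, e.g.:

root@ceph-b-1:~# radosgw-admin period list
root@ceph-b-1:~# radosgw-admin period get --period=d306a847-77a6-4306-87c9-0bb4fa16cdc4
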
How can I force a sync of the period and the bucket deltas?
I've used this howto: http://docs.ceph.com/docs/master/radosgw/multisite/
(the failback steps from it are sketched below).
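
The failback section of that howto boils down to roughly the following
on the recovered cluster; the endpoint URL, access key, and secret are
placeholders for the realm's system user:

root@ceph-a-1:~# radosgw-admin realm pull --url=http://ceph-b-1:8080 \
                 --access-key=<system-access-key> --secret=<system-secret>
root@ceph-a-1:~# radosgw-admin zone modify --rgw-zone=a --master --default
root@ceph-a-1:~# radosgw-admin period update --commit
root@ceph-a-1:~# systemctl restart ceph-radosgw@rgw.ceph-a-1

The howto also says to restart the gateways in the secondary zone
afterwards.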

br Kim