[DR] master is on a different period

Hi Guys,

    After a disaster recovery process in which we made a Secondary zone the
Master and the old Master a Secondary zone, we could see that metadata
stopped syncing between the clusters, and new buckets or users are no
longer replicated to the Secondary zone.
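
For context, making a secondary zone the master normally involves something
like the following on the new Master cluster (a sketch of the usual
procedure, not necessarily exactly what we ran; the service name is just an
example and may differ per deployment):

    # on the new Master cluster (us-west-1): flag the zone as master/default
    radosgw-admin zone modify --rgw-zone=us-west-1 --master --default
    # commit a new period reflecting the change
    radosgw-admin period update --commit
    # restart the gateways so they pick up the new period
    systemctl restart ceph-radosgw.target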

*Version Running: 10.2.6*

Running "radosgw-admin sync status" on Master cluster, We got:

  realm 7792a922-cfc6-4eb0-a01d-e4ba327ee8ad (am)
      zonegroup 8585c1cb-fae0-4d2f-bf16-48b6072c8587 (us)
           zone 3dac5e2c-a17b-4d2e-a3e1-3ad6cb7cf72f (us-west-1)
  metadata sync no sync (zone is master)
      data sync source: 6ae07871-424f-4cf8-8aaa-e1b84e4babdf (us-east-1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

And on the Secondary:

      realm 7792a922-cfc6-4eb0-a01d-e4ba327ee8ad (am)
      zonegroup 8585c1cb-fae0-4d2f-bf16-48b6072c8587 (us)
           zone 6ae07871-424f-4cf8-8aaa-e1b84e4babdf (us-east-1)
  metadata sync syncing
                full sync: 0/64 shards
                master is on a different period: master_period=942e0826-7026-4aad-95d1-6ddd58e7de30 local_period=2c3962e2-acc3-41e5-b625-34fcc6f8176a
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 3dac5e2c-a17b-4d2e-a3e1-3ad6cb7cf72f (us-west-1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

Here we can see the Secondary cluster reporting that the Master is on a
different period. However, this is not true, because both clusters have the
same current period:

 radosgw-admin period get-current
{
    "current_period": "942e0826-7026-4aad-95d1-6ddd58e7de30"
}
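
In case it is relevant, the fuller period/realm state can be dumped and
compared on both clusters as well, e.g.:

    # dump the full current period (master_zone, epoch, period_map, ...)
    radosgw-admin period get
    # list all period ids known locally
    radosgw-admin period list
    # show the realm and its current_period
    radosgw-admin realm get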

Apparently this was a bug that was fixed in 10.2.6, but it is still happening:

http://tracker.ceph.com/issues/18684

Do you have any idea or a workaround to get the Secondary cluster working
again?
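
Would something along these lines be the right direction, i.e. re-pulling
the period on the Secondary and restarting its gateways? The endpoint and
keys below are only placeholders for the multisite system user:

    # on the Secondary cluster (us-east-1): re-pull the current period from the Master
    radosgw-admin period pull --url=http://<master-rgw-endpoint>:8080 \
        --access-key=<access-key> --secret=<secret>
    # restart the gateways so they load the pulled period
    systemctl restart ceph-radosgw.target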


Best Regards,
Daniel