How to reset and configure replication on multiple RGW servers from scratch?

Hi,

For testing purposes, I configured RGW multisite synchronization between two Ceph Mimic 13.2.6 clusters (I also tried 13.2.5).
Now I want to reset all current settings and configure replication from scratch.

Data (pools, buckets) in the master zone will not be deleted.
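For reference, the pre-reset configuration can be inspected with the usual read-only commands, e.g. (run against the master side):

# radosgw-admin sync status
# radosgw-admin period get
# radosgw-admin zonegroup get --rgw-zonegroup=master_zonegroup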

What has been done:
1) Deleted the secondary zone
# radosgw-admin zone delete --rgw-zone=dc2_zone

2) Removed the secondary zone from zonegroup
# radosgw-admin zonegroup remove --rgw-zonegroup=master_zonegroup --rgw-zone=dc2_zone

3) Committed the changes
# radosgw-admin period update --commit

4) Trimmed all datalogs on master zone
# radosgw-admin datalog trim --start-date="2019-06-12 12:01:54" --end-date="2019-06-22 12:01:56"

5) Trimmed all error sync on master zone
# radosgw-admin sync error trim --start-date="2019-06-07 07:19:26" --end-date="2019-06-22 15:59:00"

6) Deleted and recreated empty pools on the secondary cluster (see the sketch after this list):
    dc2_zone.rgw.control
    dc2_zone.rgw.meta
    dc2_zone.rgw.log
    dc2_zone.rgw.buckets.index
    dc2_zone.rgw.buckets.data
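
For step 6 the pools were removed and recreated roughly like this (a sketch: the pg_num of 64 is a placeholder, not the value actually used; the same pattern was applied to the other four pools, and pool deletion also needs mon_allow_pool_delete=true on the monitors):

# ceph osd pool delete dc2_zone.rgw.buckets.data dc2_zone.rgw.buckets.data --yes-i-really-really-mean-it
# ceph osd pool create dc2_zone.rgw.buckets.data 64 64
# ceph osd pool application enable dc2_zone.rgw.buckets.data rgw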

Should I clear any other data or metadata in the master zone?
Could any state remain somewhere in the master zone that might affect the new replication setup?

I'm trying to track down a problem with blocked shard synchronization.
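For context, I check the shard state on the secondary with, e.g. (the shard id and source zone name below are placeholders):

# radosgw-admin sync status
# radosgw-admin data sync status --source-zone=<master zone> --shard-id=0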


Thank you in advance for your help.

Best regards,

Piotr Osiński

