RadosGW replication and failover issues

Hi,
We are running the following radosgw (Luminous 12.2.8) replication scenario:
1) We have 2 clusters, each running a radosgw. Cluster1 is defined as the master zone and Cluster2 as the slave.
2) We create a number of buckets with objects via both the master and the slave.
3) We shut down Cluster1.
4) We execute a failover on Cluster2:
       radosgw-admin zone modify --master --default
       radosgw-admin period update --commit
5) We create some new buckets and delete some of the existing buckets that were created in step 2.
6) We restart Cluster1 and execute:
       radosgw-admin realm pull
       radosgw-admin period pull
7) We see that the resync finishes successfully, and Cluster1 is now defined as the slave and Cluster2 as the master.
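For reference, this is roughly the full command sequence we use for the failover and recovery above. The zone name, endpoint URL, and credentials are placeholders; substitute your actual values:

```shell
# On Cluster2 (the surviving site): promote its zone to master/default.
# --rgw-zone is a placeholder; use the real zone name.
radosgw-admin zone modify --rgw-zone=zone2 --master --default
radosgw-admin period update --commit
# Restart the radosgw daemons on Cluster2 so they pick up the new period.

# On Cluster1, after it comes back up: pull the realm and current period
# from Cluster2 so Cluster1 rejoins as a non-master zone.
radosgw-admin realm pull --url=http://cluster2-rgw:8080 \
    --access-key=<system-user-access-key> --secret=<system-user-secret>
radosgw-admin period pull --url=http://cluster2-rgw:8080 \
    --access-key=<system-user-access-key> --secret=<system-user-secret>
```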

The issue is that Cluster1 still shows the buckets that were deleted in step 5 (while that cluster was down). We waited a while in case there were leftover objects that GC still needed to delete, but even after a few hours those buckets are still visible in Cluster1 and not visible in Cluster2.
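In case it helps, this is how we compare the state on the two clusters (run on each site):

```shell
# Show the metadata/data sync state of this zone relative to its peers.
radosgw-admin sync status

# List the bucket metadata entries and buckets known locally, so the
# output can be diffed between Cluster1 and Cluster2.
radosgw-admin metadata list bucket
radosgw-admin bucket list
```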

We also tried:
6) We restart Cluster1 and execute only:
       radosgw-admin period pull
But then sync is stuck: both clusters are defined as masters, and Cluster1's current period is the one before the last period of Cluster2.
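To see the period divergence, we inspect the period state on each cluster like this:

```shell
# Show the current period (including which zone it names as master).
radosgw-admin period get

# List all period ids known locally; comparing this output between the
# clusters shows Cluster1 lagging one period behind Cluster2.
radosgw-admin period list

# Show any staged (uncommitted) period changes.
radosgw-admin period get --staged
```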

How can we fix this issue? Is there some configuration command that should be run during failover?

Thanks,
Ronnie Lazar
R&D


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


