Hi Yixin,
On 4/25/23 00:21, Yixin Jin wrote:
> Actually, "bucket sync run" somehow made things worse: the destination zone now shows "bucket is caught up with source" in "bucket sync status" even though it clearly missed an object.
> On Monday, April 24, 2023 at 02:37:46 p.m. EDT, Yixin Jin <yjin77@xxxxxxxx> wrote:
>> An update:
>>
>> After creating and enabling the bucket sync policy, I ran "bucket sync markers" and saw that each shard had the status "init". Running "bucket sync run" afterwards changed the status to "incremental-sync", which suggests it went through the full-sync stage. However, the lone object in the source zone was not synced over to the destination zone.
>>
>> I used gdb to step through radosgw-admin while it ran "bucket sync run". It doesn't appear to do anything for full-sync: it printed a log line saying "finished iterating over all available prefixes:..." and broke out of the do-while loop after the call to prefix_handler.revalidate_marker(&list_marker). That call returned false because it couldn't find any rules from the sync pipe. I haven't drilled deeper to see why it didn't get the rules, whatever that means. In any case, the "bucket sync run" workaround doesn't seem to work, at least not with Quincy.
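For reference, the bucket sync state can be re-checked with something like the following; "testbucket" and "zone1" here are placeholders, and the exact flags can vary between releases:

  # Per-bucket sync status, run from the destination zone:
  radosgw-admin bucket sync status --bucket=testbucket

  # Per-shard sync markers (the "init" / "incremental-sync" state):
  radosgw-admin bucket sync markers --bucket=testbucket --source-zone=zone1

  # Manual sync pass for the bucket:
  radosgw-admin bucket sync run --bucket=testbucket --source-zone=zone1
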
As Matt mentioned, we have been fixing a couple of issues related to
sync policy lately (e.g., https://tracker.ceph.com/issues/58518). Could
you please re-test on the current mainline?
I tested this scenario:
1) Create Zonegroup Policy and set it to Allowed
2) Create 'testobject' on the primary zone
3) Create Bucket level policy and set it to Enabled
4) Check if 'testobject' is synced to secondary zone
As expected, the object was not synced initially, but after running the
"bucket sync run" command from the secondary zone, it got synced. The
sequence I used is sketched below.
Let me know if I missed any step from your testcase.
Thanks,
Soumya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx