Re: Bucket sync policy

I have basically given up relying on bucket sync to work properly in Quincy. I have been running a cron job that manually syncs files between datacentres to catch the files that don't get replicated. It's pretty inefficient, but at least all the files get to the backup datacentre.
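For anyone curious, a minimal sketch of the idea, assuming rclone with an S3 remote configured against each site's RGW endpoint (remote and bucket names here are placeholders):

  # /etc/cron.d/rgw-catchup-sync: hourly catch-up copy between sites
  # "primary" and "backup" are rclone S3 remotes for each datacentre's RGW
  0 * * * * root rclone sync primary:test-bucket backup:test-bucket --checksum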

Would love to have this working properly.

On 2023-04-24 16:56, Matt Benjamin wrote:
I'm unclear whether all of this currently works on upstream Quincy
(apologies if all such backports have been done). You might retest against
Reef or the ceph/main branch.

Matt

On Mon, Apr 24, 2023 at 2:52 PM Yixin Jin <yjin77@xxxxxxxx> wrote:

  Actually, "bucket sync run" somehow made it worse since now the
destination zone shows "bucket is caught up with source" from "bucket sync
status" even though it clearly missed an object.

On Monday, April 24, 2023 at 02:37:46 p.m. EDT, Yixin Jin <yjin77@xxxxxxxx> wrote:

An update:

After creating and enabling the bucket sync policy, I ran "bucket sync
markers" and saw that each shard had the status "init". Running "bucket
sync run" then marked the status as "incremental-sync", which suggests it
went through the full-sync stage. However, the lone object in the source
zone wasn't synced over to the destination zone.
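For reference, those were invocations roughly of the form below (bucket and
zone names as in the setup described further down):

  radosgw-admin bucket sync markers --bucket=test-bucket --source-zone=z0
  radosgw-admin bucket sync run --bucket=test-bucket --source-zone=z0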
I actually used gdb to step through radosgw-admin while running "bucket
sync run". It doesn't appear to do anything for full sync: it printed a log
line saying "finished iterating over all available prefixes:..." and broke
out of the do-while loop right after the call to
prefix_handler.revalidate_marker(&list_marker). That call returned false
because it couldn't find any rules from the sync pipe. I haven't drilled
deeper to see why it didn't get the rules, whatever that means.
Nevertheless, the workaround with "bucket sync run" doesn't seem to work,
at least not with Quincy.

Regards,
Yixin

On Monday, April 24, 2023 at 12:37:24 p.m. EDT, Soumya Koduri <skoduri@xxxxxxxxxx> wrote:

  On 4/24/23 21:52, Yixin Jin wrote:
Hello ceph gurus,

We are trying the bucket-specific sync policy feature with the Quincy
release and we've encountered something strange. Our test setup is very
simple: I use mstart.sh to spin up 3 clusters and configure them with a
single realm, a single zonegroup and 3 zones – z0, z1, z2 – with z0 being
the master. I created a zonegroup-level sync policy with status “allowed”,
a symmetrical flow among all 3 zones and a pipe allowing all zones to all
zones. I then created a single bucket “test-bucket” at z0 and uploaded a
single object to it. At this point there should be no sync, since the
policy is only “allowed”; indeed, the object exists only in z0 and “bucket
sync status” shows that sync is disabled. Finally, I created a
bucket-specific sync policy with status “enabled” and a pipe between z0 and
z1 only. I expected sync to kick off between z0 and z1, and I did see from
“sync info” that the sources/dests are z0/z1. “bucket sync status” also
shows the source zone and source bucket. At z0 it shows everything is
caught up, but at z1 it shows that one shard is behind, which is expected
since the only object exists in z0 but not in z1.
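For reference, the policies were created with commands along these lines,
following the multisite sync-policy documentation (group/flow/pipe ids are
illustrative):

  # zonegroup-level policy: "allowed", symmetrical flow among all three
  # zones, and a pipe allowing all zones to all zones
  radosgw-admin sync group create --group-id=group1 --status=allowed
  radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror \
      --flow-type=symmetrical --zones=z0,z1,z2
  radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
      --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
  radosgw-admin period update --commit

  # bucket-specific policy: "enabled", with a pipe between z0 and z1 only
  radosgw-admin sync group create --bucket=test-bucket \
      --group-id=test-bucket-group --status=enabled
  radosgw-admin sync group pipe create --bucket=test-bucket \
      --group-id=test-bucket-group --pipe-id=test-bucket-pipe \
      --source-zones=z0 --dest-zones=z1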


Now here comes the strange part. Although z1 shows that one shard is
behind, it doesn't seem to make any progress on syncing it. It doesn't
appear to do any full sync at all, since “bucket sync status” shows “full
sync: 0/11 shards”. There can't have been a full sync, since otherwise z1
would already have that lone object. It stays stuck in this condition
indefinitely until I upload the same object again. I suspect the update of
the object generates a new datalog entry, which triggers the sync. Why
wasn't there a full sync, and how can one force one?
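The status quoted above is from a command of this form, run against each
zone in turn:

  radosgw-admin bucket sync status --bucket=test-bucket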

Yes, this is a known issue with bucket-level sync policy that is yet to be
addressed (https://tracker.ceph.com/issues/57489). The interim workaround
to sync existing objects is to either

* create new objects (or)

* execute "bucket sync run"

after creating/enabling the bucket policy.

Please note that this issue is specific to bucket-level policy; it doesn't
exist for a sync policy set at the zonegroup level.


Thanks,

Soumya




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



