Hello Christopher,

We had something similar on a Pacific multi-site. In our case the problem was leftover bucket metadata, and it was solved with "radosgw-admin metadata list ..." and "radosgw-admin metadata rm ..." on the master, for the non-existent bucket.

Best regards,
Konstantin

On Tue, 2024-04-30 at 21:42 +0000, Christopher Durham wrote:
>
> Hi,
> I have a Reef cluster, 18.2.2, on Rocky 8.9. This cluster has been
> upgraded from Pacific -> Quincy -> Reef over the past few years. It is a
> multi-site with one other cluster that works fine with s3/radosgw on
> both sides, with proper bidirectional data replication.
> In one of the master cluster's radosgw logs, I noticed a sync request
> regarding a deleted bucket. I am not sure when this error started,
> but I know that the bucket in question was deleted long before
> the upgrade to Reef. Perhaps this error existed prior to Reef; I do
> not know. Here is the error in the radosgw log:
> :get_bucket_index_log_status ERROR:
> rgw_read_bucket_full_sync_status() on pipe{s={b=BUCKET_NAME:CLUSTERID
> ..., z=...., az= ...},d={b=..,az=...}} returned ret=-2
> My understanding: s=source, d=destination, each of which is a tuple
> with the appropriate info.
>
> This happens for BUCKET_NAME every few minutes. The bucket does not
> exist on either side of the multi-site, but did in the past.
> Is there any way I can force radosgw to stop trying to replicate it?
> Thanks
> -Chris
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
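
For reference, the cleanup Konstantin describes can be sketched as a dry-run script. This is a sketch under assumptions, not his exact commands: BUCKET_NAME is a placeholder for the bucket named in the sync error, and the script only echoes each radosgw-admin command so nothing is removed until you have reviewed the entries and swapped the preview for real execution on the master zone.

```shell
#!/bin/sh
# Dry-run sketch of the metadata cleanup for a deleted bucket.
# BUCKET is a placeholder (assumption); substitute the bucket from the
# sync error. Review every entry before removing anything.
BUCKET="BUCKET_NAME"

# Preview only: echoes each command instead of running it.
# Replace the echo with "$@" to actually execute on the master zone.
run() { echo "+ $*"; }

# 1. Look for leftover metadata entries for the deleted bucket:
run radosgw-admin metadata list bucket
run radosgw-admin metadata list bucket.instance

# 2. Remove the stale entry for the non-existent bucket:
run radosgw-admin metadata rm "bucket:${BUCKET}"
```

If `metadata list bucket.instance` also shows stale instance entries for the bucket, those can be removed the same way with their full `bucket.instance:` keys.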