On 2025-03-12 19:33, Gilles Mocellin wrote:
Hello Cephers,
Since I haven't made any progress on my side, I'm sharing my problem again, hoping for some clues.
---
Since I was not confident in my replication status, I did a sync init in both of my zones, one after the other.
Since then, there are stale recovering shards.
Incremental replication seems OK.
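Concretely, that was something like the following on the Z1 side (and the mirror of it on Z2); the restart line is my understanding of what is needed after a re-init, not something I'm certain about:
$ sudo radosgw-admin data sync init --source-zone=Z2
$ sudo systemctl restart ceph-radosgw.target   # as far as I know, the local RGW daemons must be restarted for the re-init to take effect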
[...]
Something I've seen: some syncs are recovering buckets that don't exist anymore:
gmo_admin@fidcl-lyo1-sto-sds-04:~$ sudo radosgw-admin data sync status --shard-id=27 --source-zone=Z2
{
    "shard_id": 27,
    "marker": {
        "status": "incremental-sync",
        "marker": "G00000000000000000001@00000000000000000011:00000000000003113194",
        "next_step_marker": "",
        "total_entries": 0,
        "pos": 0,
        "timestamp": "2025-03-14T14:49:26.071005Z"
    },
    "pending_buckets": [],
    "recovering_buckets": [
        "replic_cfn_int/cfb:aefd4003-1866-4b16-b1b3-2f308075cd1c.1287122.1:1[full]",
        "replic_cfn_ppr/cfb:aefd4003-1866-4b16-b1b3-2f308075cd1c.9069679.40:1[full]",
        "replic_cfn_prod/cfb:aefd4003-1866-4b16-b1b3-2f308075cd1c.10182006.1:1[full]",
        "replic_cfn_prod/cfb:aefd4003-1866-4b16-b1b3-2f308075cd1c.8348421.2:1[full]",
        "replic_cfn_rec/cfb:aefd4003-1866-4b16-b1b3-2f308075cd1c.6369873.98:1[full]"
    ],
    "current_time": "2025-03-14T15:51:42Z"
}
And:
$ sudo radosgw-admin bucket sync status --bucket="replic_cfn_int/cfb"
gives nothing (it does give output for other buckets).
Bucket stats also fails:
$ sudo radosgw-admin bucket stats --bucket=replic_cfn_int/cfb
failure: (2002) Unknown error 2002:
That bucket (in several tenants/envs) was our first try, before we moved to creating many buckets to spread the objects.
At the time it was deleted, we didn't have multisite yet, and it lived on Z2, which was running Octopus.
Since then, we have enabled multisite with Z1, switched the master to Z1, and upgraded to Pacific, then Quincy, and recently Reef.
That could be part of the problem if there are stale objects/omaps left behind.
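To check for such leftovers, I was planning to look at the bucket instance metadata for that old bucket, something along these lines (the instance ID is the one from the recovering_buckets list above; I'm not 100% sure of the exact metadata key format for tenanted buckets, and I haven't tried removing anything yet):
$ sudo radosgw-admin metadata list bucket.instance | grep cfb
$ sudo radosgw-admin metadata get bucket.instance:replic_cfn_int/cfb:aefd4003-1866-4b16-b1b3-2f308075cd1c.1287122.1
Does that sound like a sensible direction?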