[RGW] Full replication gives stale recovering shards

Hello Cephers,
 
Since I was not confident in my replication status, I ran a radosgw-admin sync init in each of my two zones, one after the other.
Since then, there have been stale recovering shards.
Incremental replication seems OK.
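For clarity, by "sync init" I mean something along these lines, run against each zone in turn (the zone name below is a placeholder, and I may be misremembering the exact flags):

    radosgw-admin data sync init --source-zone=<peer-zone>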
 
Cluster A sync status: 
 
          realm 421c9bbd-83cc-4d85-a25a-ca847f225bfe (R)
      zonegroup 2c5ccf52-88be-4b2f-b4b9-c55f73bbacd1 (ZG)
           zone 3e3b0a2b-1864-4923-93a6-237a13b51594 (Z1)
   current time 2025-02-06T19:03:10Z
zonegroup features enabled: resharding
                   disabled: compress-encrypted
  metadata sync no sync (zone is master)
      data sync source: aefd4003-1866-4b16-b1b3-2f308075cd1c (Z2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 3 shards
                        behind shards: [65,66,77]
                        oldest incremental change not applied: 2025-02-06T20:02:53.441838+0100 [66]
                        111 shards are recovering
                        recovering shards: [0,1,2,3,4,5,6,7,8,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,
                        54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,
                        102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
 
Cluster B sync status:
 
          realm 421c9bbd-83cc-4d85-a25a-ca847f225bfe (R)
      zonegroup 2c5ccf52-88be-4b2f-b4b9-c55f73bbacd1 (ZG)
           zone aefd4003-1866-4b16-b1b3-2f308075cd1c (Z2)
   current time 2025-02-06T19:03:37Z
zonegroup features enabled: resharding
                   disabled: compress-encrypted
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 3e3b0a2b-1864-4923-93a6-237a13b51594 (Z1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 7 shards
                        behind shards: [58,59,60,62,64,66,67]
                        oldest incremental change not applied: 2025-02-06T20:03:27.831651+0100 [60]
                        81 shards are recovering
                        recovering shards: [0,1,2,3,4,5,6,7,8,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,
                        54,57,58,59,60,61,62,63,64,65,66,67,68,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,
                        124,126,127]
 
Restarting the full sync does nothing, except eventually adding more recovering shards!

I also have some large omap objects in the Z1.rgw.log pool, where I see a lot of objects with names beginning with bucket.sync-status.*.
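(They are easy to spot with a plain listing, something along these lines; Z1.rgw.log stands for the real pool name:

    rados -p Z1.rgw.log ls | grep '^bucket.sync-status'
)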
 
What can I do to clear those recovering shards?
Some trimming (bilog?), or deleting objects in the Z1.rgw.log pool?
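To be concrete, these are the kinds of commands I have in mind, but I have not run any of them yet and the exact arguments are guesses on my part:

    # trim the bucket index log of a given bucket
    radosgw-admin bilog trim --bucket=<bucket-name>
    # trim the data sync error log
    radosgw-admin sync error trim
    # or, more drastically, delete stale bucket sync status objects directly
    rados -p Z1.rgw.log rm bucket.sync-status.<object-name>

Is any of these safe and appropriate here, or is there a better way?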
 
--  
Gilles
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



