Re: rgw recovering shards

On 10/24/19 11:00 PM, Frank R wrote:
After upgrading a multisite RGW setup from 12.2.7 to 12.2.12 a few days ago, "sync status" has constantly shown a few "recovering shards", i.e.:

-----

#  radosgw-admin sync status
          realm 8f7fd3fd-f72d-411d-b06b-7b4b579f5f2f (prod)
      zonegroup 60a2cb75-6978-46a3-b830-061c8be9dc75 (prod)
           zone ffce148e-3b24-462d-98bf-8c212de31de5 (us-east-1)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 7fe96e52-d6f7-4ad6-b66e-ecbbbffbc18e (us-east-2)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 1 shards
                        behind shards: [48]
                        oldest incremental change not applied: 2019-10-21 22:34:11.0.293798s
                        5 shards are recovering
                        recovering shards: [11,37,48,110,117]


-----

This is the secondary zone. I am worried about the "oldest incremental change not applied" being from the 21st. Is there a way to have RGW stop trying to recover these shards and simply sync them from this point in time?
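
For reference, more detail on the recovering shards can be pulled from the sync error log and the per-source data sync status; both subcommands exist in Luminous, though the exact flags are worth confirming against `radosgw-admin help` on 12.2.12:

-----

#  radosgw-admin sync error list

#  radosgw-admin data sync status --source-zone=us-east-2

-----

The error list shows the entries the recovery logic keeps retrying, which usually explains why a shard stays in the recovering set.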


Did you run the new shard-maintenance commands after the upgrade (`reshard stale-instances list`, `reshard stale-instances rm`)?
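
If not, the invocation is roughly:

-----

#  radosgw-admin reshard stale-instances list

#  radosgw-admin reshard stale-instances rm

-----

These subcommands were added in the later 12.2.x releases to list and clean up leftover bucket index instances from resharding; it is worth checking the radosgw documentation for multisite caveats before running the rm step.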



k

