Sorry for the late response, and thank you for picking up my question. I wanted to put together some more detailed information; here it is, please have a look, and I'd very much appreciate any help: https://tracker.ceph.com/issues/49075

-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Monday, February 1, 2021 4:59 PM
To: ceph-users@xxxxxxx
Subject: [Suspicious newsletter] Re: Multisite recovering shards

Hi,

> We are using octopus 15.2.7 for bucket sync with symmetrical replication.

Replication is asynchronous with both CephFS and RGW, so if your clients keep writing new data into the cluster, as you state, the sync status will always stay a little bit behind. I have two one-node test clusters with no client traffic where the sync status is actually up to date:

siteb:~ # radosgw-admin sync status
          realm c7d5fd30-9c06-46a1-baf4-497f95bf3abc (masterrealm)
      zonegroup 68adec15-aace-403d-bd63-f5182a6437b1 (master-zonegroup)
           zone 69329911-c3b0-48c3-a359-7f6214e0480c (siteb-zone)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 0fb33fa1-8110-4179-ae45-acf5f5f825c5 (sitea-zone)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source

Quoting "Szabo, Istvan (Agoda)" <Istvan.Szabo@xxxxxxxxx>:

> Hi,
>
> I’ve never seen a healthy output in our multisite sync status; almost
> all the sync shards are recovering.
>
> What can I do with the recovering shards?
>
> We have 1 realm, 1 zonegroup, and inside the zonegroup we have 3 zones
> in 3 different geo locations.
>
> We are using octopus 15.2.7 for bucket sync with symmetrical replication.
>
> The user is currently migrating their data, and the sites are always
> behind the site where the data was uploaded.
>
> I’ve restarted all RGWs and disabled/enabled bucket sync, and it started
> to work, but I think once the sync gets close to catching up it will
> stop again due to the recovering shards.
>
> Any idea?
>
> Thank you
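
In case it helps, the kind of commands involved here (overall and per-shard sync status, error listing, and the bucket sync disable/enable cycle mentioned in the quoted message) look roughly like the sketch below. This is from memory rather than a copy-paste of our shell history, and <bucket>, <source-zone> and <shard-id> are placeholders, not our real names:

# overall sync state of the local zone
radosgw-admin sync status

# per-shard detail for one of the recovering data sync shards
radosgw-admin data sync status --source-zone=<source-zone> --shard-id=<shard-id>

# any sync errors recorded so far
radosgw-admin sync error list

# per-bucket sync state, and the disable/enable cycle
radosgw-admin bucket sync status --bucket=<bucket>
radosgw-admin bucket sync disable --bucket=<bucket>
radosgw-admin bucket sync enable --bucket=<bucket>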