Re: radosgw multisite sync - how to fix data behind shards?

Hi,

I'd say it's almost impossible, even if that answer isn't helpful. I have never been able to clear those up, though the data itself is there. What about in your situation, is the data present on the sync site?

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Wyll Ingersoll <wyllys.ingersoll@xxxxxxxxxxxxxx>
Sent: Wednesday, June 8, 2022 11:42 PM
To: ceph-users@xxxxxxx
Cc: dev@xxxxxxx
Subject:  radosgw multisite sync - how to fix data behind shards?

________________________________

Seeking help from a radosgw expert...

I have a 3-zone multisite configuration (all running pacific 16.2.9) with 1 bucket per zone and a couple of small objects in each bucket for testing purposes.
One of the secondary zones cannot seem to get into sync with the master; sync status reports:


  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: a6ed5947-0ceb-407b-812f-347fab2ef62d (zone-1)
                        syncing
                        full sync: 128/128 shards
                        full sync: 66 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
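
In case it helps with debugging output like the above, the per-shard detail and any recorded sync errors can usually be inspected with commands along these lines (a sketch; the source zone name and shard id here are taken from the status output, adjust for your setup):

```shell
# List any errors the sync process has recorded for this zone
radosgw-admin sync error list

# Inspect one data sync shard against the source zone
# (zone-1 and shard 0 are from the status output above)
radosgw-admin data sync status --source-zone=zone-1 --shard-id=0
```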


I have tried using "data sync init" and restarting the radosgw multiple times, but that does not seem to be helping in any way.
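
For reference, the sequence I'd expect for a data sync re-init is roughly the following. This is a sketch based on the documented multisite recovery procedure; the systemd unit name is a common default and may differ on your hosts:

```shell
# Stop the gateways in the out-of-sync zone first, so no sync
# threads are running while the sync markers are reset
systemctl stop ceph-radosgw.target

# Reset the data sync state against the source zone; this marks
# all shards for a fresh full sync but copies no data itself
radosgw-admin data sync init --source-zone=zone-1

# Restart the gateways; the running radosgw processes then
# perform the actual full sync in the background
systemctl start ceph-radosgw.target
```

If the init is run while gateways are still up, they can race with the marker reset, which may be why repeated attempts show no progress.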

If I manually do "radosgw-admin data sync run --bucket bucket-1" - it just hangs forever and doesn't appear to do anything.  Checking the sync status never shows any improvement in the shards.

It is very hard to figure out what to do, as there are several sync commands - bucket sync, data sync, metadata sync - and it is not clear what effect each has or how to run them properly when syncing gets confused.
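
For what it's worth, the three command families operate at different scopes. A rough map (a sketch; the bucket and zone names are placeholders from this thread):

```shell
# Metadata sync: users, buckets, and other metadata pulled
# from the master zone
radosgw-admin metadata sync status

# Data sync: object data, tracked per source zone across
# the 128 data log shards
radosgw-admin data sync status --source-zone=zone-1

# Bucket sync: the per-bucket sync state within data sync
radosgw-admin bucket sync status --bucket=bucket-1
```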

Any guidance on how to get out of this situation would be greatly appreciated. I've read many threads in various mailing list archives (via Google search), and very few of them have a resolution or recommendation confirmed to have fixed these sorts of problems.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



