radosgw-admin sync status takes ages to print output

Hello,

I have a 3-DC Octopus multisite setup with a bucket sync policy applied.

I have 2 buckets where I've set the shard count to 24,000 on one and 9,000 on the other, because the users want to use a single bucket each but with a huge number of objects (2,400,000,000 and 900,000,000), and in a multisite setup we need to preshard the buckets as described in the documentation.
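(For context, by "preshard" I mean setting the bucket index shard count before the bucket is created; the exact mechanism isn't really the point of my question, but roughly it was something along these lines, using the 24,000 value from above:

ceph config set client.rgw rgw_override_bucket_index_max_shards 24000

and then letting the client create the bucket as usual.)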

Do I need to fine-tune something on the syncing side to make this query faster?
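Or is there a narrower query I should be running instead of the full sync status, for example per bucket or per source zone? Something along these lines (bucket name omitted here; hkg is one of the source zones shown below):

radosgw-admin bucket sync status --bucket=<bucket-name>
radosgw-admin data sync status --source-zone=hkg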
This is the output after 5-10 minutes of query time. I'm honestly not sure whether it looks healthy or not; I haven't really found a good explanation of this output in the Ceph documentation.

From the master zone I can't really query it at all because it times out, but in the secondary zone I can see this:


radosgw-admin sync status
          realm 5fd28798-9195-44ac-b48d-ef3e95caee48 (realm)
      zonegroup 31a5ea05-c87a-436d-9ca0-ccfcbad481e3 (data)
           zone 61c9d940-fde4-4bed-9389-edc8d7741817 (sin)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 9213182a-14ba-48ad-bde9-289a1c0c0de8 (hkg)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                        oldest incremental change not applied: 2021-01-14T12:01:00.131104+0700 [11]
                source: f20ddd64-924b-4f78-8d2d-dd6c65f98ba9 (ash)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 128 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                        oldest incremental change not applied: 2021-01-14T12:05:26.879014+0700 [98]
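
If it helps, the next things I was planning to check on the secondary are the sync error log and the datalog shard markers, along the lines of:

radosgw-admin sync error list
radosgw-admin datalog status

but I'm not sure how to map those back to the "behind shards" list above.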



I hope I can find an expert in the multisite area 😊

Thank you in advance.
