Re: multi-site replication not syncing metadata


We encountered broken multisite syncing when using v16.2.6. There was an issue where the pg_autoscaler defaults were changed, which resulted in the otp pools not being created on each cluster (and possibly other pools as well). You really only notice the pool is missing when you go to sync metadata. This was fixed in v16.2.7; I'm not sure whether 17.2.0 was affected, as I don't see it mentioned in the release notes.
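
A quick way to check for this is to see whether the otp pool exists at all on each cluster, and to create it by hand if it doesn't. A rough sketch, using the pool name from later in this thread (your zone's pool prefix may differ):

# does an rgw otp pool exist on this cluster?
$ ceph osd pool ls | grep rgw

# if not, create it and tag it for rgw use
$ ceph osd pool create default.rgw.otp
$ ceph osd pool application enable default.rgw.otp rgw
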
________________________________
From: Matthew Darwin <bugs@xxxxxxxxxx>
Sent: Monday, July 4, 2022 3:52 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  Re: multi-site replication not syncing metadata

I did manage to get this working. Not sure what exactly fixed it, but
creating the pool "default.rgw.otp" helped.  Why are missing pools not
automatically created?

I also ran these:

radosgw-admin sync status
radosgw-admin metadata sync run
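
For anyone else hitting this: the pools a zone expects are listed in its config, so a rough way to spot a missing one is to compare that list against the pools that actually exist. A sketch only, with the zone name taken from the status output below:

# pools the gateway expects for this zone (the *_pool entries in the JSON)
$ radosgw-admin zone get --rgw-zone=slave-1 | grep pool

# pools that actually exist on the cluster
$ ceph osd pool ls | grep rgw

Anything named in the zone config but absent from the second list apparently has to be created by hand before metadata sync gets anywhere, at least on the versions discussed here.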

On 2022-06-20 19:26, Matthew Darwin wrote:
> Hi all,
>
> Running into some trouble. I just setup ceph multi-site
> replication.  Good news is that it is syncing the data. But the
> metadata is NOT syncing.
>
> I was trying to follow the instructions from here:
> https://docs.ceph.com/en/quincy/radosgw/multisite/#create-a-secondary-zone
>
> I see there is an open issue on syncing; not sure if this is related:
> https://github.com/ceph/ceph/pull/46148
>
> I'm using: ceph version 17.2.0
> (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
>
> Any suggestions?
>
> BTW, docs seem a bit out of date. I opened an issue:
> https://tracker.ceph.com/issues/56131
>
>
> $ radosgw-admin sync status (from slave)
>
>          realm bbd51090-42ff-4795-adea-4b9dbaaf573e (XXXX)
>      zonegroup d66ae4f6-c090-40c6-b05f-eeaa9c279e45 (XXXX-1)
>           zone 7bb893c7-23a4-4c86-99f6-71aeec8209d5 (slave-1)
>  metadata sync preparing for full sync
>                full sync: 64/64 shards
>                full sync: 0 entries to sync
>                incremental sync: 0/64 shards
>                metadata is behind on 64 shards
>                behind shards:
> [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
>      data sync source: 661233b5-b3da-4c3f-a401-d6874c11cdb8 (master-1)
>                        syncing
>                        full sync: 0/128 shards
>                        incremental sync: 128/128 shards
>                        data is behind on 9 shards
>                        behind shards: [23,33,36,39,47,83,84,96,123]
>
>                        oldest incremental change not applied:
> 2022-06-20T22:51:43.724881+0000 [23]
>
> $ radosgw-admin sync status (from master)
>           realm bbd51090-42ff-4795-adea-4b9dbaaf573e (XXXX)
>       zonegroup d66ae4f6-c090-40c6-b05f-eeaa9c279e45 (XXXX-1)
>            zone 661233b5-b3da-4c3f-a401-d6874c11cdb8 (master-1)
>   metadata sync no sync (zone is master)
> 2022-06-20T23:03:14.408+0000 7f1e8e309840  0 ERROR: failed to fetch
> datalog info
>       data sync source: 7bb893c7-23a4-4c86-99f6-71aeec8209d5 (slave-1)
>                         failed to retrieve sync info: (13)
> Permission denied
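
On the "Permission denied" from the master side: that is often just a knock-on effect of metadata sync not working, because the system user created on the master only reaches the secondary via metadata sync, so the secondary can't authenticate the master's requests yet. A rough way to check, with "sync-user" standing in for whatever uid was actually created:

# on the secondary: has the system/replication user arrived yet?
$ radosgw-admin user list
$ radosgw-admin user info --uid=sync-user

# keys the secondary zone is configured to use
$ radosgw-admin zone get --rgw-zone=slave-1 | grep -A2 system_key
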
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx