rgw multisite sync not syncing data, error: RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards

Hey ceph-users,


I set up multisite sync between two freshly installed Octopus clusters.
In the first cluster I created a bucket with some data just to test the replication of actual data later.
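(Roughly like this, sketched here with s3cmd; the bucket and file names are just examples, and the exact client shouldn't matter:)

# s3cmd mb s3://testbucket
# s3cmd put testdata.bin s3://testbucket/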

I then followed the instructions on https://docs.ceph.com/en/octopus/radosgw/multisite/#migrating-a-single-site-system-to-multi-site to add a second zone.
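In short, on the second cluster that boils down to the following (a sketch of the steps from that page; URLs, keys, and the service instance are placeholders for my actual values):

# radosgw-admin realm pull --url=http://<master-endpoint>:80 \
      --access-key=<system-key> --secret=<system-secret>
# radosgw-admin zone create --rgw-zonegroup=obst-fra --rgw-zone=obst-az1 \
      --access-key=<system-key> --secret=<system-secret> \
      --endpoints=http://<second-endpoint>:80
# radosgw-admin period update --commit
# systemctl restart ceph-radosgw@<instance>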

Things went well: both zones now reach each other and the API endpoints are talking. The metadata is also already in sync; both sides are happy, and I can see bucket listings and users matching on both ends:


# radosgw-admin sync status
          realm 13d1b8cb-dc76-4aed-8578-2ce5d3d010e8 (obst)
      zonegroup 17a06c15-2665-484e-8c61-cbbb806e11d2 (obst-fra)
           zone 6d2c1275-527e-432f-a57a-9614930deb61 (obst-rgn)
  metadata sync no sync (zone is master)
      data sync source: c07447eb-f93a-4d8f-bf7a-e52fade399f3 (obst-az1)
                        init
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0...127]


and on the other side ...

# radosgw-admin sync status
          realm 13d1b8cb-dc76-4aed-8578-2ce5d3d010e8 (obst)
      zonegroup 17a06c15-2665-484e-8c61-cbbb806e11d2 (obst-fra)
           zone c07447eb-f93a-4d8f-bf7a-e52fade399f3 (obst-az1)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 6d2c1275-527e-432f-a57a-9614930deb61 (obst-rgn)
                        init
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0...127]



Newly created buckets (or rather: their metadata) are also synced.
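For example, a bucket created on the master shows up on the secondary right away:

# radosgw-admin bucket list
(run on obst-az1; it lists the bucket that was created on obst-rgn)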



What is apparently not working is the sync of the actual data.

Upon startup the radosgw on the second site shows:

2021-06-25T16:15:06.445+0000 7fe71eff5700  1 RGW-SYNC:meta: start
2021-06-25T16:15:06.445+0000 7fe71eff5700  1 RGW-SYNC:meta: realm epoch=2 period id=f4553d7c-5cc5-4759-9253-9a22b051e736
2021-06-25T16:15:11.525+0000 7fe71dff3700  0 RGW-SYNC:data:sync:init_data_sync_status: ERROR: failed to read remote data log shards
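As far as I understand, this status is fetched over HTTP from the source zone's /admin/log resource, so I assume plain reachability can be probed with something like this (unauthenticated, so even a 403 would at least show the endpoint answers; the URL is a placeholder):

# curl -i 'http://<obst-rgn-endpoint>:80/admin/log?type=data'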


Also, when issuing

# radosgw-admin data sync init --source-zone obst-rgn

it throws

2021-06-25T16:20:29.167+0000 7f87c2aec080 0 RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards
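I can re-run this with the usual Ceph debug switches for more detail if that helps, e.g.:

# radosgw-admin data sync init --source-zone obst-rgn --debug-rgw=20 --debug-ms=1

The sync error log would be another place I can pull output from:

# radosgw-admin sync error list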





Does anybody have any hints on where to look, or ideas about what could be broken here?

Thanks a bunch,
Regards


Christian




