On Tue, Feb 5, 2019 at 3:35 PM Ryan <rswagoner@xxxxxxxxx> wrote:
I've been trying to configure the cloud sync module to push changes to an Amazon S3 bucket, without success. I've configured the module according to the docs, using the trivial configuration settings. Is there an error log I should be checking? Is "radosgw-admin sync status --rgw-zone=mycloudtierzone" the correct command to check status?

Thanks,
Ryan
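P.S. In case my setup is the problem: this is roughly how I created the cloud tier zone, following the trivial configuration example in the docs (the zonegroup name, endpoint, keys, and port below are placeholders, not my real values):

  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=mycloudtierzone \
      --endpoints=http://rgw-host:8000 --tier-type=cloud
  radosgw-admin zone modify --rgw-zonegroup=default --rgw-zone=mycloudtierzone \
      --tier-config=connection.endpoint=https://s3.amazonaws.com,connection.access_key=<access-key>,connection.secret=<secret>
  radosgw-admin period update --commit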
It turns out I can get it to sync as long as I leave "radosgw-admin --rgw-zone=aws-docindex data sync run --source-zone=default" running. I thought that with Mimic the sync was built into the ceph-radosgw service? I'm running version 13.2.4. I'm also seeing these errors on the console while running that command:
2019-02-05 17:40:10.679 7fb1ef06b680 0 meta sync: ERROR: RGWBackoffControlCR called coroutine returned -2
2019-02-05 17:40:10.694 7fb1ef06b680 0 RGW-SYNC:data:sync:shard[25]: ERROR: failed to read remote data log info: ret=-2
2019-02-05 17:40:10.695 7fb1ef06b680 0 meta sync: ERROR: RGWBackoffControlCR called coroutine returned -2
2019-02-05 17:40:10.711 7fb1ef06b680 0 RGW-SYNC:data:sync:shard[43]: ERROR: failed to read remote data log info: ret=-2
2019-02-05 17:40:10.712 7fb1ef06b680 0 meta sync: ERROR: RGWBackoffControlCR called coroutine returned -2
2019-02-05 17:40:10.720 7fb1ef06b680 0 meta sync: ERROR: RGWBackoffControlCR called coroutine returned -2
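My working theory, from the multisite docs, is that the sync threads run inside a radosgw daemon whose rgw_zone is set to the cloud tier zone, so perhaps I'm just missing a second instance for that zone. Something like this in ceph.conf (the section name and port are my guesses, not from the docs):

  [client.rgw.aws-docindex]
  rgw zone = aws-docindex
  rgw frontends = civetweb port=8001
  # sync threads are on by default; shown here for clarity
  rgw run sync thread = true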
Additionally "radosgw-admin --rgw-zone=aws-docindex data sync error list --source-zone=default" is showing numerous error code 39 responses/
"message": "failed to sync bucket instance: (39) Directory not empty"
"message": "failed to sync object(39) Directory not empty"
When it successfully completes, I see the following:
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: af57fe9a-43a7-4998-9574-4016f5fa6661 (default)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
When I stop the "data sync run", the status just sits at:
      data sync source: af57fe9a-43a7-4998-9574-4016f5fa6661 (default)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 1 shards
                        behind shards: [75]
                        oldest incremental change not applied: 2019-02-05 17:44:51.0.367478s
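To figure out what shard 75 is stuck on, I've been comparing the source zone's data log position against the sync state, along these lines (again taken from "radosgw-admin help"; I haven't verified every flag):

  # data log markers on the source zone
  radosgw-admin --rgw-zone=default datalog status
  # per-shard sync state as seen from the cloud tier zone
  radosgw-admin --rgw-zone=aws-docindex data sync status --source-zone=default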
Thanks,
Ryan