Sorry for the delay, I'm still catching up since the OpenStack conference.

Does the system user for the destination zone exist with the same access key and secret in the source zone? If you enable 'debug rgw = 30' on the destination, you can see why the copy_obj from the source zone is failing.

Josh
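A minimal sketch of those two checks (the instance names, zone names, and uid below are illustrative, not taken from this thread):

    # ceph.conf on the destination zone's gateway host
    [client.radosgw.us-west-1]
        rgw zone = us-west
        debug rgw = 30
        log file = /var/log/ceph/client.radosgw.us-west-1.log

    # Compare the system user's credentials as each site sees them;
    # access_key and secret_key must match on both sides.
    # On the source site's cluster:
    radosgw-admin user info --uid=sync-user
    # On the destination site's cluster:
    radosgw-admin user info --uid=sync-user

On 11/11/2013 12:52 AM, maoqi1982 wrote: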
Hi list,

The ceph version is the latest, v0.72 (Emperor). Following the http://ceph.com/docs/master/radosgw/federated-config/ doc, I deployed two ceph clusters (one cluster per data site) to form a region (a master zone and a slave zone). Metadata seems to sync OK, but objects fail to sync. The errors are as follows:

INFO:radosgw_agent.worker:6053 is processing shard number 47
INFO:radosgw_agent.worker:finished processing shard 47
INFO:radosgw_agent.sync:48/128 items processed
INFO:radosgw_agent.worker:6053 is processing shard number 48
INFO:radosgw_agent.worker:bucket instance "east-bucket:us-east.4139.1" has 5 entries after "00000000002.2.3"
INFO:radosgw_agent.worker:syncing bucket "east-bucket"
ERROR:radosgw_agent.worker:failed to sync object east-bucket/驽??docx:
ERROR:radosgw_agent.worker:failed to sync object east-bucket/sss.py: state is error
ERROR:radosgw_agent.worker:failed to sync object east-bucket/Nfg.docx: state is error
INFO:radosgw_agent.worker:finished processing shard 48
INFO:radosgw_agent.worker:6053 is processing shard number 49
INFO:radosgw_agent.sync:49/128 items processed
INFO:radosgw_agent.sync:50/128 items processed
INFO:radosgw_agent.worker:finished processing shard 49
INFO:radosgw_agent.worker:6053 is processing shard number 50
INFO:radosgw_agent.worker:finished processing shard 50
INFO:radosgw_agent.sync:51/128 items processed
INFO:radosgw_agent.worker:6053 is processing shard number 51
INFO:radosgw_agent.worker:finished processing shard 51
INFO:radosgw_agent.sync:52/128 items processed
INFO:radosgw_agent.worker:6053 is processing shard number 52
INFO:radosgw_agent.sync:53/128 items processed
INFO:radosgw_agent.worker:finished processing shard 52

thanks
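For context, data sync in this kind of federated setup is driven by radosgw-agent with a small YAML config, roughly per the federated-config doc linked above (field names as that doc described them; the zone names, endpoints, and keys below are placeholders, and the keys must belong to the system user shared by both zones):

    # region-data-sync.conf (illustrative)
    src_zone: us-east
    source: http://rgw-east.example.com:80
    src_access_key: {system-user-access-key}
    src_secret_key: {system-user-secret-key}
    dest_zone: us-west
    destination: http://rgw-west.example.com:80
    dest_access_key: {system-user-access-key}
    dest_secret_key: {system-user-secret-key}
    log_file: /var/log/radosgw/radosgw-sync-us-east-west.log

    # started with something like:
    radosgw-agent -c region-data-sync.conf

A mismatch in those system-user keys between the two zones is one common cause of object sync failures like the ones above, which is what Josh's question is getting at.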