Re: Ceph Radosgw multi zone data replication failure

> [root@us-east-1 ceph]# ceph -s --name client.radosgw.us-east-1

> [root@us-east-1 ceph]# ceph -s --name client.radosgw.us-west-1


Are you trying to set up two zones on one cluster?  That's possible, but you'll also want to spend some time on your CRUSH map making sure that the two zones are as independent as possible (no shared disks, etc.).
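
For example, a pair of CRUSH rules along these lines would keep each zone's pools on disjoint hardware. This is a sketch only: "us-east-root" and "us-west-root" are hypothetical per-zone subtrees you would define in your own hierarchy.

# Fragment of a decompiled CRUSH map (crushtool -d); recompile and
# inject with "ceph osd setcrushmap -i" as usual.
rule us-east-rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take us-east-root
        step chooseleaf firstn 0 type host
        step emit
}
rule us-west-rule {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take us-west-root
        step chooseleaf firstn 0 type host
        step emit
}

Then point each zone's pools at its own rule, e.g. "ceph osd pool set .us-east.rgw.buckets crush_ruleset 1" (pool name here is just an example).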

Are you using Civetweb or Apache + FastCGI?
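
If it's civetweb, each gateway's section in ceph.conf would carry an "rgw frontends" line along these lines (a sketch using the instance names from your commands; port 7480 matches your curl tests below):

[client.radosgw.us-east-1]
rgw frontends = civetweb port=7480

[client.radosgw.us-west-1]
rgw frontends = civetweb port=7480

With Apache + FastCGI you would instead have an "rgw socket path" setting and a matching Apache vhost.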

Can you include the output of the following (from both clusters):
radosgw-admin --name=client.radosgw.us-east-1 region get
radosgw-admin --name=client.radosgw.us-east-1 zone get
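
Redirecting each one to a file and diffing makes mismatches easy to spot, e.g. (same subcommands as above, run once per gateway identity):

radosgw-admin --name=client.radosgw.us-east-1 region get > east-region.json
radosgw-admin --name=client.radosgw.us-west-1 region get > west-region.json
diff east-region.json west-region.json

and likewise for "zone get". In particular the endpoints, the is_master flag, and the placement targets should agree.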

Double-check that both system users exist in both clusters, with the same access key and secret key.
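
A quick way to verify that ("user info" is a standard radosgw-admin subcommand; the uid here is hypothetical, substitute your actual system users):

radosgw-admin --name=client.radosgw.us-east-1 user info --uid=us-east
radosgw-admin --name=client.radosgw.us-west-1 user info --uid=us-east

The "keys" section of the two outputs must match exactly, and both should show the user flagged as a system user. Related: in the agent config you pasted below, src_secret_key still contains the JSON escape (\/) that radosgw-admin prints; the agent wants the raw key, so that line should probably read:

src_secret_key: 0xQR6PINk23W/GYrWJ14aF+1stG56M6xMkqkdloO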




On Sun, Apr 26, 2015 at 8:01 AM, Vickey Singh <vickey.singh22693@xxxxxxxxx> wrote:

Hello Geeks


I am trying to set up Ceph Radosgw multi-site data replication using the official documentation: http://ceph.com/docs/master/radosgw/federated-config/#multi-site-data-replication


Everything seems to work except the radosgw-agent sync. Please check the outputs below and help me in any way possible.


Environment:


CentOS 7.0.1406

Ceph version 0.87.1

Rados Gateway configured using Civetweb



Radosgw zone list: works nicely


[root@us-east-1 ceph]# radosgw-admin zone list --name client.radosgw.us-east-1

{ "zones": [

        "us-west",

        "us-east"]}

[root@us-east-1 ceph]#


Curl request to master zone: works nicely


[root@us-east-1 ceph]# curl http://us-east-1.crosslogic.com:7480

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

[root@us-east-1 ceph]#


Curl request to secondary zone: works nicely


[root@us-east-1 ceph]# curl http://us-west-1.crosslogic.com:7480

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

[root@us-east-1 ceph]#


Rados Gateway agent configuration file: seems correct, no typos


[root@us-east-1 ceph]# cat cluster-data-sync.conf

src_access_key: M7QAKDH8CYGTK86CG93U

src_secret_key: 0xQR6PINk23W\/GYrWJ14aF+1stG56M6xMkqkdloO

destination: http://us-west-1.crosslogic.com:7480

dest_access_key: ZQ32ES1WAWPG05YMZ7T7

dest_secret_key: INvk8AkrZRsejLEL34yRpMLmOqydt8ncOXy4RHCM

log_file: /var/log/radosgw/radosgw-sync-us-east-west.log

[root@us-east-1 ceph]#


Rados Gateway agent sync: fails. However, it can fetch the region map, so I think the src and dest keys are correct, but I don't know why it fails with an AttributeError.


[root@us-east-1 ceph]# radosgw-agent -c cluster-data-sync.conf

region map is: {u'us': [u'us-west', u'us-east']}

Traceback (most recent call last):

  File "/usr/bin/radosgw-agent", line 21, in <module>

    sys.exit(main())

  File "/usr/lib/python2.7/site-packages/radosgw_agent/cli.py", line 275, in main

    except client.ClientException as e:

AttributeError: 'module' object has no attribute 'ClientException'

[root@us-east-1 ceph]#
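
If I read the traceback right, the failure is in the except clause itself: the radosgw_agent.client module apparently has no ClientException attribute, so whatever error the agent actually hit gets masked by the AttributeError. A quick check of that assumption (module path taken from the traceback):

python -c "from radosgw_agent import client; print(hasattr(client, 'ClientException'))"

If that prints False, the crash is a bug in the packaged radosgw-agent 1.2.1 rather than in the configuration, and the real sync error is hidden behind it.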


Can query the Ceph cluster using the us-east-1 ID


[root@us-east-1 ceph]# ceph -s --name client.radosgw.us-east-1

    cluster 9609b429-eee2-4e23-af31-28a24fcf5cbc

     health HEALTH_OK

     monmap e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}, election epoch 448, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3

     osdmap e1063: 9 osds: 9 up, 9 in

      pgmap v8473: 1500 pgs, 43 pools, 374 MB data, 2852 objects

            1193 MB used, 133 GB / 134 GB avail

                1500 active+clean

[root@us-east-1 ceph]#


Can query the Ceph cluster using the us-west-1 ID


[root@us-east-1 ceph]# ceph -s --name client.radosgw.us-west-1

    cluster 9609b429-eee2-4e23-af31-28a24fcf5cbc

     health HEALTH_OK

     monmap e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}, election epoch 448, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3

     osdmap e1063: 9 osds: 9 up, 9 in

      pgmap v8473: 1500 pgs, 43 pools, 374 MB data, 2852 objects

            1193 MB used, 133 GB / 134 GB avail

                1500 active+clean

[root@us-east-1 ceph]#


I hope these packages are correct


[root@us-east-1 ceph]# rpm -qa | egrep -i "ceph|radosgw"

libcephfs1-0.87.1-0.el7.centos.x86_64

ceph-common-0.87.1-0.el7.centos.x86_64

python-ceph-0.87.1-0.el7.centos.x86_64

ceph-radosgw-0.87.1-0.el7.centos.x86_64

ceph-release-1-0.el7.noarch

ceph-0.87.1-0.el7.centos.x86_64

radosgw-agent-1.2.1-0.el7.centos.noarch

[root@us-east-1 ceph]#



Regards

VS


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
