Re: Federated gateways

Hi Craig,
  I used 10 VMs for federated gateway testing: 5 nodes for us-east and 5 for us-west. The two zones are independent.
  Before configuring the region and zone, I set up both zones with the same 'client.radosgw.[zone]' sections in ceph.conf:
-----------------------------------------------------------------------------------------
[client.radosgw.us-east-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
host = node1-east
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /tmp/radosgw.us-east.sock
log file = /var/log/ceph/radosgw.us-east.log
rgw dns name = node1-east.ceph.com

[client.radosgw.us-west-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
host = node1-west
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /tmp/radosgw.us-west.sock
log file = /var/log/ceph/radosgw.us-west.log
rgw dns name = node1-west.ceph.com
-----------------------------------------------------------------------------------------
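For completeness, the region file loaded in step 1-(4) below follows the layout from the federated-config docs. This is a sketch with my hostnames filled in, not the exact file; note that 'log_meta' and 'log_data' need to be "true" in both zone entries, since data sync depends on them:

```json
{ "name": "us",
  "api_name": "us",
  "is_master": "true",
  "endpoints": ["http:\/\/node1-east.ceph.com:80\/"],
  "master_zone": "us-east",
  "zones": [
    { "name": "us-east",
      "endpoints": ["http:\/\/node1-east.ceph.com:80\/"],
      "log_meta": "true",
      "log_data": "true"},
    { "name": "us-west",
      "endpoints": ["http:\/\/node1-west.ceph.com:80\/"],
      "log_meta": "true",
      "log_data": "true"}],
  "placement_targets": [
    { "name": "default-placement",
      "tags": []}],
  "default_placement": "default-placement"}
```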

  The needed pools were created manually. I also created a normal user in both zones with the same access key and secret key. After that, I used 's3cmd' to create a bucket named 'BUCKETa' and put 'ceph.conf' into it to test synchronization. The 'radosgw-agent' log contained errors: I found that the 'ceph.conf' object was sent to the secondary zone and then DELETED for an unknown reason.
-----------------------------------------------------------------------------------------
application/json; charset=UTF-8
Fri, 06 Nov 2015 09:18:01 GMT
x-amz-copy-source:BUCKETa/ceph.conf
/BUCKETa/ceph.conf
2015-11-06 17:18:01,174 4558 [boto][DEBUG ] Signature:
AWS us_access_key:p6+AscqnndOpcWfJMBO7ADDpNek=
2015-11-06 17:18:01,175 4558 [boto][DEBUG ] url = 'http://node1-west.ceph.com/BUCKETa/ceph.conf'
params={'rgwx-op-id': 'admin1:4463:2', 'rgwx-source-zone': u'us-east', 'rgwx-client-id': 'radosgw-agent'}
headers={'Content-Length': '0', 'User-Agent': 'Boto/2.20.1 Python/2.7.6 Linux/3.13.0-66-generic', 'x-amz-copy-source': 'BUCKETa/ceph.conf', 'Date': 'Fri, 06 Nov 2015 09:18:01 GMT', 'Content-Type': 'application/json; charset=UTF-8', 'Authorization': 'AWS us_access_key:p6+AscqnndOpcWfJMBO7ADDpNek='}
data=None
2015-11-06 17:18:01,175 4558 [boto][DEBUG ] Method: PUT
2015-11-06 17:18:01,175 4558 [boto][DEBUG ] Path: /BUCKETa/ceph.conf?rgwx-op-id=admin1%3A4463%3A2&rgwx-source-zone=us-east&rgwx-client-id=radosgw-agent
2015-11-06 17:18:01,175 4558 [boto][DEBUG ] Data:
2015-11-06 17:18:01,176 4558 [boto][DEBUG ] Headers: {'Content-Type': 'application/json; charset=UTF-8', 'x-amz-copy-source': 'BUCKETa/ceph.conf'}
2015-11-06 17:18:01,176 4558 [boto][DEBUG ] Host: node1-west.ceph.com
2015-11-06 17:18:01,176 4558 [boto][DEBUG ] Port: 80
2015-11-06 17:18:01,176 4558 [boto][DEBUG ] Params: {'rgwx-op-id': 'admin1%3A4463%3A2', 'rgwx-source-zone': 'us-east', 'rgwx-client-id': 'radosgw-agent'}
2015-11-06 17:18:01,177 4558 [boto][DEBUG ] Token: None
2015-11-06 17:18:01,177 4558 [boto][DEBUG ] StringToSign:
PUT

application/json; charset=UTF-8
Fri, 06 Nov 2015 09:18:01 GMT
x-amz-copy-source:BUCKETa/ceph.conf
/BUCKETa/ceph.conf
2015-11-06 17:18:01,177 4558 [boto][DEBUG ] Signature:
AWS us_access_key:p6+AscqnndOpcWfJMBO7ADDpNek=
2015-11-06 17:18:01,203 4558 [radosgw_agent.worker][DEBUG ] object "BUCKETa/ceph.conf" not found on master, deleting from secondary
2015-11-06 17:18:01,203 4558 [boto][DEBUG ] path=/BUCKETa/
2015-11-06 17:18:01,203 4558 [boto][DEBUG ] auth_path=/BUCKETa/
2015-11-06 17:18:01,203 4558 [boto][DEBUG ] path=/BUCKETa/?max-keys=0
2015-11-06 17:18:01,204 4558 [boto][DEBUG ] auth_path=/BUCKETa/?max-keys=0
2015-11-06 17:18:01,204 4558 [boto][DEBUG ] Method: GET
2015-11-06 17:18:01,204 4558 [boto][DEBUG ] Path: /BUCKETa/?max-keys=0
2015-11-06 17:18:01,204 4558 [boto][DEBUG ] Data:
2015-11-06 17:18:01,204 4558 [boto][DEBUG ] Headers: {}
2015-11-06 17:18:01,205 4558 [boto][DEBUG ] Host: node1-west.ceph.com
2015-11-06 17:18:01,205 4558 [boto][DEBUG ] Port: 80
2015-11-06 17:18:01,205 4558 [boto][DEBUG ] Params: {}
2015-11-06 17:18:01,206 4558 [boto][DEBUG ] establishing HTTP connection: kwargs={'port': 80, 'timeout': 70}
2015-11-06 17:18:01,206 4558 [boto][DEBUG ] Token: None
2015-11-06 17:18:01,206 4558 [boto][DEBUG ] StringToSign:
-----------------------------------------------------------------------------------------
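To make the log above easier to read: boto is computing an AWS v2 signature over the StringToSign it prints (method, Content-MD5, Content-Type, Date, canonicalized x-amz-* headers, resource path). A minimal sketch of that computation, with a made-up secret key (the agent actually uses the zone user's us_secret_key):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, method, content_md5, content_type, date, amz_headers, resource):
    """Build the AWS v2 StringToSign and return the base64 HMAC-SHA1 signature."""
    lines = [method, content_md5, content_type, date]
    # Canonicalized x-amz-* headers: lowercased names, sorted, one per line.
    for name in sorted(amz_headers):
        lines.append("%s:%s" % (name.lower(), amz_headers[name]))
    lines.append(resource)
    string_to_sign = "\n".join(lines)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Reproducing the PUT from the log (hypothetical secret, so the
# signature will not match the one shown above):
sig = sign_v2(
    "not-the-real-secret",
    "PUT",
    "",  # no Content-MD5
    "application/json; charset=UTF-8",
    "Fri, 06 Nov 2015 09:18:01 GMT",
    {"x-amz-copy-source": "BUCKETa/ceph.conf"},
    "/BUCKETa/ceph.conf",
)
print("AWS us_access_key:" + sig)
```

This is only to decode the log; the StringToSign and Signature lines themselves look internally consistent here, so the interesting line is the agent's "object not found on master, deleting from secondary" decision that follows them.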

Any help would be much appreciated.

Best Regards,
Wdhwang


-----Original Message-----
From: Craig Lewis [mailto:clewis@xxxxxxxxxxxxxxxxxx] 
Sent: Saturday, November 07, 2015 3:59 AM
To: WD Hwang/WHQ/Wistron
Cc: Ceph Users
Subject: Re:  Federated gateways

You are updating [radosgw-admin] in ceph.conf, in steps 1.4 and 2.4?

I recall restarting things more often.  IIRC, I would restart everything after every regionmap update or a ceph.conf update.

I manually created every pool that was mentioned in the region and zone .json files.




On Thu, Nov 5, 2015 at 6:01 PM,  <WD_Hwang@xxxxxxxxxxx> wrote:
> Hi Craig,
>
> I am testing a federated gateway with 1 region and 2 zones, and I 
> found that only metadata is replicated; the data is NOT.
>
> According to your checklist, I am sure all items are checked. Could 
> you review my configuration scripts? The configuration files are 
> similar to http://docs.ceph.com/docs/master/radosgw/federated-config/.
>
>
>
> 1. For the master zone with 5 nodes of the region
>
> (1) create keyring
>
>   sudo ceph-authtool --create-keyring 
> /etc/ceph/ceph.client.radosgw.keyring
>
>   sudo chmod +r /etc/ceph/ceph.client.radosgw.keyring
>
>   sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n
> client.radosgw.us-east-1 --gen-key
>
>   sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n
> client.radosgw.us-west-1 --gen-key
>
>   sudo ceph-authtool -n client.radosgw.us-east-1 --cap osd 'allow rwx' 
> --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
>
>   sudo ceph-authtool -n client.radosgw.us-west-1 --cap osd 'allow rwx' 
> --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
>
>   sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add
> client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring
>
>   sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add
> client.radosgw.us-west-1 -i /etc/ceph/ceph.client.radosgw.keyring
>
>
>
> (2) modify Ceph cluster configurations and synchronize it
>
>   ceph-deploy --overwrite-conf config push node1 node2 node3 node4 
> node5
>
>
>
> (3) configure Apache
>
> (4) configure region
>
>   sudo apt-get install -y radosgw radosgw-agent python-pip
>
>   sudo radosgw-admin region set --infile /home/ceph/us.json --name
> client.radosgw.us-east-1
>
>   sudo radosgw-admin region set --infile /home/ceph/us.json --name
> client.radosgw.us-west-1
>
>   sudo rados -p .us.rgw.root rm region_info.default
>
>   sudo radosgw-admin region default --rgw-region=us --name
> client.radosgw.us-east-1
>
>   sudo radosgw-admin region default --rgw-region=us --name
> client.radosgw.us-west-1
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-west-1
>
>
>
> (5) create master zone
>
>   sudo radosgw-admin zone set --rgw-zone=us-east --infile 
> /home/ceph/us-east.json --name client.radosgw.us-east-1
>
>   sudo radosgw-admin zone set --rgw-zone=us-east --infile 
> /home/ceph/us-east.json --name client.radosgw.us-west-1
>
>   sudo radosgw-admin zone set --rgw-zone=us-west --infile 
> /home/ceph/us-west.json --name client.radosgw.us-east-1
>
>   sudo radosgw-admin zone set --rgw-zone=us-west --infile 
> /home/ceph/us-west.json --name client.radosgw.us-west-1
>
>   sudo rados -p .rgw.root rm zone_info.default
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-west-1
>
>
>
> (6) create master zone’s users
>
>   sudo radosgw-admin user create --uid="us-east" 
> --display-name="Region-us Zone-East" --name client.radosgw.us-east-1 
> --system --access_key=us_access_key --secret=us_secret_key
>
>   sudo radosgw-admin user create --uid="us-west" 
> --display-name="Region-us Zone-West" --name client.radosgw.us-west-1 
> --system --access_key=us_access_key --secret=us_secret_key
>
>
>
> (7) restart Ceph & apache2 & radosgw services
>
>
>
>
>
>
>
> 2. For the secondary zone with 5 nodes of the region
>
> (1) copy the keyring file 'ceph.client.radosgw.keyring' from master 
> zone and import the keyring
>
>   sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add
> client.radosgw.us-east-1 -i /etc/ceph/ceph.client.radosgw.keyring
>
>   sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add
> client.radosgw.us-west-1 -i /etc/ceph/ceph.client.radosgw.keyring
>
>
>
> (2) modify Ceph cluster configurations and synchronize it
>
> ceph-deploy --overwrite-conf config push node1 node2 node3 node4 node5
>
>
>
> (3) configure Apache
>
> (4) copy infile '/home/ceph/us.json' for the master zone and create 'us'
> region
>
>   sudo apt-get install -y radosgw radosgw-agent python-pip
>
>   sudo radosgw-admin region set --infile /home/ceph/us.json --name
> client.radosgw.us-east-1
>
>   sudo radosgw-admin region set --infile /home/ceph/us.json --name
> client.radosgw.us-west-1
>
>   sudo radosgw-admin region default --rgw-region=us --name
> client.radosgw.us-east-1
>
>   sudo radosgw-admin region default --rgw-region=us --name
> client.radosgw.us-west-1
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-west-1
>
>
>
> (5) create secondary zone
>
>   sudo radosgw-admin zone set --rgw-zone=us-east --infile 
> /home/ceph/us-east.json --name client.radosgw.us-east-1
>
>   sudo radosgw-admin zone set --rgw-zone=us-east --infile 
> /home/ceph/us-east.json --name client.radosgw.us-west-1
>
>   sudo radosgw-admin zone set --rgw-zone=us-west --infile 
> /home/ceph/us-west.json --name client.radosgw.us-east-1
>
>   sudo radosgw-admin zone set --rgw-zone=us-west --infile 
> /home/ceph/us-west.json --name client.radosgw.us-west-1
>
>   sudo rados -p .rgw.root rm zone_info.default
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-east-1
>
>   sudo radosgw-admin regionmap update --name client.radosgw.us-west-1
>
>
>
> (6) create secondary zone’s users
>
>   sudo radosgw-admin user create --uid="us-east" 
> --display-name="Region-us Zone-East" --name client.radosgw.us-east-1 
> --system --access_key=us_access_key --secret=us_secret_key
>
>   sudo radosgw-admin user create --uid="us-west" 
> --display-name="Region-us Zone-West" --name client.radosgw.us-west-1 
> --system --access_key=us_access_key --secret=us_secret_key
>
>
>
> (7) restart Ceph & apache2 & radosgw services
>
>
>
> Any help would be much appreciated.
>
>
>
> Best Regards,
>
> wdhwang
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com