Issues with federated gateway sync

Okay, I created a new user and had it sync successfully.

I created a bucket and it replicated correctly.

I uploaded a file successfully to the master zone, yet I'm still getting an
error like this:

application/json; charset=UTF-8
Tue, 22 Jul 2014 20:08:26 GMT
x-amz-copy-source:testfolder%2F12g_bios_tuning_for_performance_power.pdf
/testfolder/12g_bios_tuning_for_performance_power.pdf
2014-07-22T15:08:26.987 9407:DEBUG:boto:url = '
http://10.30.3.178/testfolder/12g_bios_tuning_for_performance_power.pdf'
params={'rgwx-op-id': 'storage1:8432:1', 'rgwx-source-zone': u'us-west',
'rgwx-client-id': 'radosgw-agent'}
headers={'Content-Length': '0', 'User-Agent': 'Boto/2.2.2 (linux2)',
'x-amz-copy-source':
'testfolder%2F12g_bios_tuning_for_performance_power.pdf', 'Date': 'Tue, 22
Jul 2014 20:08:26 GMT', 'Content-Type': 'application/json; charset=UTF-8',
'Authorization': 'AWS E5Q306BGYZ4RETMOEWYA:V979FIGM7vGZNLW0SmTVokZJ61U='}
data=None
2014-07-22T15:08:26.988 9407:INFO:urllib3.connectionpool:Starting new HTTP
connection (1): 10.30.3.178
2014-07-22T15:08:27.288 9407:DEBUG:urllib3.connectionpool:"PUT
/testfolder/12g_bios_tuning_for_performance_power.pdf?rgwx-op-id=storage1%3A8432%3A1&rgwx-source-zone=us-west&rgwx-client-id=radosgw-agent
HTTP/1.1" 403 78
2014-07-22T15:08:27.288 9407:DEBUG:radosgw_agent.worker:exception during
sync: Http error code 403 content <?xml version="1.0"
encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>
2014-07-22T15:08:27.289 9407:DEBUG:boto:StringToSign:
GET

I'm at a loss, since the bucket syncs over okay. Any other ideas to check?

Thanks again!

Justice


On Tue, Jul 22, 2014 at 11:22 AM, Bachelder, Kurt <
Kurt.Bachelder at sierra-cedar.com> wrote:

>  One thing I noticed was that you were trying to write a file when you
> got the HTTP 403: was the bucket created in the master zone and then
> replicated to the slave zone? Or did the bucket already exist in both? Or
> something else?
>
>
>
> Which user are you using to PUT the data?  System user or non-system
> user?  Was the user created in the master zone and allowed to replicate to
> the slave?
>
>
>
> If you're sure the keys and endpoints are correct, maybe do this (there's a
> rough command sketch after the list):
>
>
>
> - Start up both zones.
>
> - Start up replication.
>
> - Create a NEW NON-SYSTEM user in the master zone, and allow it to
>   replicate to the slave zone.
>   - Use radosgw-admin to get the user info from the slave zone to ensure
>     the user was created.
>
> - Create a new bucket as that user in the master zone, and allow it to
>   replicate to the slave zone.
>   - Connect via an S3 client to the slave zone as the non-system user to
>     see if the bucket is there.
>
> - Retry the PUT to the master zone as the non-system user and allow it to
>   replicate to the slave zone.
>   - Connect via an S3 client to the slave zone as the non-system user to
>     see if the data is there.
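>
> A rough command sketch of that sequence (the user/bucket names, file name,
> and client.radosgw.* instance names below are just placeholders -- adjust
> them to your setup):
>
> # master zone: create a plain non-system test user
> radosgw-admin user create --uid="synctest" --display-name="Sync Test" \
>     --name client.radosgw.us-west-1
>
> # once metadata sync has run, confirm the user exists in the slave zone
> radosgw-admin user info --uid="synctest" --name client.radosgw.us-east-1
>
> # as that user, create a bucket and PUT an object against the MASTER zone
> # (s3cmd shown here; any S3 client configured with the test user's keys
> # and the master-zone endpoint will do)
> s3cmd mb s3://synctest-bucket
> s3cmd put testfile.bin s3://synctest-bucket/
>
> # then point the same client at the SLAVE zone endpoint and check that the
> # bucket and the object are there
> s3cmd ls s3://synctest-bucket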
>
>
>
> If you have issues, let us know at which step and we can go from there!
>
>
>
> K
>
>
>
> *From:* Justice London [mailto:justice at ministrycentered.com]
> *Sent:* Tuesday, July 22, 2014 12:27 PM
> *To:* Bachelder, Kurt
> *Cc:* Yehuda Sadeh; ceph-users at lists.ceph.com
>
> *Subject:* Re: [ceph-users] Issues with federated gateway sync
>
>
>
> Unfortunately I had tried that and was able to create a bucket on both
> ends and add an item to it with the user. Here are my us, us-west and
> us-east configs: http://pastebin.com/QwnKvvwD
>
> The us configuration is the same between east and west (west is supposed
> to be the master zone). I have also double-checked and both east/west users
> have system user permissions set.
>
> Thanks for the help so far!
>
> Justice
>
>
>
>
>
> On Tue, Jul 22, 2014 at 6:09 AM, Bachelder, Kurt <
> Kurt.Bachelder at sierra-cedar.com> wrote:
>
>  We had similar issues with our configuration. Since you're getting an
> HTTP 403, it seems like something is misconfigured with the system account,
> or with the destination zone.  I would recommend using the access/secret
> keys with an S3 client (CloudBerry Explorer, s3cmd, or whatever) to do some
> troubleshooting against your destination zone.  The "system" flag allows
> that specific user to write data to a non-master zone (non-system users get
> an HTTP 403, so double-check that the system flag is on with radosgw-admin
> user info).  If that is set and you're still getting an HTTP 403, there's
> likely an issue with the slave zone configuration or region map.
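>
> For example, something like this (the uid and the --name instance are
> placeholders for whatever your replication user and gateway instances are
> called):
>
> # check the output to confirm the system flag is set on the replication
> # user in BOTH zones
> radosgw-admin user info --uid="us-west" --name client.radosgw.us-west-1
> radosgw-admin user info --uid="us-east" --name client.radosgw.us-east-1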
>
>
>
> Once you get to the point where your system user is able to write to the
> slave zone directly (without sync), sync should work for you without
> issues.
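>
> A quick way to test that with s3cmd (everything below is a sketch -- use
> the system user's keys and the slave-zone gateway address from your agent
> config, 10.30.3.178 in your case, e.g. via ~/.s3cfg):
>
> # ~/.s3cfg for this test, roughly:
> #   host_base   = 10.30.3.178
> #   host_bucket = 10.30.3.178
> #   access_key  = <system_access_key>
> #   secret_key  = <system_secret_key>
> s3cmd mb s3://directtest
> s3cmd put testfile.bin s3://directtest/
> s3cmd ls s3://directtest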
>
>
>
> Kurt
>
>
>
> *From:* ceph-users [mailto:ceph-users-bounces at lists.ceph.com] *On Behalf
> Of *Justice London
> *Sent:* Monday, July 21, 2014 4:36 PM
> *To:* Yehuda Sadeh
> *Cc:* ceph-users at lists.ceph.com
> *Subject:* Re: [ceph-users] Issues with federated gateway sync
>
>
>
>  I did. The users were created as such in the east/west zones (per the
> example federated gateway configuration):
>
> radosgw-admin user create --uid="us-east" --display-name="Region-US Zone-East" --name client.radosgw.us-east-1 --system
>
> radosgw-admin user create --uid="us-west" --display-name="Region-US Zone-West" --name client.radosgw.us-west-1 --system
>
> Also, sorry, the zone names in the default.conf are us-west and us-east.
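>
> For completeness, the zone names and endpoints each gateway actually
> reports can be checked with something like this (the instance names are
> the ones from my ceph.conf):
>
> radosgw-admin region get --name client.radosgw.us-west-1
> radosgw-admin region get --name client.radosgw.us-east-1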
>
>
>
> This is also logged in the radosgw-agent log:
>
> Mon, 21 Jul 2014 20:26:20 GMT
> x-amz-copy-source:testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
> /testfolder/ArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
>
>
>
> 2014-07-21T15:26:20.598 24627:DEBUG:boto:url = 'http://10.30.3.178/testfolder/ArcherC7v1_en_3_13_34_up_boot%28140402%29.bin'
>
>
>
> params={'rgwx-op-id': 'storage1:24575:1', 'rgwx-source-zone': u'us-west', 'rgwx-client-id': 'radosgw-agent'}
> headers={'Content-Length': '0', 'User-Agent': 'Boto/2.2.2 (linux2)', 'x-amz-copy-source': 'testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin', 'Date': 'Mon, 21 Jul 2014 20:26:20 GMT', 'Content-Type': 'application/json; charset=UTF-8', 'Authorization': 'AWS <sync_id>:<sync_key>
>
>
>
> data=None
> 2014-07-21T15:26:20.599 24627:INFO:urllib3.connectionpool:Starting new HTTP connection (1): 10.30.3.178
> 2014-07-21T15:26:20.925 24627:DEBUG:urllib3.connectionpool:"PUT /testfolder/ArcherC7v1_en_3_13_34_up_boot%28140402%29.bin?rgwx-op-id=storage1%3A24575%3A1&rgwx-source-zone=us-west&rgwx-client-id=radosgw-agent HTTP/1.1" 403 78
>
>
>
> 2014-07-21T15:26:20.925 24627:DEBUG:radosgw_agent.worker:exception during sync: Http error code 403 content <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>
>
>
>
> 2014-07-21T15:26:20.926 24627:DEBUG:boto:StringToSign:
> GET
>
>
>
> Justice
>
>
>
>
>
> On Mon, Jul 21, 2014 at 1:28 PM, Yehuda Sadeh <yehuda at redhat.com> wrote:
>
>  On Mon, Jul 21, 2014 at 1:07 PM, Justice London
> <justice at ministrycentered.com> wrote:
> > Hello, I am having issues getting FG working between east/west
> > data-center test configurations. I have the sync default.conf configured
> > like this:
> >
> > source: "http://10.20.2.39:80";
> > src_zone: "us-west-1"
> > src_access_key: <src_key>
> > src_secret_key: <src_key>
> > destination: "http://10.30.3.178:80";
> > dest_zone: "us-east-1"
> > dest_access_key: <dest_key>
> > dest_secret_key: <dest_key>
> > log_file: /var/log/radosgw/radosgw-sync-us-east-west.log
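> >
> > I start the agent against that file with roughly this (the path is just
> > where the config lives):
> >
> > radosgw-agent -c /etc/ceph/radosgw-agent/default.conf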
> >
> > No real errors are logged on the agent end, but I see the following on
> > the remote radosgw end:
> > 2014-07-21 15:01:13.346569 7fc5deffd700  1 ====== starting new request
> > req=0x7fc5e000fcf0 =====
> > 2014-07-21 15:01:13.346947 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:450
> > testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin [call
> > version.read,getxattrs,stat] 6.44385098 ack+read e66) v4 -- ?+0
> > 0x7fc57c01cdc0 con 0x20dba80
> > 2014-07-21 15:01:13.348006 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 99 ==== osd_op_reply(450
> > testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
> > [call,getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory))
> v6
> > ==== 309+0+0 (375136675 0 0) 0x7fc5f4005b90 con 0x20dba80
> > 2014-07-21 15:01:13.348299 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:451 testfolder [call
> > version.read,getxattrs,stat] 6.62cce9f7 ack+read e66) v4 -- ?+0
> > 0x7fc57c01cc10 con 0x20dba80
> > 2014-07-21 15:01:13.349174 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 100 ==== osd_op_reply(451 testfolder
> > [call,getxattrs,stat] v0'0 uv1 ondisk = 0) v6 ==== 261+0+139 (3119832768
> 0
> > 2317765080) 0x7fc5f4005a00 con 0x20dba80
> > 2014-07-21 15:01:13.349324 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:452 testfolder [call
> > version.check_conds,call version.read,read 0~524288] 6.62cce9f7 ack+read
> > e66) v4 -- ?+0 0x7fc57c01cc10 con 0x20dba80
> > 2014-07-21 15:01:13.350009 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 101 ==== osd_op_reply(452 testfolder
> > [call,call,read 0~140] v0'0 uv1 ondisk = 0) v6 ==== 261+0+188
> (1382517052 0
> > 1901701781) 0x7fc5f4000fd0 con 0x20dba80
> > 2014-07-21 15:01:13.350122 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- osd_op(client.7160.0:453
> > .bucket.meta.testfolder:us-west.20011.1 [call
> version.read,getxattrs,stat]
> > 6.1851d0ad ack+read e66) v4 -- ?+0 0x7fc57c01d780 con 0x20dba80
> > 2014-07-21 15:01:13.350914 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.0 10.30.3.178:6800/3700 102 ==== osd_op_reply(453
> > .bucket.meta.testfolder:us-west.20011.1 [call,getxattrs,stat] v0'0 uv1
> > ondisk = 0) v6 ==== 290+0+344 (1757888169 0 2994068559) 0x7fc5f4000fd0
> con
> > 0x20dba80
> > 2014-07-21 15:01:13.351131 7fc5deffd700  0 WARNING: couldn't find acl
> header
> > for bucket, generating default
> > 2014-07-21 15:01:13.351177 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.22:6800/12749 -- osd_op(client.7160.0:454 admin [getxattrs,stat]
> > 8.8cee537f ack+read e66) v4 -- ?+0 0x7fc57c023a10 con 0x20e4010
> > 2014-07-21 15:01:13.352755 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.1 10.30.0.22:6800/12749 150 ==== osd_op_reply(454 admin
> [getxattrs,stat]
> > v0'0 uv1 ondisk = 0) v6 ==== 214+0+91 (3932713703 0 605478480)
> > 0x7fc5fc001130 con 0x20e4010
> > 2014-07-21 15:01:13.352843 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.22:6800/12749 -- osd_op(client.7160.0:455 admin [read 0~524288]
> > 8.8cee537f ack+read e66) v4 -- ?+0 0x7fc57c023810 con 0x20e4010
> > 2014-07-21 15:01:13.353679 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.1 10.30.0.22:6800/12749 151 ==== osd_op_reply(455 admin [read 0~313]
> > v0'0 uv1 ondisk = 0) v6 ==== 172+0+313 (855218883 0 3348830508)
> > 0x7fc5fc001130 con 0x20e4010
> > 2014-07-21 15:01:13.354106 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.23:6800/28001 -- osd_op(client.7160.0:456
> statelog.obj_opstate.57
> > [call statelog.add] 10.bb49d85f ondisk+write e66) v4 -- ?+0
> 0x7fc57c02b090
> > con 0x20e0a70
> > 2014-07-21 15:01:13.363690 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.2 10.30.0.23:6800/28001 103 ==== osd_op_reply(456
> > statelog.obj_opstate.57 [call] v66'47 uv47 ondisk = 0) v6 ==== 190+0+0
> > (4198807369 0 0) 0x7fc604005300 con 0x20e0a70
> > 2014-07-21 15:01:13.363928 7fc5deffd700  0 > HTTP_DATE -> Mon Jul 21
> > 20:01:13 2014
> > 2014-07-21 15:01:13.363947 7fc5deffd700  0 > HTTP_X_AMZ_COPY_SOURCE ->
> > testfolder%2FArcherC7v1_en_3_13_34_up_boot%28140402%29.bin
> > 2014-07-21 15:01:13.520133 7fc5deffd700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.23:6800/28001 -- osd_op(client.7160.0:457
> statelog.obj_opstate.57
> > [call statelog.add] 10.bb49d85f ondisk+write e66) v4 -- ?+0
> 0x7fc57c023870
> > con 0x20e0a70
> > 2014-07-21 15:01:13.524531 7fc62fa63700  1 -- 10.30.3.178:0/1028990 <==
> > osd.2 10.30.0.23:6800/28001 104 ==== osd_op_reply(457
> > statelog.obj_opstate.57 [call] v66'48 uv48 ondisk = 0) v6 ==== 190+0+0
> > (518743807 0 0) 0x7fc6040072d0 con 0x20e0a70
> > 2014-07-21 15:01:13.524723 7fc5deffd700  1 ====== req done
> > req=0x7fc5e000fcf0 http_status=403 ======
>
> Did you set the system flag on the sync agent user?
>
> Yehuda
>
>
> > 2014-07-21 15:01:13.673430 7fc62d95e700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.24:6800/15997 -- ping v1 -- ?+0 0x7fc6000037e0 con 0x20df800
> > 2014-07-21 15:01:13.673499 7fc62d95e700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.3.178:6800/3700 -- ping v1 -- ?+0 0x7fc60000a340 con 0x20dba80
> > 2014-07-21 15:01:13.673523 7fc62d95e700  1 -- 10.30.3.178:0/1028990 -->
> > 10.30.0.22:6800/12749 -- ping v1 -- ?+0 0x7fc60000abe0 con 0x20e4010
> >
> >
> > It appears as far as I can tell that the file never makes it to the
> > remote end, and this goes for all files I could find.
> >
> > Any ideas on what else to look at?
> >
> > Thanks!
> >
> > Justice
> >
>
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users at lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
>
>
>

