Was that with you moving just rgw_keystone_url into [global]? If so then
yeah, that won't work, as it will be missing your auth token etc (so it
will always fail to authorize). You need to chase up why it is not seeing
some/all settings in the [client.radosgw.gateway] section.
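For reference, the keystone-related settings normally all live together in
the gateway's own section of ceph.conf, something like the below (the values
here are placeholders for illustration, not your actual config):

```ini
[client.radosgw.gateway]
host = gateway
keyring = /etc/ceph/ceph.client.radosgw.gateway.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log
rgw keystone url = http://192.0.2.21:35357
rgw keystone admin token = <keystone admin token>
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
rgw keystone revocation interval = 600
```

If only rgw keystone url ends up in [global], the admin token and accepted
roles never reach the gateway, which matches authorization always failing.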
I have a suspicion that you have an unusual ceph topology - so it might
be beneficial to show us:
$ ceph mon dump
$ ceph osd tree
and also mention which additional hosts are admins and which host is
your radosgw.
Cheers
Mark
On 14/10/14 15:32, lakshmi k s wrote:
I did restart the ceph cluster, only to see that ceph health was NOT OK.
I did the purge operation and re-installed ceph packages on all nodes.
This time, the ceph admin node has 0.80.6 and all other cluster nodes,
including the OpenStack client node, have 0.80.5. Same error logs
as before -
2014-10-13 19:21:40.726717 7f88907c8700 1 ====== starting new request
req=0x7f88c003a0e0 =====
2014-10-13 19:21:40.726731 7f88907c8700 2 req 2:0.000014::HEAD
/swift/v1::initializing
2014-10-13 19:21:40.726755 7f88907c8700 10 ver=v1 first= req=
2014-10-13 19:21:40.726757 7f88907c8700 10 s->object=<NULL> s->bucket=<NULL>
2014-10-13 19:21:40.726761 7f88907c8700 2 req 2:0.000045:swift:HEAD
/swift/v1::getting op
2014-10-13 19:21:40.726764 7f88907c8700 2 req 2:0.000048:swift:HEAD
/swift/v1:stat_account:authorizing
2014-10-13 19:21:40.726768 7f88907c8700 20
token_id=02891ee2909b4f24b999038d93cbc982
2014-10-13 19:21:40.726803 7f88907c8700 20 sending request to
http://192.0.2.21:35357/v2.0/tokens
2014-10-13 19:21:55.340373 7f88bbfff700 2
RGWDataChangesLog::ChangesRenewThread: start
2014-10-13 19:22:17.340566 7f88bbfff700 2
RGWDataChangesLog::ChangesRenewThread: start
2014-10-13 19:22:24.786164 7f88937ce700 0 Keystone token parse error:
malformed json
2014-10-13 19:22:24.787409 7f88937ce700 10 failed to authorize request
2014-10-13 19:22:24.788450 7f88937ce700 2 req 1:75.099222:swift:HEAD
/swift/v1:stat_account:http status=401
2014-10-13 19:22:24.789128 7f88937ce700 1 ====== req done
req=0x7f88c00068e0 http_status=401 ======
2014-10-13 19:22:24.789551 7f88937ce700 20 process_request() returned -1
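The "Keystone token parse error: malformed json" above means radosgw got a
reply from Keystone that it could not parse as JSON - commonly an HTML error
page when the request was rejected. One way to see what the endpoint
actually returned is to capture the body (e.g. with curl against the URL in
the log) and run it through a JSON parser. A minimal sketch of that check
(the sample bodies are made up for illustration):

```python
import json

def check_keystone_reply(body):
    """Return True if the captured reply body is well-formed JSON;
    otherwise print why it failed to parse (radosgw's 'malformed json')."""
    try:
        json.loads(body)
        return True
    except ValueError as err:
        print("not JSON: %s" % err)
        return False

# A well-formed v2.0 token reply parses fine...
print(check_keystone_reply('{"access": {"token": {"id": "abc123"}}}'))
# ...while an HTML error page does not.
print(check_keystone_reply('<html><body>401 Unauthorized</body></html>'))
```

If the body turns out to be HTML, the interesting part is the HTTP status
and error text Keystone put in it, not the gateway side at all.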
gateway@gateway:~$ ceph auth list
installed auth entries:
osd.0
    key: AQA2eDxU2Hi2BxAADn1H6LVbRuoL1GadYBQo3Q==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.1
    key: AQBCeDxUCNw7HBAAmS80TPDupKEpbRMRTmmgdA==
    caps: [mon] allow profile osd
    caps: [osd] allow *
osd.2
    key: AQBMeDxUMBndOBAAnN0Ty2h3MDROlcKMYRYaWQ==
    caps: [mon] allow profile osd
    caps: [osd] allow *
client.admin
    key: AQAFeDxUmJnTMRAADEIyXPDkOz8lHsOq9blAdA==
    caps: [mds] allow
    caps: [mon] allow *
    caps: [osd] allow *
client.bootstrap-mds
    key: AQAGeDxUqARlERAAVNwTwY9tOOa0q0asJWy/AA==
    caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
    key: AQAGeDxUGCFEBRAAUbV+vyvU5AqN1CHI7wfoDA==
    caps: [mon] allow profile bootstrap-osd
client.radosgw.gateway
    key: AQCTejxUIHFbHRAApwnvxy4bCIOZ7esn95d5tA==
    caps: [mon] allow rwx
    caps: [osd] allow rwx
Appreciate your time.
Thanks,
Lakshmi.
On Monday, October 13, 2014 4:43 PM, Mark Kirkwood
<mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
That's the same version that I'm using.
Did you check the other points I mentioned:
- check *all* ceph hosts are running the same version
- restart 'em all to be sure
I did think that your 'auth list' output looked strange, but I guessed
that you had cut out the osd and mon info before placing it in the
message... might be useful to see all of that too. Obviously something is
not quite right.
On 14/10/14 12:05, lakshmi k s wrote:
> I have Ceph 0.85 version. I can still talk to this gateway node like
> below using swift v1.0. Note that this user was created using
> radosgw-admin.
>
> swift -V 1.0 -A http://gateway.ex.com/auth/v1.0 -U s3User:swiftUser -K
> CRV8PeotaW204nE9IyutoVTcnr+2Uw8M8DQuRP7i list
> my-Test
>
> I am at total loss now.
>
>
> On Monday, October 13, 2014 3:25 PM, Mark Kirkwood
> <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
>
>
> Well that certainly looks ok. So entries in [client.radosgw.gateway]
> *should* work. If they are not, then that points to something else not
> set up right on the ceph or radosgw side.
>
> What version of ceph is this?
>
> I'd do the following:
> - check all ceph hosts have the same ceph version running
> - restart all the hosts (ahem - assuming this is not a prod setup)
>
> If you have not done so before, check the gateway works with all the
> keystone stuff disabled (i.e create a swift user using radosgw-admin and
> check you can upload a file etc as that user). *Then* enable the
> keystone bits...restart the gateway and try again.
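The keystone-disabled check above would look something like the commands
below (the uid, display name and swift key are placeholders, and these need
a live cluster and gateway, so treat this as a sketch rather than something
to paste blindly):

```shell
# Create a rados user plus a swift subuser with a generated secret key
radosgw-admin user create --uid=testuser --display-name="Test User"
radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

# Then verify v1 auth against the gateway (no keystone involved)
swift -V 1.0 -A http://gateway.ex.com/auth/v1.0 -U testuser:swift \
    -K <swift-secret-from-key-create> stat
```

If that works, the rados side is fine and the problem is isolated to the
keystone integration.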
>
> There are a lot of fiddly bits involved in the setup of radosgw - and it
> is real easy to have one missed or not done correctly, which trips
> you up later!
>
> Regards
>
> Mark
>
> On 14/10/14 05:06, lakshmi k s wrote:
> >
> > ceph auth list on gateway node has the following. I think I am using
> > the correct name in ceph.conf.
> >
> > gateway@gateway:~$ ceph auth list
> > installed auth entries:
> > client.admin
> >     key: AQBL3SxUiMplMxAAjrL6oT+0Q5JtdrD90toXqg==
> >     caps: [mds] allow
> >     caps: [mon] allow *
> >     caps: [osd] allow *
> > client.radosgw.gateway
> >     key: AQCI5C1UUH7iOhAAWazAeqVLetIDh+CptBtRrQ==
> >     caps: [mon] allow rwx
> >     caps: [osd] allow rwx
> >
> >
> >
> >
> > On Sunday, October 12, 2014 8:02 PM, Mark Kirkwood
> > <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
> >
> >
> > Ah, yes. So your gateway is called something other than:
> >
> > [client.radosgw.gateway]
> >
> > So take a look at what
> >
> > $ ceph auth list
> >
> > says (run from your rgw) - it should show the correct name. Then
> > correct your ceph.conf, restart and see what the rgw log looks like as
> > you edge ever closer to having it work :-)
> >
> > regards
> >
> > Mark
> >
> > On 13/10/14 12:27, lakshmi k s wrote:
> > > Yes Mark, I did restart all the services - radosgw, ceph,
apache2. And
> > > yes, it never attempted to use keystone right from the beginning.
> > > Interestingly, when I moved the rgw keystone url =
> > > http://192.0.2.21:5000
<http://192.0.2.21:5000/><http://192.0.2.21:5000/>
> > <http://192.0.2.21:5000/><http://192.0.2.21:5000/> under global
> section in
> > > ceph.conf file, I see 500 internal error on both the nodes and
> following
> > > logs were captured. This looks similar to yours at least during
> initial
> > > handshake.
> > >
> > > 2014-10-12 16:08:21.015597 7fca80fa9700 1 ====== starting new request
> > > req=0x7fcac002ae10 =====
> > > 2014-10-12 16:08:21.015621 7fca80fa9700 2 req 3:0.000026::GET
> > > /swift/v1::initializing
> > > 2014-10-12 16:08:21.015665 7fca80fa9700 10 ver=v1 first= req=
> > > 2014-10-12 16:08:21.015669 7fca80fa9700 10 s->object=<NULL>
> > > s->bucket=<NULL>
> > > 2014-10-12 16:08:21.015676 7fca80fa9700 2 req 3:0.000081:swift:GET
> > > /swift/v1::getting op
> > > 2014-10-12 16:08:21.015682 7fca80fa9700 2 req 3:0.000087:swift:GET
> > > /swift/v1:list_buckets:authorizing
> > > 2014-10-12 16:08:21.015688 7fca80fa9700 20
> > > token_id=7bfb869419044bec8c258e75830d55a2
> > > 2014-10-12 16:08:21.015742 7fca80fa9700 20 sending request to
> > > http://192.0.2.21:5000/v2.0/tokens
> > > 2014-10-12 16:08:33.001640 7fca9d7e2700 0 Keystone token parse error:
> > > malformed json
> > > 2014-10-12 16:08:33.002756 7fca9d7e2700 10 failed to authorize request
> > > 2014-10-12 16:08:33.003598 7fca9d7e2700 2 req 1:75.081031:swift:GET
> > > /swift/v1:list_buckets:http status=401
> > > 2014-10-12 16:08:33.003863 7fca9d7e2700 1 ====== req done
> > > req=0x7fcac0010670 http_status=401 ======
> > > 2014-10-12 16:08:33.004414 7fca9d7e2700 20 process_request() returned -1
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com