Re: Openstack keystone with Radosgw

Well, that certainly looks OK, so entries in [client.radosgw.gateway] *should* work. If they are not, that points to something else not set up right on the ceph or radosgw side.
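For reference, here is a minimal sketch of how the keystone bits might sit in that section - the URL, token, roles and paths are placeholders for your own values, and the exact option set varies a bit between ceph versions:

    [client.radosgw.gateway]
        host = gateway
        keyring = /etc/ceph/ceph.client.radosgw.keyring
        rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
        log file = /var/log/ceph/client.radosgw.gateway.log
        # keystone integration (placeholder values - substitute your own)
        rgw keystone url = http://192.0.2.21:5000
        rgw keystone admin token = ADMIN
        rgw keystone accepted roles = Member, admin
        rgw keystone token cache size = 500
        rgw keystone revocation interval = 600
        nss db path = /var/lib/ceph/nss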

What version of ceph is this?

I'd do the following:
- check all ceph hosts have the same ceph version running (see the quick check after this list)
- restart all the hosts (ahem - assuming this is not a prod setup)
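For the version check, something like the following should do - the hostnames here are placeholders for your actual mon/osd/gateway nodes:

    # run on each host (or via ssh from one node)
    for h in mon1 osd1 osd2 gateway; do ssh $h ceph --version; done

    # or ask the osd daemons directly from a node with an admin keyring
    ceph tell osd.* version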

If you have not done so before, check the gateway works with all the keystone stuff disabled (i.e. create a swift user using radosgw-admin and check you can upload a file etc. as that user). *Then* enable the keystone bits, restart the gateway and try again.
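Roughly, that native swift user test looks like this - the uid, display name and bucket are made up, and the auth URL assumes your gateway answers on http://gateway/:

    radosgw-admin user create --uid=testuser --display-name="Test User"
    radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
    radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

    # then, using the swift secret key printed by the last command:
    swift -A http://gateway/auth/v1.0 -U testuser:swift -K <swift_secret> post testbucket
    swift -A http://gateway/auth/v1.0 -U testuser:swift -K <swift_secret> upload testbucket somefile

If that works, radosgw itself is fine and any remaining breakage is on the keystone side.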

There are a lot of fiddly bits involved in the setup of radosgw - and it is really easy to have one missed or not done correctly, which trips you up later!
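Given the "Keystone token parse error: malformed json" in your log below, it is also worth checking what keystone actually hands back. A rough check (username, password and tenant are placeholders for a valid keystone account):

    curl -s -X POST http://192.0.2.21:5000/v2.0/tokens \
        -H 'Content-Type: application/json' \
        -d '{"auth": {"passwordCredentials": {"username": "demo", "password": "secret"}, "tenantName": "demo"}}'

If that returns anything other than a JSON token document (an HTML error page, say), the gateway's parser will choke on it exactly as your log shows.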

Regards

Mark

On 14/10/14 05:06, lakshmi k s wrote:

ceph auth list on the gateway node has the following. I think I am using the
correct name in ceph.conf.

gateway@gateway:~$ ceph auth list
installed auth entries:
client.admin
         key: AQBL3SxUiMplMxAAjrL6oT+0Q5JtdrD90toXqg==
         caps: [mds] allow
         caps: [mon] allow *
         caps: [osd] allow *
client.radosgw.gateway
         key: AQCI5C1UUH7iOhAAWazAeqVLetIDh+CptBtRrQ==
         caps: [mon] allow rwx
         caps: [osd] allow rwx




On Sunday, October 12, 2014 8:02 PM, Mark Kirkwood
<mark.kirkwood@xxxxxxxxxxxxxxx> wrote:


Ah, yes. So your gateway is called something other than:

[client.radosgw.gateway]

So take a look at what

$ ceph auth list

says (run from your rgw); it should pick up the correct name. Then
correct your ceph.conf, restart and see what the rgw log looks like as
you edge ever closer to having it work :-)
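That is, if ceph auth list shows client.radosgw.gateway, then the
ceph.conf section header must read exactly [client.radosgw.gateway] and
the keyring it points at must hold that same key. Then bounce the
gateway - the service name below assumes a Debian/Ubuntu style install,
so adjust for your distro:

    sudo service radosgw restart
    # or on some installs:
    sudo /etc/init.d/radosgw restart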

regards

Mark

On 13/10/14 12:27, lakshmi k s wrote:
 > Yes Mark, I did restart all the services - radosgw, ceph, apache2. And
 > yes, it never attempted to use keystone right from the beginning.
 > Interestingly, when I moved the rgw keystone url =
 > http://192.0.2.21:5000 under the global section in
 > ceph.conf file, I see 500 internal error on both the nodes and following
 > logs were captured. This looks similar to yours at least during initial
 > handshake.
 >
 > 2014-10-12 16:08:21.015597 7fca80fa9700  1 ====== starting new request
 > req=0x7fcac002ae10 =====
 > 2014-10-12 16:08:21.015621 7fca80fa9700  2 req 3:0.000026::GET
 > /swift/v1::initializing
 > 2014-10-12 16:08:21.015665 7fca80fa9700 10 ver=v1 first= req=
 > 2014-10-12 16:08:21.015669 7fca80fa9700 10 s->object=<NULL>
s->bucket=<NULL>
 > 2014-10-12 16:08:21.015676 7fca80fa9700  2 req 3:0.000081:swift:GET
 > /swift/v1::getting op
 > 2014-10-12 16:08:21.015682 7fca80fa9700  2 req 3:0.000087:swift:GET
 > /swift/v1:list_buckets:authorizing
 > 2014-10-12 16:08:21.015688 7fca80fa9700 20
 > token_id=7bfb869419044bec8c258e75830d55a2
 > 2014-10-12 16:08:21.015742 7fca80fa9700 20 sending request to
 > http://192.0.2.21:5000/v2.0/tokens
 > 2014-10-12 16:08:33.001640 7fca9d7e2700  0 Keystone token parse error:
 > malformed json
 > 2014-10-12 16:08:33.002756 7fca9d7e2700 10 failed to authorize request
 > 2014-10-12 16:08:33.003598 7fca9d7e2700  2 req 1:75.081031:swift:GET
 > /swift/v1:list_buckets:http status=401
 > 2014-10-12 16:08:33.003863 7fca9d7e2700  1 ====== req done
 > req=0x7fcac0010670 http_status=401 ======
 > 2014-10-12 16:08:33.004414 7fca9d7e2700 20 process_request() returned -1
 >






