Re: Openstack keystone with Radosgw

Hello Mark - with rgw_keystone_url under the radosgw section, I do NOT see the keystone handshake. If I move it under the global section, I see the initial keystone handshake as explained earlier. Below is the output of osd dump and osd tree. I have 3 nodes (node1, node2, node3) acting as OSDs; one of them (node1) is also a monitor node. I also have an admin node and a gateway node in the ceph cluster. The Keystone server (swift client) is of course an altogether different Openstack setup. Let me know if you need any more information.
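
For reference, a minimal sketch of how the keystone options are usually
grouped under the gateway's own section in ceph.conf (the host, keyring path,
socket path and keystone values below are placeholders, not settings taken
from this cluster):

[client.radosgw.gateway]
host = gateway
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log
# keystone integration - all of these belong in this section, not just the url
rgw keystone url = http://192.0.2.21:35357
rgw keystone admin token = {admin token from keystone.conf}
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
rgw keystone revocation interval = 600
nss db path = /var/lib/ceph/nss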

ceph-admin@ceph-admin:~/ceph-cluster$ ceph osd dump
epoch 34
fsid 199b0c6f-91c1-4ada-907c-4105c6118b40
created 2014-10-13 18:10:28.987081
modified 2014-10-13 18:55:33.028829
flags
pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 15 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 4 '.rgw.control' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 17 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 5 '.rgw' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 19 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 6 '.rgw.gc' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 20 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 7 '.users.uid' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 21 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 8 '.rgw.buckets' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 23 flags hashpspool stripe_width 0
pool 9 '.rgw.buckets.index' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 25 flags hashpspool stripe_width 0
pool 10 '.users.swift' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 29 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 11 '.users.email' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 31 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 12 '.users' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 33 owner 18446744073709551615 flags hashpspool stripe_width 0
max_osd 3
osd.0 up   in  weight 1 up_from 4 up_thru 33 down_at 0 last_clean_interval [0,0) 192.0.2.211:6800/4163 192.0.2.211:6801/4163 192.0.2.211:6802/4163 192.0.2.211:6803/4163 exists,up 74bbdb5d-8f03-4ed5-8d33-33b710a597d1
osd.1 up   in  weight 1 up_from 7 up_thru 33 down_at 0 last_clean_interval [0,0) 192.0.2.212:6800/3070 192.0.2.212:6801/3070 192.0.2.212:6802/3070 192.0.2.212:6803/3070 exists,up 6ec0bea2-bba2-4d6a-b1a3-c5d7caf1c801
osd.2 up   in  weight 1 up_from 10 up_thru 33 down_at 0 last_clean_interval [0,0) 192.0.2.213:6800/3070 192.0.2.213:6801/3070 192.0.2.213:6802/3070 192.0.2.213:6803/3070 exists,up bb464cc6-328f-4fb9-86a7-2256c50b97a1

ceph-admin@ceph-admin:~/ceph-cluster$ ceph osd tree
# id    weight  type name       up/down reweight
-1      0.05997 root default
-2      0.01999         host node1
0       0.01999                 osd.0   up      1
-3      0.01999         host node2
1       0.01999                 osd.1   up      1
-4      0.01999         host node3
2       0.01999                 osd.2   up      1






On Monday, October 13, 2014 9:52 PM, Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:


Was that with you moving just rgw_keystone_url into [global]? If so then
yeah, that won't work as it will be missing your auth token etc (so it will
always fail to authorize). You need to chase up why it is not seeing
some/all settings in the [client.radosgw.gateway] section.
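
One quick way to chase that down (a sketch, assuming ceph-conf is available
on the gateway host and the daemon really does run as client.radosgw.gateway)
is to ask ceph-conf what resolves for that name:

$ ceph-conf -c /etc/ceph/ceph.conf --name client.radosgw.gateway --lookup rgw_keystone_url
$ ceph-conf -c /etc/ceph/ceph.conf --name client.radosgw.gateway --lookup rgw_keystone_admin_token

If those print nothing while the same keys resolve when placed under
[global], the section header in ceph.conf very likely does not match the
name the gateway is started with.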

I have a suspicion that you have an unusual ceph topology - so it might
be beneficial to show us:

$ ceph mon dump
$ ceph osd tree

and also mention which additional hosts are admins and which host is
your radosgw.

Cheers

Mark

On 14/10/14 15:32, lakshmi k s wrote:
> I did restart the ceph cluster, only to find that ceph health was NOT OK.
> I did the purge operation and re-installed ceph packages on all nodes.
> This time, the ceph admin node has 0.80.6 and all other cluster nodes,
> including the Openstack client node, have version 0.80.5. Same error logs
> as before -
> 2014-10-13 19:21:40.726717 7f88907c8700  1 ====== starting new request req=0x7f88c003a0e0 =====
> 2014-10-13 19:21:40.726731 7f88907c8700  2 req 2:0.000014::HEAD /swift/v1::initializing
> 2014-10-13 19:21:40.726755 7f88907c8700 10 ver=v1 first= req=
> 2014-10-13 19:21:40.726757 7f88907c8700 10 s->object=<NULL> s->bucket=<NULL>
> 2014-10-13 19:21:40.726761 7f88907c8700  2 req 2:0.000045:swift:HEAD /swift/v1::getting op
> 2014-10-13 19:21:40.726764 7f88907c8700  2 req 2:0.000048:swift:HEAD /swift/v1:stat_account:authorizing
> 2014-10-13 19:21:40.726768 7f88907c8700 20 token_id=02891ee2909b4f24b999038d93cbc982
> 2014-10-13 19:21:40.726803 7f88907c8700 20 sending request to http://192.0.2.21:35357/v2.0/tokens
> 2014-10-13 19:21:55.340373 7f88bbfff700  2 RGWDataChangesLog::ChangesRenewThread: start
> 2014-10-13 19:22:17.340566 7f88bbfff700  2 RGWDataChangesLog::ChangesRenewThread: start
> 2014-10-13 19:22:24.786164 7f88937ce700  0 Keystone token parse error: malformed json
> 2014-10-13 19:22:24.787409 7f88937ce700 10 failed to authorize request
> 2014-10-13 19:22:24.788450 7f88937ce700  2 req 1:75.099222:swift:HEAD /swift/v1:stat_account:http status=401
> 2014-10-13 19:22:24.789128 7f88937ce700  1 ====== req done req=0x7f88c00068e0 http_status=401 ======
> 2014-10-13 19:22:24.789551 7f88937ce700 20 process_request() returned -1
>
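One way to narrow down a "malformed json" parse error like the one above is
to hit the same keystone endpoint directly from the gateway host and look at
the raw response (the tenant, user and password here are placeholders - a
sketch only):

$ curl -i -X POST http://192.0.2.21:35357/v2.0/tokens \
  -H 'Content-Type: application/json' \
  -d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "swift", "password": "secret"}}}'

If the body that comes back is an HTML error page, or the request stalls
(note the ~75 second request time in the log above), radosgw can end up
reporting a JSON parse failure instead of a clean authentication error.
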
> gateway@gateway:~$ ceph auth list
> installed auth entries:
> osd.0
>          key: AQA2eDxU2Hi2BxAADn1H6LVbRuoL1GadYBQo3Q==
>          caps: [mon] allow profile osd
>          caps: [osd] allow *
> osd.1
>          key: AQBCeDxUCNw7HBAAmS80TPDupKEpbRMRTmmgdA==
>          caps: [mon] allow profile osd
>          caps: [osd] allow *
> osd.2
>          key: AQBMeDxUMBndOBAAnN0Ty2h3MDROlcKMYRYaWQ==
>          caps: [mon] allow profile osd
>          caps: [osd] allow *
> client.admin
>          key: AQAFeDxUmJnTMRAADEIyXPDkOz8lHsOq9blAdA==
>          caps: [mds] allow
>          caps: [mon] allow *
>          caps: [osd] allow *
> client.bootstrap-mds
>          key: AQAGeDxUqARlERAAVNwTwY9tOOa0q0asJWy/AA==
>          caps: [mon] allow profile bootstrap-mds
> client.bootstrap-osd
>          key: AQAGeDxUGCFEBRAAUbV+vyvU5AqN1CHI7wfoDA==
>          caps: [mon] allow profile bootstrap-osd
> client.radosgw.gateway
>          key: AQCTejxUIHFbHRAApwnvxy4bCIOZ7esn95d5tA==
>          caps: [mon] allow rwx
>        caps: [osd] allow rwx
>
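
For comparison, the rwx caps shown above for client.radosgw.gateway are what
the usual manual gateway setup produces; a sketch of those steps, with an
assumed keyring path:

$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
$ sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key
$ sudo ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
$ sudo ceph auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring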
>
>
> Appreciate your time.
> Thanks,
> Lakshmi.
>
>
> On Monday, October 13, 2014 4:43 PM, Mark Kirkwood
> <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
>
>
> That's the same version that I'm using.
>
> Did you check the other points I mentioned:
> - check *all* ceph hosts are running the same version (see the sketch below)
> - restart 'em all to be sure
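
A quick way to do that version check (a sketch; the osd ids are the ones
from the osd tree quoted earlier in this thread):

$ ceph --version          # run on every host, including the gateway
$ ceph tell osd.0 version
$ ceph tell osd.1 version
$ ceph tell osd.2 version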
>
> I did think that your 'auth list' output looked strange, but I guessed
> that you had cut out the osd and mon info before placing it in the
> message...might be useful to see all of that too. Obviously something is
> not quite right.
>
> On 14/10/14 12:05, lakshmi k s wrote:
>  > I have Ceph version 0.85. I can still talk to this gateway node like
>  > below using swift v1.0. Note that this user was created using
>  > radosgw-admin.
>  >
>  > swift -V 1.0 -A http://gateway.ex.com/auth/v1.0 -U s3User:swiftUser -K
>  > CRV8PeotaW204nE9IyutoVTcnr+2Uw8M8DQuRP7i list
>  > my-Test
>  >
>  > I am at total loss now.
>  >
>  >
>  > On Monday, October 13, 2014 3:25 PM, Mark Kirkwood
>  > <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
>  >
>  >
>  > Well that certainly looks ok. So entries in [client.radosgw.gateway]
>  > *should* work. If they are not working, that points to something else
>  > not set up right on the ceph or radosgw side.
>  >
>  > What version of ceph is this?
>  >
>  > I'd do the following:
>  > - check all ceph hosts have the same ceph version running
>  > - restart all the hosts (ahem - assuming this is not a prod setup)
>  >
>  > If you have not done so before, check the gateway works with all the
>  > keystone stuff disabled (i.e. create a swift user using radosgw-admin and
>  > check you can upload a file etc. as that user). *Then* enable the
>  > keystone bits...restart the gateway and try again.
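
A minimal sketch of that keystone-free check (the uid, container and file
names are made up; the auth URL is the one quoted earlier in this thread):

$ sudo radosgw-admin user create --uid=testuser --display-name="Test User"
$ sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
$ sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
$ swift -V 1.0 -A http://gateway.ex.com/auth/v1.0 -U testuser:swift -K {secret from the key create output} upload my-Test testfile.txt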
>  >
>  > There are a lot of fiddly bits involved in the setup of radosgw - and it
>  > is really easy to have one missed or not done correctly, which trips
>  > you up later!
>  >
>  > Regards
>  >
>  > Mark
>  >
>  > On 14/10/14 05:06, lakshmi k s wrote:
>  >  >
>  >  > ceph auth list on gateway node has the following. I think I am using
>  >  > the correct name in ceph.conf.
>  >  >
>  >  > gateway@gateway:~$ ceph auth list
>  >  > installed auth entries:
>  >  > client.admin
>  >  >          key: AQBL3SxUiMplMxAAjrL6oT+0Q5JtdrD90toXqg==
>  >  >          caps: [mds] allow
>  >  >          caps: [mon] allow *
>  >  >          caps: [osd] allow *
>  >  > client.radosgw.gateway
>  >  >          key: AQCI5C1UUH7iOhAAWazAeqVLetIDh+CptBtRrQ==
>  >  >          caps: [mon] allow rwx
>  >  >          caps: [osd] allow rwx
>  >  >
>  >  >
>  >  >
>  >  >
>  >  > On Sunday, October 12, 2014 8:02 PM, Mark Kirkwood
>  >  > <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
>
>  >  >
>  >  >
>  >  > Ah, yes. So your gateway is called something other than:
>  >  >
>  >  > [client.radosgw.gateway]
>  >  >
>  >  > So take a look at what
>  >  >
>  >  > $ ceph auth list
>  >  >
>  >  > says (run from your rgw); it should pick up the correct name. Then
>  >  > correct your ceph.conf, restart, and see what the rgw log looks like as
>  >  > you edge ever closer to having it work :-)
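
If the name does need correcting, a rough sketch of the restart-and-watch
step (the init script name and log path are assumptions and depend on the
distro and on ceph.conf):

$ ceph auth list | grep radosgw            # confirm the exact client.* name
$ sudo /etc/init.d/radosgw restart         # or: sudo service radosgw restart
$ sudo service apache2 restart
$ tail -f /var/log/ceph/client.radosgw.gateway.log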
>  >  >
>  >  > regards
>  >  >
>  >  > Mark
>  >  >
>  >  > On 13/10/14 12:27, lakshmi k s wrote:
>  >  >  > Yes Mark, I did restart all the services - radosgw, ceph, apache2.
>  >  >  > And yes, it never attempted to use keystone right from the beginning.
>  >  >  > Interestingly, when I moved the rgw keystone url =
>  >  >  > http://192.0.2.21:5000 under the global section in the ceph.conf
>  >  >  > file, I see 500 internal error on both the nodes and the following
>  >  >  > logs were captured. This looks similar to yours at least during the
>  >  >  > initial handshake.
>  >  >  >
>  >  >  > 2014-10-12 16:08:21.015597 7fca80fa9700  1 ====== starting new request req=0x7fcac002ae10 =====
>  >  >  > 2014-10-12 16:08:21.015621 7fca80fa9700  2 req 3:0.000026::GET /swift/v1::initializing
>  >  >  > 2014-10-12 16:08:21.015665 7fca80fa9700 10 ver=v1 first= req=
>  >  >  > 2014-10-12 16:08:21.015669 7fca80fa9700 10 s->object=<NULL> s->bucket=<NULL>
>  >  >  > 2014-10-12 16:08:21.015676 7fca80fa9700  2 req 3:0.000081:swift:GET /swift/v1::getting op
>  >  >  > 2014-10-12 16:08:21.015682 7fca80fa9700  2 req 3:0.000087:swift:GET /swift/v1:list_buckets:authorizing
>  >  >  > 2014-10-12 16:08:21.015688 7fca80fa9700 20 token_id=7bfb869419044bec8c258e75830d55a2
>  >  >  > 2014-10-12 16:08:21.015742 7fca80fa9700 20 sending request to http://192.0.2.21:5000/v2.0/tokens
>  >  >  > 2014-10-12 16:08:33.001640 7fca9d7e2700  0 Keystone token parse error: malformed json
>  >  >  > 2014-10-12 16:08:33.002756 7fca9d7e2700 10 failed to authorize request
>  >  >  > 2014-10-12 16:08:33.003598 7fca9d7e2700  2 req 1:75.081031:swift:GET /swift/v1:list_buckets:http status=401
>  >  >  > 2014-10-12 16:08:33.003863 7fca9d7e2700  1 ====== req done req=0x7fcac0010670 http_status=401 ======
>  >  >  > 2014-10-12 16:08:33.004414 7fca9d7e2700 20 process_request() returned -1
>  >  >  >
>  >  >
>  >  >
>  >
>  >
>  >
>
>
>



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
