Integration of Ceph Object Gateway (radosgw) with OpenStack Juno Keystone

Hi,

I am trying to integrate OpenStack Juno Keystone with the Ceph Object Gateway (radosgw).

I want to use Keystone as the authority for users: a user that Keystone authorizes to access
the gateway should automatically be created on the radosgw, and tokens that Keystone validates
should be accepted as valid by the RADOS Gateway.


I have deployed a 4 node ceph cluster running on Ubuntu 14.04

Host1: ppm-c240-admin.xyz.com (10.x.x.123)

Host2: ppm-c240-ceph1.xyz.com (10.x.x.124)

Host3: ppm-c240-ceph2.xyz.com (10.x.x.125)

Host4: ppm-c240-ceph3.xyz.com (10.x.x.126)


My ceph -w output is as follows:

ceph@ppm-c240-admin:~/my-cluster$ ceph -w

cluster df18a088-2a70-43f9-b07f-ce8cf7c3349c
health HEALTH_OK
monmap e1: 3 mons at {ppm-c240-admin=10.x.x.123:6789/0,ppm-c240-ceph1=10.x.x.124:6789/0,ppm-c240-ceph2=10.x.x.125:6789/0},
            election epoch 20, quorum 0,1,2 ppm-c240-admin,ppm-c240-ceph1,ppm-c240-ceph2
osdmap e92: 12 osds: 12 up, 12 in
pgmap v461: 704 pgs, 4 pools, 0 bytes data, 0 objects
442 MB used, 44622 GB / 44623 GB avail
704 active+clean
2014-11-04 12:24:37.126783 mon.0 [INF] pgmap v461: 704 pgs: 704 active+clean; 0 bytes data, 442 MB used, 44622 GB / 44623 GB avail

The host ppm-c240-ceph3.xyz.com (10.x.x.126) is running the Ceph Object Gateway (radosgw).

I can see that radosgw is up and running:

ppmuser@ppm-c240-ceph3:~$ ll /var/run/ceph
total 0
drwxr-xr-x  3 root root 140 Dec  4 11:33 ./
drwxr-xr-x 21 root root 800 Dec  5 06:51 ../
srwxr-xr-x  1 root root   0 Nov  3 11:52 ceph-osd.10.asok=
srwxr-xr-x  1 root root   0 Nov  3 11:54 ceph-osd.11.asok=
srwxr-xr-x  1 root root   0 Nov  3 11:49 ceph-osd.9.asok=
srwxrwxrwx  1 root root   0 Nov  7 09:50 ceph.radosgw.gateway.fastcgi.sock=
drwxr-xr-x  2 root root  40 Apr 28  2014 radosgw-agent/
ppmuser@ppm-c240-ceph3:~$
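
As an additional sanity check (my own, not from the docs), an anonymous request against the gateway should return an HTTP 200 with an empty ListAllMyBucketsResult XML body:

ppmuser@ppm-c240-ceph3:~$ curl -i http://ppm-c240-ceph3.xyz.com/

That at least confirms that Apache/FastCGI and radosgw are wired up correctly before Keystone enters the picture.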


root@ppm-c240-ceph3:~# ceph df detail
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED     OBJECTS
    44623G     44622G         496M             0          43
POOLS:
    NAME             ID     CATEGORY     USED     %USED     MAX AVAIL     OBJECTS     DIRTY     READ     WRITE
    data             0      -               0         0        14873G           0         0        0         0
    metadata         1      -               0         0        14873G           0         0        0         0
    rbd              2      -               0         0        14873G           0         0        0         0
    pg_pool          3      -               0         0        14873G           0         0        0         0
    .rgw.root        4      -             840         0        14873G           3         3     2356         3
    .rgw.control     5      -               0         0        14873G           8         8        0         0
    .rgw             6      -               0         0        14873G           0         0        0         0
    .rgw.gc          7      -               0         0        14873G          32        32     121k     83200
    .users.uid       8      -               0         0        14873G           0         0      862       220
    .users           9      -               0         0        14873G           0         0        0        84
    .users.email     10     -               0         0        14873G           0         0        0        42
    .users.swift     11     -               0         0        14873G           0         0        0         2
root@ppm-c240-ceph3:~#


I am following the steps at http://ceph.com/docs/master/radosgw/keystone/ for the radosgw integration with Keystone.

The relevant /etc/ceph/ceph.conf snippet on all 4 Ceph nodes is as follows:

rgw keystone url = http://10.x.x.175:35357
rgw keystone admin token = xyz123
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 10000
rgw keystone revocation interval = 900
rgw s3 auth use keystone = true
nss db path = /var/lib/nssdb

Here, 10.x.x.175 is the IP address of the Keystone server (a single-node Juno install on Ubuntu 14.04), and the revocation interval is 900 seconds, i.e. 15 minutes.
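
Since nss db path is set above, I have also converted Keystone's PKI signing certificates into NSS db format, as described on the same docs page (the certificate paths assume Keystone's default PKI setup; certutil comes from the libnss3-tools package on Ubuntu):

mkdir /var/lib/nssdb
openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \
    certutil -d /var/lib/nssdb -A -n ca -t "TCu,Cu,Tuw"
openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \
    certutil -d /var/lib/nssdb -A -n signing_cert -t "P,P,P"

These commands were run on the radosgw host, so that the gateway can verify token revocation lists signed by Keystone.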

Keystone itself is pointing to the Ceph Object Gateway as an object-storage endpoint:

ppmuser@ppm-dc-c3sv3-ju:~$ keystone service-create --name swift --type object-store --description "Object Storage"

ppmuser@ppm-dc-c3sv3-ju:~$ keystone endpoint-create --service-id a70fbbc539434fa5bf8c0977e36161a4  --publicurl http://ppm-c240-ceph3.xyz.com/swift/v1 --internalurl http://ppm-c240-ceph3.xyz.com/swift/v1 --adminurl http://ppm-c240-ceph3.xyz.com/swift/v1
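
To double-check the registration (assuming admin credentials are loaded in the shell environment), the service and endpoint can be listed back:

ppmuser@ppm-dc-c3sv3-ju:~$ keystone service-list
ppmuser@ppm-dc-c3sv3-ju:~$ keystone endpoint-list

The object-store entry shows the three http://ppm-c240-ceph3.xyz.com/swift/v1 URLs as expected.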


I have the following queries:

1) I am following the steps in http://docs.openstack.org/juno/install-guide/install/apt/content/swift-install-controller-node.html. Do I need to create a swift user in Keystone on my OpenStack node?
2) Once the swift user is created, do I have to run $ keystone user-role-add --user swift --tenant service --role admin?

3) I did not find any documents on how to proceed further and test the integrated setup, so any pointers would be most welcome (my rough test plan is sketched below).
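
For reference, the rough test I had in mind is as follows. This is my own sketch, not taken from any document; it assumes a demo tenant and demo user (with the Member role) already exist in Keystone, and that secret is that user's password:

ppmuser@ppm-dc-c3sv3-ju:~$ swift --os-auth-url http://10.x.x.175:5000/v2.0 \
    --os-tenant-name demo --os-username demo --os-password secret \
    post test-container
ppmuser@ppm-dc-c3sv3-ju:~$ swift --os-auth-url http://10.x.x.175:5000/v2.0 \
    --os-tenant-name demo --os-username demo --os-password secret \
    list

If the integration works, test-container should be created through the radosgw endpoint, and the Keystone user should afterwards be visible on the gateway side:

root@ppm-c240-ceph3:~# radosgw-admin metadata list user

Does that sound like a reasonable way to verify the setup?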

PS: I am cross-posting this to the openstack and ceph lists because it involves an integration of both.

Regards,
---
Vivek Varghese Cherian
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
