Re: Openstack keystone with Radosgw

Thanks Mark for looking into this further. As I mentioned earlier, I have the following nodes in my ceph cluster -

1 admin node
3 OSD nodes (one of them is also a monitor)
1 gateway node

Technically, this should have worked, but I am not sure where I am going wrong. I will continue to look into this and keep you all posted.

Thanks,
Lakshmi.


On Wednesday, October 15, 2014 2:00 AM, Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:


Because this is an interesting problem, I added an additional host to my
4 node ceph setup that is purely a radosgw host. So I have
- ceph1 (mon + osd)
- ceph2-4 (osd)
- ceph5 (radosgw)

My ceph.conf on ceph5 is included below. Obviously I changed my keystone
endpoints to use this host (ceph5). After that I am unable to reproduce
your problem - for a moment I thought I had, but it turned out I had
simply forgotten to include the keystone config at all! So it is now
working fine. My guess is that something subtle is broken in your
config that we have yet to see...
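For reference, the endpoint change can be sketched with the keystone v2 CLI of that era. This is a hedged sketch only - the service ID, region, and URLs are placeholders for your own setup:

```shell
# Hedged sketch: repoint the object-store endpoint at the radosgw host.
# <old-endpoint-id> and <swift-service-id> are placeholders; look them up
# with "keystone endpoint-list" and "keystone service-list".
keystone endpoint-delete <old-endpoint-id>
keystone endpoint-create \
  --region RegionOne \
  --service-id <swift-service-id> \
  --publicurl   http://ceph5/swift/v1 \
  --internalurl http://ceph5/swift/v1 \
  --adminurl    http://ceph5/swift/v1
```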

(ceph5) $ cat /etc/ceph/ceph.conf

[global]
fsid = 2ea9a745-d84c-4fc5-95b4-2f6afa98ece1
mon_initial_members = ceph1
mon_host = 192.168.122.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
osd_pg_bits = 7
osd_pgp_bits = 7
osd_journal_size = 2048

[client.radosgw.gateway]
host = ceph5
keyring = /etc/ceph/ceph.rados.gateway.keyring
rgw_socket_path = /var/run/ceph/$name.sock
log_file = /var/log/ceph/radosgw.log
rgw_data = /var/lib/ceph/radosgw/$cluster-$id
rgw_dns_name = ceph5
rgw print continue = false
debug rgw = 20
rgw keystone url = http://stack1:35357
rgw keystone admin token = tokentoken
rgw keystone accepted roles = admin Member _member_
rgw keystone token cache size = 500
rgw keystone revocation interval = 500
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss/
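With the config above in place, the handshake can be exercised end to end from the swift client side. A hedged sketch - the tenant, user, and key are placeholders for your own credentials:

```shell
# Hedged sketch: authenticate against keystone and stat the account
# through radosgw. Credentials below are placeholders.
swift -V 2 -A http://stack1:5000/v2.0 \
      -U demo:demo -K secret \
      stat -v
# With "debug rgw = 20", radosgw.log should then show the token
# validation request going to http://stack1:35357.
```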

On 15/10/14 10:25, Mark Kirkwood wrote:
> Right,
>
> So you have 3 osds, one of which is a mon. Your rgw is on another host
> (called gateway, it seems). I'm wondering if this is the issue. In my
> case I'm using one of my osds as a rgw as well. This *should* not
> matter... but it might be worth trying out a rgw on one of your osds
> instead. I'm thinking that your gateway host is set up in some way that
> is confusing the [client.radosgw.gateway] entry in ceph.conf (e.g.
> hostname resolution).
>
> Regards
>
> Mark
>
> On 15/10/14 05:40, lakshmi k s wrote:
>> Hello Mark - with rgw_keystone_url under radosgw section, I do NOT see
>> keystone handshake. If I move it under global section, I see initial
>> keystone handshake as explained earlier. Below is the output of osd dump
>> and osd tree. I have 3 nodes (node1, node2, node3) acting as OSDs. One
>> of them (node1) is also a monitor node. I also have an admin node and
>> gateway node in the ceph cluster. The keystone server (swift client) is
>> of course altogether a different OpenStack setup. Let me know if you
>> need any more information.
>>
>
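Mark's hostname-resolution hunch can be checked directly on the gateway host. A minimal sketch, assuming a Linux host with getent available:

```shell
# Hedged sketch: ceph matches "host = ..." in [client.radosgw.gateway]
# against the machine's short hostname, so the two must agree and the
# name must resolve.
short=$(hostname -s)
echo "short hostname: ${short}"
getent hosts "${short}" \
  || echo "short hostname does not resolve -- a likely culprit"
```

If the short hostname printed here does not match the `host =` value in ceph.conf, the radosgw daemon will silently skip that section.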



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
