Re: Radosgw auth and 0.65-1

Just an FYI: the syntax requirements for this step must have changed a bit recently. After trying every combination of options, I finally got it to work using this:

ceph auth add client.radosgw.gateway /etc/ceph/keyring.radosgw.gateway /etc/ceph/ceph.client.admin.keyring
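If anyone else hits this, you can read the entry back from the monitors afterwards to confirm the key and caps actually landed (this assumes the admin keyring is in its default location so `ceph` can authenticate):

```shell
# Read the new entity back from the monitors to confirm the key
# and its mon/osd caps were registered. Assumes the admin keyring
# is at the default /etc/ceph/ceph.client.admin.keyring.
ceph auth get client.radosgw.gateway
```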

Thanks,
Shain


Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

________________________________________
From: Gregory Farnum [greg@xxxxxxxxxxx]
Sent: Tuesday, July 02, 2013 12:16 PM
To: Shain Miley
Cc: ceph-users@xxxxxxxx
Subject: Re:  Radosgw auth and 0.65-1

You're specifying an explicit keyring when trying to add the new RGW key, but letting the system pick one when running ceph -s. It's probably finding and using a different one to do that; you should check your keyrings and their contents. :)
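For example, something like this pins both commands to the same admin keyring, so there's no ambiguity about which one the monitor client is using (paths are illustrative; adjust to wherever your keyrings actually live):

```shell
# Name the admin keyring explicitly for both commands, so the monitor
# client can't silently pick up a different keyring from its default
# search path. Note the file is ceph.client.admin.keyring, not ceph.keyring.
ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway \
    -i /etc/ceph/keyring.radosgw.gateway
ceph -k /etc/ceph/ceph.client.admin.keyring -s
```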
-Greg

On Tuesday, July 2, 2013, Shain Miley wrote:
Hello,

I am using Ubuntu 13.04 with the testing branch (0.65-1) of ceph.

I have had no problem at all setting up a three-node cluster, and I currently have several VMs booting off of RBD as a test.

Last night I decided to try to set up radosgw alongside it, using the following docs:

https://github.com/ceph/ceph/blob/master/doc/radosgw/config.rst

Everything seems to work fine until I reach the following step:

root@ceph1:/etc/ceph# ceph -k /etc/ceph/ceph.keyring auth add client.radosgw.gateway -i /etc/ceph/keyring.radosgw.gateway
2013-07-02 11:49:22.077343 7f3fbaf5c700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2013-07-02 11:49:22.077343 7f3fbaf5c700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound



root@ceph1:/etc/ceph# ceph auth list
installed auth entries:

osd.0
        key: AQBx4NFRQNTHABAAsz0v2oM8a0T1H5aDL5LfCA==
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.1
        key: AQB44NFRwPfMNBAAylY3lHVw53XkGyorMKuhAg==
        caps: [mon] allow rwx
        caps: [osd] allow *
osd.2
        key: AQCA4NFR8AK3ChAAI96tg6E4OY8rV0jHrOiw8Q==
        caps: [mon] allow rwx
        caps: [osd] allow *
client.admin
        key: AQA33tFRYBMwORAAOFvOvMJakZk1FGbXspSJwA==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQA43tFR2IIBFxAAbLri8LaSunGw6LX22CODxw==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQA43tFRgNk8ChAA3uMc0AJsBWuQgoGVYYBtcA==
        caps: [mon] allow profile bootstrap-osd




root@ceph1:/etc/ceph# ls
ceph.client.admin.keyring  ceph.conf  keyring.radosgw.gateway

root@ceph1:/etc/ceph# cat keyring.radosgw.gateway
[client.radosgw.gateway]
        key = AQCR79JRwO77IhAAUEs8J3OgbzNlk/lJPX8mHg==
        caps mon = "allow r"
        caps osd = "allow rwx"
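For reference, I created that keyring following the doc above; the steps look roughly like this (same paths as shown):

```shell
# Per the radosgw config doc: create the gateway keyring, generate a key
# for client.radosgw.gateway, and grant it read on mon and rwx on osd.
ceph-authtool --create-keyring /etc/ceph/keyring.radosgw.gateway
chmod +r /etc/ceph/keyring.radosgw.gateway
ceph-authtool /etc/ceph/keyring.radosgw.gateway -n client.radosgw.gateway --gen-key
ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow r' \
    /etc/ceph/keyring.radosgw.gateway
```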



I have no problem communicating with the cluster as a whole:

root@ceph1:/etc/ceph# ceph -s
  cluster 299615fb-765a-4225-ab83-97719549eb4d
   health HEALTH_OK
   monmap e1: 3 mons at {ceph1=172.31.2.101:6789/0,ceph2=172.31.2.102:6789/0,ceph3=172.31.2.103:6789/0}, election epoch 8, quorum 0,1,2 ceph1,ceph2,ceph3
   osdmap e25: 3 osds: 3 up, 3 in
    pgmap v880: 390 pgs: 390 active+clean; 9336 MB data, 18632 MB used, 362 GB / 380 GB avail
   mdsmap e1: 0/0/1 up


Is there any issue with me trying to test radosgw on a node that is already being used as a mon and an osd?

I may end up dropping back to the stable release; however, I wanted to send a quick email first in case there is an easy fix, or in case this is a bug of some sort.

Thanks in advance,

Shain


Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Software Engineer #42 @ http://inktank.com | http://ceph.com



