Re: radosgw and Keystone

Hi Stuart,

I'm also fiddling with radosgw and Keystone. I'm not an expert in any
way, but I'll try to help as best I can.

From what I gathered from your logs, Apache (and thus the radosgw) is
accessible via the URL "http://bnedevcm.vrt.int/swift/v1", and your
Keystone server at "http://bnedevcm.vrt.int:35357".

From my tests I have been able to successfully authenticate to the
object store in two different ways:

a)
# swift -V 1.0 -A http://bnedevcm.vrt.int/auth -U user:subuser -K theKey stat

This uses radosgw users created with radosgw-admin.
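
Under the hood that v1.0 auth is just two headers sent to the /auth
endpoint, so you can also exercise it without the swift client (a
sketch, reusing the placeholder credentials from above):

# curl -i http://bnedevcm.vrt.int/auth -H "X-Auth-User: user:subuser" -H "X-Auth-Key: theKey"

On success the gateway replies with X-Storage-Url and X-Auth-Token
headers, which the swift client uses for its subsequent requests.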

b)
# swift -V 2.0 -A http://bnedevcm.vrt.int:35357/v2.0 -U tenant:user -K theKey stat

This uses Keystone users; the object-store endpoint returned by
Keystone would be http://bnedevcm.vrt.int/swift/v1, which looks correct
in your logs.
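
For reference, the object-store entry in that catalog gets registered
along these lines (a sketch using the old keystone CLI; the service-id
placeholder comes from the service-create output):

# keystone service-create --name=swift --type=object-store --description="RADOS Gateway"
# keystone endpoint-create --region bne-dev --service-id=<id from above> \
      --publicurl http://bnedevcm.vrt.int/swift/v1 \
      --internalurl http://bnedevcm.vrt.int/swift/v1 \
      --adminurl http://bnedevcm.vrt.int/swift/v1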

Also, when I created an access key using radosgw-admin, I executed the
following command:

# radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret

The "--gen-secret" part is the important bit to actually get a key.

HTH,
Cheers,
Davide


-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx
[mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Stuart Longland
Sent: Monday, 27 May 2013 03:56
To: ceph-users@xxxxxxxxxxxxxx
Subject: radosgw and Keystone

Hi all,

I'm having a bit of fun and games getting the Rados Gateway going.  I
*think* I've got a working Ceph storage cluster.  I've tried following
the documentation, but it's unclear how one actually tests that the
gateway works.
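
The closest thing I have to a smoke test is poking the gateway
anonymously; as I understand it (a guess on my part, not from the
docs), a working radosgw should answer with an S3-style XML document
rather than Apache's own error page:

# curl -i http://bnedevcm.vrt.int/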

My structure is as follows:
	- Two storage nodes, bnedevsn0 and bnedevsn1, with 2 OSDs each
	  (3TB spinners), run ceph-osd instances.
	- Two management nodes, bnedevmn0 and bnedevmn1, run ceph-mon,
	  ceph-mds, Keystone and other OpenStack management services.
	  The two management nodes have a DRBD block device for the
	  MySQL database, and listen on a virtual IP with hostname
	  'bnedevcm' ("development cluster management").

The platform being used is Ubuntu 12.04 LTS AMD64 with Ceph release 0.61.

In order to establish a quorum, I've told it there's a bnedevmn2 which is
down.  It establishes a 2:3 quorum, and is happy.  I'm told the cluster is
in good health.  In production we will have 3 nodes.

I've tried both with and without Keystone, but something is clearly wrong in
my setup of the Rados Gateway.  I tried following the guide as best I
understood it, but it seems I still can't get it to work.

In my ceph.conf, I have the following:
> [client.radosgw.gateway]
> host = bnedevmn0
> keyring = /etc/ceph/keyring.radosgw.gateway
> rgw socket path = /tmp/radosgw.sock
> log file = /var/log/ceph/radosgw.log
> #rgw keystone url = http://bnedevcm.vrt.int:35357
> #rgw keystone admin token = YFmhzg0Kt5RHvDGx
> #rgw keystone accepted roles = admin, Member, swiftoperator
> #rgw keystone token cache size = 500
> #rgw keystone revocation interval = 600
> rgw enable usage log = true
> nss db path = /var/lib/ceph/nss
> rgw print continue = false

Shown above are also the parameters I've tried for Keystone, commented
out here to take Keystone out of the equation.
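
For reference, the Apache side is wired up roughly as per the guide (a
sketch from memory; the s3gw.fcgi name is just what my vhost uses):

> <VirtualHost *:80>
>     ServerName bnedevcm.vrt.int
>     DocumentRoot /var/www
>     # hand requests to the radosgw socket named in ceph.conf
>     FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>     RewriteEngine On
>     RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>     AllowEncodedSlashes On
>     ServerSignature Off
> </VirtualHost>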

If I try creating a user for Swift access... I can create the user itself:

> root@bnedevmn0:/var/lib/ceph/radosgw/ceph-radosgw.gateway# 
> radosgw-admin user create --uid=johndoe --display-name="John Doe" 
> --email=john@xxxxxxxxxxx
> 2013-05-27 11:40:09.330457 7f7fb5832700  0 -- :/1019040 >> 
> 10.87.168.252:6789/0 pipe(0x15d8400 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
> { "user_id": "johndoe",
>   "display_name": "John Doe",
>   "email": "john@xxxxxxxxxxx",
>   "suspended": 0,
>   "max_buckets": 1000,
>   "auid": 0,
>   "subusers": [],
>   "keys": [
>         { "user": "johndoe",
>           "access_key": "P4OJFBD7X99YZKEXVQV2",
>           "secret_key": "Qzn3PmL85wkQScoYJB1ksLF4DJi7owJNcy9Hvntv"}],
>   "swift_keys": [],
>   "caps": []}

So far so good; now I create the subuser:
> root@bnedevmn0:/var/lib/ceph/radosgw/ceph-radosgw.gateway# 
> radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift 
> --access=full
> 2013-05-27 11:40:59.134460 7f1e092e4700  0 -- :/1019338 >> 
> 10.87.168.252:6789/0 pipe(0x2b22400 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
> { "user_id": "johndoe",
>   "display_name": "John Doe",
>   "email": "john@xxxxxxxxxxx",
>   "suspended": 0,
>   "max_buckets": 1000,
>   "auid": 0,
>   "subusers": [
>         { "id": "johndoe:swift",
>           "permissions": "full-control"}],
>   "keys": [
>         { "user": "johndoe",
>           "access_key": "P4OJFBD7X99YZKEXVQV2",
>           "secret_key": "Qzn3PmL85wkQScoYJB1ksLF4DJi7owJNcy9Hvntv"}],
>   "swift_keys": [],
>   "caps": []}

Now I create a Swift key:
> root@bnedevmn0:/var/lib/ceph/radosgw/ceph-radosgw.gateway# 
> radosgw-admin key create --subuser=johndoe:swift --key-type=swift
> 2013-05-27 11:41:25.550454 7f9bfa11e700  0 -- :/1019498 >> 
> 10.87.168.252:6789/0 pipe(0x18b8400 sd=3 :0 s=1 pgs=0 cs=0 l=1).fault
> { "user_id": "johndoe",
>   "display_name": "John Doe",
>   "email": "john@xxxxxxxxxxx",
>   "suspended": 0,
>   "max_buckets": 1000,
>   "auid": 0,
>   "subusers": [
>         { "id": "johndoe:swift",
>           "permissions": "full-control"}],
>   "keys": [
>         { "user": "johndoe",
>           "access_key": "P4OJFBD7X99YZKEXVQV2",
>           "secret_key": "Qzn3PmL85wkQScoYJB1ksLF4DJi7owJNcy9Hvntv"}],
>   "swift_keys": [
>         { "user": "johndoe:swift",
>           "secret_key": ""}],
>   "caps": []}


...Ooookay, so the key is the empty string?

> root@bnedevmn0:~# swift -V 1.0 -A http://bnedevcm.vrt.int/auth/ -U 
> johndoe:swift -K '' list
> Auth version 1.0 requires ST_AUTH, ST_USER, and ST_KEY environment 
> variables to be set or overridden with -A, -U, or -K.
> 
> Auth version 2.0 requires OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, and 
> OS_TENANT_NAME or OS_TENANT_ID to be set or overridden with 
> --os-auth-url, --os-username, --os-password, --os-tenant-name, or 
> --os-tenant-id.

Nope.
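
For completeness, the same v1.0 auth can be driven by the environment
variables the error message mentions instead of switches (a sketch;
obviously the key needs to be non-empty first):

# export ST_AUTH=http://bnedevcm.vrt.int/auth/
# export ST_USER=johndoe:swift
# export ST_KEY=<the swift secret_key>
# swift list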

If I uncomment the Keystone-related lines in ceph.conf... I don't get very
far trying to connect to Swift either:

> root@bnedevmn0:/var/lib/ceph/radosgw/ceph-radosgw.gateway# swift 
> --verbose --debug --os-auth-url http://bnedevcm.vrt.int:35357/v2.0 
> --os-username username --os-password password --os-tenant-name 
> tenant-name list
> REQ: curl -i http://bnedevcm.vrt.int:35357/v2.0/tokens -X POST -H 
> "Content-Type: application/json" -H "User-Agent: python-keystoneclient"
> REQ BODY: {"auth": {"tenantName": "tenant-name", 
> "passwordCredentials": {"username": "username", "password": 
> "password"}}}
> 
> INFO:urllib3.connectionpool:Starting new HTTP connection (1): bnedevcm.vrt.int
> DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 6230
> RESP: [200] {'date': 'Mon, 27 May 2013 01:47:27 GMT', 'content-type': 
> 'application/json', 'content-length': '6230', 'vary': 'X-Auth-Token'}
> RESP BODY: {"access": {"token": {"issued_at": "2013-05-27T01:47:27.620430", 
> "expires": "2013-05-28T01:47:27Z", "id": "MIIK...p6bo", "tenant": 
> {"description": "VRT Network", "enabled": true, "id": 
> "4fa251b2f2d24e72a69ba472ec77f06f", "name": "tenant-name"}}, 
> "serviceCatalog": [{"endpoints": [{"adminURL": 
> "http://bnedevcm.vrt.int:8774/v2/4fa251b2f2d24e72a69ba472ec77f06f", 
> "region": "bne-dev", "internalURL": 
> "http://bnedevcm.vrt.int:8774/v2/4fa251b2f2d24e72a69ba472ec77f06f", 
> "id": "75ee4381419a42d296d70577433a6d5b", "publicURL": 
> "http://bnedevcm.vrt.int:8774/v2/4fa251b2f2d24e72a69ba472ec77f06f"}], 
> "endpoints_links": [], "type": "compute", "name": "nova"}, 
> {"endpoints": [{"adminURL": "http://bnedevcm.vrt.int:9292", "region": 
> "bne-dev", "internalURL": "http://bnedevcm.vrt.int:9292", "id": 
> "01c3a6c7f9a442c598cdde1891f94546", "publicURL": 
> "http://bnedevcm.vrt.int:9292"}], "endpoints_links": [], "type": 
> "image", "name": "glance"}, {"endpoints": [{"adminURL": 
> "http://bnedevcm.vrt.int:8776/v1/4fa251b2f2d24e72a69ba472ec77f06f", 
> "region": "bne-dev", "internalURL": 
> "http://bnedevcm.vrt.int:8776/v1/4fa251b2f2d24e72a69ba472ec77f06f", 
> "id": "58c34b6e16554447aa880277f0365978", "publicURL": 
> "http://bnedevcm.vrt.int:8776/v1/4fa251b2f2d24e72a69ba472ec77f06f"}], 
> "endpoints_links": [], "type": "volume", "name": "volume"}, 
> {"endpoints": [{"adminURL": "http://bnedevcm.vrt.int:8773/services/Admin", 
> "region": "bne-dev", "internalURL": 
> "http://bnedevcm.vrt.int:8773/services/Cloud", "id": 
> "31041d504c5143d6ac7103645f614229", "publicURL": 
> "http://bnedevcm.vrt.int:8773/services/Cloud"}], "endpoints_links": [], 
> "type": "ec2", "name": "ec2"}, {"endpoints": [{"adminURL": 
> "http://bnedevcm.vrt.int/swift/v1", "region": "bne-dev", "internalURL": 
> "http://bnedevcm.vrt.int/swift/v1", "id": 
> "00d58c396d01479b887bb401298496aa", "publicURL": 
> "http://bnedevcm.vrt.int/swift/v1"}], "endpoints_links": [], "type": 
> "object-store", "name": "swift"}, {"endpoints": [{"adminURL": 
> "http://bnedevcm.vrt.int:35357/v2.0", "region": "bne-dev", 
> "internalURL": "http://bnedevcm.vrt.int:5000/v2.0", "id": 
> "087c1afebbf04bd19a644ffd5d6088d0", "publicURL": 
> "http://bnedevcm.vrt.int:5000/v2.0"}], "endpoints_links": [], "type": 
> "identity", "name": "keystone"}], "user": {"username": "username", 
> "roles_links": [], "id": "dc13dd9d013446b49ee3e564b990f5ec", "roles": 
> [{"name": "admin"}, {"name": "_member_"}], "name": "username"}, 
> "metadata": {"is_admin": 0, "roles": ["4c936ec7ec75426b98ef15cd6fd43eaf", 
> "9fe2ff9ee4384b1894a90878d3e92bab"]}}}
> 
> 
> DEBUG:swiftclient:RESP STATUS: 500
> 
> DEBUG:swiftclient:RESP BODY: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
> <html><head>
> <title>500 Internal Server Error</title>
> </head><body>
> <h1>Internal Server Error</h1>
> <p>The server encountered an internal error or misconfiguration and 
> was unable to complete your request.</p>
> <p>Please contact the server administrator, {email.address} and inform 
> them of the time the error occurred, and anything you might have done 
> that may have caused the error.</p>
> <p>More information about this error may be available in the server 
> error log.</p>
> </body></html>

In the logs, I see the following a lot:
> 2013-05-27 11:39:14.731355 7f78d7123780  0 ceph version 0.61.2 
> (fea782543a844bb277ae94d3391788b76c5bee60), process radosgw, pid 18717
> 2013-05-27 11:39:14.739316 7f78d711f700  0 -- :/1018719 >> 
> 10.87.168.253:6789/0 pipe(0x1e3b400 sd=11 :0 s=1 pgs=0 cs=0 l=1).fault
> 2013-05-27 11:39:15.739714 7f79e601b700  0 -- 10.87.168.254:0/1018695 >> 
> 10.87.168.253:6789/0 pipe(0x7f79d4001d60 sd=11 :0 s=1 pgs=0 cs=0 l=1).fault
> 2013-05-27 11:39:17.742484 7f78c9eef700  2 garbage collection: start
> 2013-05-27 11:39:17.744752 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.3
> 2013-05-27 11:39:17.746848 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.4
> 2013-05-27 11:39:17.747902 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.5
> 2013-05-27 11:39:17.749974 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.6
> 2013-05-27 11:39:17.751092 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.7
> 2013-05-27 11:39:17.752034 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.8
> 2013-05-27 11:39:17.753101 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.9
> 2013-05-27 11:39:17.754179 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.10
> 2013-05-27 11:39:17.755119 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.11
> 2013-05-27 11:39:17.756389 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.12
> 2013-05-27 11:39:17.757334 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.13
> 2013-05-27 11:39:17.758525 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.14
> 2013-05-27 11:39:17.759683 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.15
> 2013-05-27 11:39:17.760809 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.16
> 2013-05-27 11:39:17.761988 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.17
> 2013-05-27 11:39:17.763021 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.18
> 2013-05-27 11:39:17.764159 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.19
> 2013-05-27 11:39:17.765249 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.20
> 2013-05-27 11:39:17.766123 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.21
> 2013-05-27 11:39:17.767190 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.22
> 2013-05-27 11:39:17.768283 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.23
> 2013-05-27 11:39:17.769322 7f78c9eef700  0 RGWGC::process() failed to 
> acquire lock on gc.24

and sometimes this too:
> 2013-05-27 11:39:44.829626 7f6a198f0700  0 RGWGC::process() failed to 
> acquire lock on gc.21
> 2013-05-27 11:39:44.830784 7f6a198f0700  0 RGWGC::process() failed to 
> acquire lock on gc.22
> 2013-05-27 11:39:44.831758 7f6a198f0700 -1 *** Caught signal 
> (Segmentation fault) ** in thread 7f6a198f0700
> 
>  ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60)
>  1: /usr/bin/radosgw() [0x4f19da]
>  2: (()+0xfcb0) [0x7f6a24db9cb0]
>  3: (()+0x1499d6) [0x7f6a238d59d6]
>  4: (std::string::_Rep::_M_clone(std::allocator<char> const&, unsigned 
> long)+0x72) [0x7f6a240fc842]
>  5: (std::basic_string<char, std::char_traits<char>, 
> std::allocator<char> >::basic_string(std::string const&)+0x3c) 
> [0x7f6a240fcf5c]
>  6: (RGWGC::process(int, int)+0x2c) [0x60106c]
>  7: (RGWGC::process()+0x5e) [0x601f3e]
>  8: (RGWGC::GCWorker::entry()+0x213) [0x602173]
>  9: (()+0x7e9a) [0x7f6a24db1e9a]
>  10: (clone()+0x6d) [0x7f6a2387fccd]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is 
> needed to interpret this.
> 
> --- begin dump of recent events ---
>   -172> 2013-05-27 11:39:44.788653 7f6a26b24780  5 asok(0x2442660) register_command perfcounters_dump hook 0x2443a50
>   -171> 2013-05-27 11:39:44.788684 7f6a26b24780  5 asok(0x2442660) register_command 1 hook 0x2443a50
>   -170> 2013-05-27 11:39:44.788691 7f6a26b24780  5 asok(0x2442660) register_command perf dump hook 0x2443a50
>   -169> 2013-05-27 11:39:44.788723 7f6a26b24780  5 asok(0x2442660) register_command perfcounters_schema hook 0x2443a50
>   -168> 2013-05-27 11:39:44.788729 7f6a26b24780  5 asok(0x2442660) register_command 2 hook 0x2443a50
>   -167> 2013-05-27 11:39:44.788733 7f6a26b24780  5 asok(0x2442660) register_command perf schema hook 0x2443a50
>   -166> 2013-05-27 11:39:44.788740 7f6a26b24780  5 asok(0x2442660) register_command config show hook 0x2443a50
>   -165> 2013-05-27 11:39:44.788745 7f6a26b24780  5 asok(0x2442660) register_command config set hook 0x2443a50
>   -164> 2013-05-27 11:39:44.788750 7f6a26b24780  5 asok(0x2442660) register_command log flush hook 0x2443a50
>   -163> 2013-05-27 11:39:44.788754 7f6a26b24780  5 asok(0x2442660) register_command log dump hook 0x2443a50
>   -162> 2013-05-27 11:39:44.788758 7f6a26b24780  5 asok(0x2442660) register_command log reopen hook 0x2443a50
>   -161> 2013-05-27 11:39:44.791129 7f6a26b24780  0 ceph version 0.61.2 (fea782543a844bb277ae94d3391788b76c5bee60), process radosgw, pid 18902
>   -160> 2013-05-27 11:39:44.791833 7f6a26b24780  1 finished global_init_daemonize
>   -159> 2013-05-27 11:39:44.797687 7f6a26b24780 10 monclient(hunting): build_initial_monmap
>   -158> 2013-05-27 11:39:44.797843 7f6a26b24780  1 librados: starting msgr at :/0
>   -157> 2013-05-27 11:39:44.797858 7f6a26b24780  1 librados: starting objecter
>   -156> 2013-05-27 11:39:44.797905 7f6a26b24780  1 -- :/0 messenger.start
>   -155> 2013-05-27 11:39:44.797937 7f6a26b24780  1 librados: setting wanted keys
>   -154> 2013-05-27 11:39:44.797943 7f6a26b24780  1 librados: calling monclient init
>   -153> 2013-05-27 11:39:44.797948 7f6a26b24780 10 monclient(hunting): init
>   -152> 2013-05-27 11:39:44.797966 7f6a26b24780  5 adding auth protocol: cephx
>   -151> 2013-05-27 11:39:44.797975 7f6a26b24780 10 monclient(hunting): auth_supported 2 method cephx
>   -150> 2013-05-27 11:39:44.798318 7f6a26b24780  2 auth: KeyRing::load: loaded key file /etc/ceph/keyring.radosgw.gateway
>   -149> 2013-05-27 11:39:44.798391 7f6a26b24780 10 monclient(hunting): _reopen_session
>   -148> 2013-05-27 11:39:44.798439 7f6a26b24780 10 monclient(hunting): _pick_new_mon picked mon.noname-a con 0x249d660 addr 10.87.168.254:6789/0
>   -147> 2013-05-27 11:39:44.798467 7f6a26b24780 10 monclient(hunting): _send_mon_message to mon.noname-a at 10.87.168.254:6789/0

The following packages are installed:
> root@bnedevmn0:~# dpkg -l radosgw* ceph*
> Desired=Unknown/Install/Remove/Purge/Hold
> | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
> |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
> ||/ Name               Version          Description
> +++-==================-================-===========================================
> ii  ceph               0.61.2-1precise  distributed storage and file system
> un  ceph-client-tools  <none>           (no description available)
> ii  ceph-common        0.61.2-1precise  common utilities to mount and interact with a ceph storage cluster
> ii  ceph-fs-common     0.61.2-1precise  common utilities to mount and interact with a ceph file system
> un  ceph-fuse          <none>           (no description available)
> ii  ceph-mds           0.61.2-1precise  metadata server for the ceph distributed file system
> ii  radosgw            0.61.2-1precise  REST gateway for RADOS distributed object store
> ii  radosgw-dbg        0.61.2-1precise  debugging symbols for radosgw

Is there something I missed in setting this up?
-- 
##   -,-''''-. ###### Stuart Longland, Contractor
##.  :  ##   :   ##   38b Douglas Street
 ## #  ## -'`   .#'   Milton, QLD, 4064
 '#'  *'   '-.  *'    http://www.vrt.com.au
     S Y S T E M S    T: +61 7 3535 9619   F: +61 7 3535 9699

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
