That was it! Thank you so much for your help, Marko! What a silly thing for me to miss! <3 Trilliams
Sent from my iPhone

Sorry, rbd_user = volumes, not client.volumes.
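In other words, cinder.conf's rbd_user takes the cephx name without the "client." prefix, so the client.volumes user is referenced as just volumes. A quick way to double-check both sides (a sketch, using the names from this thread):

# ceph auth get client.volumes
# grep ^rbd_user /etc/cinder/cinder.conf
rbd_user = volumes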
---- On Mon, 19 Jun 2017 21:09:38 -0400 marko@xxxxxxxxxxxxxx wrote ----
Hi Nichole,
Yeah, your setup looks ok, so the only thing here could be an auth issue. I went through the config again and I see you have set up the client.volumes ceph user with rwx permissions on the volumes pool.
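For reference, a user with those caps is typically created along these lines (a sketch; the exact caps and keyring path may differ in your setup):

# ceph auth get-or-create client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes' -o /etc/ceph/ceph.client.volumes.keyring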
In your cinder.conf the setup is:
rbd_user = cinder
Unless a cinder ceph user also exists, this is probably set incorrectly, and I would say you need to change that setting to:
rbd_user = client.volumes
Regards, Marko

---- On Mon, 19 Jun 2017 20:50:47 -0400 tribecca@xxxxxxxxxx wrote ----

Hi Marko!
Here’s my details:
OpenStack Newton deployed with PackStack [controller + network node]
Ceph Kraken 3-node setup deployed with ceph-ansible

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

# ceph --version
ceph version 11.2.0 (f223e27eeb35991352ebc1f67423d4ebc252adb7)

# rpm -qa | grep librados
libradosstriper1-11.2.0-0.el7.x86_64
librados2-11.2.0-0.el7.x86_64

# nova-manage --version
14.0.3
If it matters, both glance & nova connected without a hitch. It’s just cinder that’s causing a headache.
Hi Nichole,
Since your config is ok, I'm going to need more details: the OpenStack release, the hypervisor, and the Linux and librados versions.
You could also test whether you can mount a volume from your OS and/or hypervisor, and from the machine that runs the cinder-volume service, to start with.
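For example, from the cinder-volume host, assuming the client.volumes keyring is in /etc/ceph/, something along these lines should work (a sketch, not exact commands for your environment):

# rbd --id volumes --pool volumes create test-vol --size 128
# rbd --id volumes --pool volumes ls
# rbd --id volumes --pool volumes rm test-vol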
Regards, Marko

---- On Mon, 19 Jun 2017 17:59:48 -0400 tribecca@xxxxxxxxxx wrote ----

Hi Marko,
Here’s my ceph config:
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = c80d6505-260c-48c1-a248-7144cd5d5aab
filter_function = "volume.size >= 2"
Setting logging to “debug” doesn’t seem to produce any new information. Here’s a snippet of /var/log/cinder/volume.log:

2017-06-19 16:54:45.056 9797 INFO cinder.volume.manager [req-e556d559-c484-4edf-a458-5afbafcb8e39 - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.2.0)
2017-06-19 16:54:45.056 9797 ERROR cinder.utils [req-e556d559-c484-4edf-a458-5afbafcb8e39 - - - - -] Volume driver RBDDriver not initialized
2017-06-19 16:54:45.057 9797 ERROR cinder.volume.manager [req-e556d559-c484-4edf-a458-5afbafcb8e39 - - - - -] Cannot complete RPC initialization because driver isn't initialized properly.
2017-06-19 16:54:55.063 9797 ERROR cinder.service [-] Manager for service cinder-volume controller.trilliams.info@ceph is reporting problems, not sending heartbeat. Service will appear "down".
2017-06-19 16:56:34.065 9797 WARNING cinder.volume.manager [req-1309a49a-d5c9-45dd-b277-36cb4ac09dd8 - - - - -] Update driver status failed: (config name ceph) is uninitialized.
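Since the log doesn’t show the underlying librados error, the same failure can be reproduced outside cinder. With rbd_user = cinder, the driver authenticates as client.cinder; if that user doesn’t exist, the check below should fail the same way (a sketch, assuming the user names discussed in this thread):

# ceph --id cinder -s
(fails with an authentication error if there is no client.cinder user)
# ceph --id volumes -s
(succeeds once the correct user is used)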
I’ve added the entirety of /etc/cinder/cinder.conf to my gist, & thank you all for any help you can provide.
Hi Nichole,
I can help; I have been working on my own OpenStack connected to Ceph. Can you send over the config in your /etc/cinder/cinder.conf file, especially the RBD-relevant section starting with:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
Also, make sure your rbd_secret_uuid matches the client volume secret you created.
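For reference, that secret is typically defined in libvirt along these lines (a sketch using the UUID from the cinder.conf above; the file name is illustrative):

# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>c80d6505-260c-48c1-a248-7144cd5d5aab</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret c80d6505-260c-48c1-a248-7144cd5d5aab --base64 $(ceph auth get-key client.volumes)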
Regards,
Marko Sluga
Independent Trainer
T: +1 (647) 546-4365
L + M Consulting Inc.
Ste 212, 2121 Lake Shore Blvd W
M8E 4E9, Etobicoke, ON