On 9/10/2013 4:50 PM, Darren Birkett wrote:
Hi Mike,
That led me to realise what the issue was. My cinder (volumes) client
did not have the correct perms on the images pool. I ran the following
to update the perms for that client:
ceph auth caps client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
...and was then able to successfully boot an instance from a cinder
volume that was created by cloning a glance image from the images pool!
Glad you found it. This has been a sticking point for several people.
One last question: I presume the fact that the 'volume_image_metadata'
field is not populated when cloning a glance image into a cinder volume
is a bug? It means that the cinder client doesn't show the volume as
bootable, though I'm not sure what other detrimental effect it actually
has (clearly the volume can be booted from).
I think you are talking about data in the cinder table of your database
backend (mysql?). I don't have 'volume_image_metadata' at all here. I
don't think this is the issue.
To create a Cinder volume from Glance, I do something like:
cinder --os_tenant_name MyTenantName create --image-id 00e0042e-d007-400a-918a-d5e00cea8b0f --display-name MyVolumeName 40
I can then spin up an instance backed by MyVolumeName and boot as expected.
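To boot from it, something like this works here (Grizzly-era nova client; the exact block-device syntax varies by release, so treat it as a sketch):

nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 MyInstanceName

where <volume-id> is the id cinder returns for MyVolumeName.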
Thanks
Darren
On 10 September 2013 21:04, Darren Birkett <darren.birkett@xxxxxxxxx> wrote:
Hi Mike,
Thanks - glad to hear it definitely works as expected! Here's my
client.glance and client.volumes from 'ceph auth list':
client.glance
key: AQAWFi9SOKzAABAAPV1ZrpWkx72tmJ5E7nOi3A==
caps: [mon] allow r
caps: [osd] allow rwx pool=images, allow class-read object_prefix rbd_children
client.volumes
key: AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes
Thanks
Darren
On 10 September 2013 20:08, Mike Dawson <mike.dawson@xxxxxxxxxxxx> wrote:
Darren,
I can confirm Copy on Write (show_image_direct_url = True) does
work in Grizzly.
It sounds like you are close. To check permissions, run 'ceph auth list', and reply with "client.images" and "client.volumes" (or whatever keys you use in Glance and Cinder).
Cheers,
Mike Dawson
On 9/10/2013 10:12 AM, Darren Birkett wrote:
Hi All,
tl;dr - does glance/rbd and cinder/rbd play together nicely in grizzly?
I'm currently testing a ceph/rados back end with an openstack installation. I have the following things working OK (config sketch below):
1. cinder configured to create volumes in RBD
2. nova configured to boot from RBD backed cinder volumes (libvirt UUID secret set etc)
3. glance configured to use RBD as a back end store for images
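For reference, the relevant bits of config look roughly like this (a minimal sketch; the driver path and option names moved around between releases, and the pool/user names are just the ones used in this thread):

# cinder.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <the libvirt secret uuid>

# glance-api.conf
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images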
With this setup, when I create a bootable volume in cinder, passing an id of an image in glance, the image gets downloaded, converted to raw, and then created as an RBD object and made available to cinder. The correct metadata field for the cinder volume is populated (volume_image_metadata) and so the cinder client marks the volume as bootable. This is all fine.
If I want to take advantage of the fact that both glance images and cinder volumes are stored in RBD, I can add the following flag to the glance-api.conf:
show_image_direct_url = True
This enables cinder to see that the glance image is stored in RBD, and the cinder rbd driver then, instead of downloading the image and creating an RBD image from it, just issues an 'rbd clone' command (seen in the cinder-volume.log):
rbd clone --pool images --image dcb2f16d-a09d-4064-9198-1965274e214d --snap snap --dest-pool volumes --dest volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d
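As an aside, the clone relationship can be confirmed with the rbd CLI (names taken from the log line above):

rbd info volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d

which prints a 'parent:' line pointing back at the glance image snapshot in the images pool.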
This is all very nice, and the cinder volume is available immediately as you'd expect. The problem is that the metadata field is not populated so it's not seen as bootable. Even manually populating this field leaves the volume unbootable. The volume cannot even be attached to another instance for inspection.
libvirt doesn't seem to be able to access the rbd device. From nova-compute.log:
qemu-system-x86_64: -drive file=rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes:key=AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=20987f9d-b4fb-463d-8b8f-fa667bd47c6d,cache=none: error reading header from volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d
qemu-system-x86_64: -drive file=rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes:key=AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=20987f9d-b4fb-463d-8b8f-fa667bd47c6d,cache=none: could not open disk image rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes:key=AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==:auth_supported=cephx\;none: Operation not permitted
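One way to reproduce this outside of nova/libvirt is to point qemu-img at the same rbd URI (same id as the log above; a sketch that assumes the client.volumes keyring is findable via ceph.conf):

qemu-img info rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes

If that also fails with 'Operation not permitted', the problem is in the ceph caps rather than in libvirt.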
It's almost like a permission issue, but my ceph/rbd knowledge is still fledgling.
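A quick way to test whether the cinder client can read the images pool at all (assuming its keyring is in the usual /etc/ceph location):

rbd ls -p images --id volumes

A permission error here would point at the caps on client.volumes.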
I know that the cinder rbd driver has been rewritten to use librbd in havana, and I'm wondering if this will change any of this behaviour?
I'm also wondering if anyone has actually got this working with grizzly, and how?
Many thanks
Darren
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com