Re: status of glance/cinder/nova integration in openstack grizzly

Hi Mike,

Thanks - glad to hear it definitely works as expected!  Here's my client.glance and client.volumes from 'ceph auth list':

client.glance
key: AQAWFi9SOKzAABAAPV1ZrpWkx72tmJ5E7nOi3A==
caps: [mon] allow r
caps: [osd] allow rwx pool=images, allow class-read object_prefix rbd_children
client.volumes
key: AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes
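
In case it matters: caps like these can be adjusted in place with 'ceph
auth caps'.  One thing I notice is that client.volumes has no access to
the images pool; if a copy-on-write clone needs its parent image to be
readable, something like the extra 'allow rx pool=images' cap below (my
guess, not something I've verified) might be needed:

ceph auth caps client.volumes mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'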

Thanks
Darren


On 10 September 2013 20:08, Mike Dawson <mike.dawson@xxxxxxxxxxxx> wrote:
Darren,

I can confirm Copy on Write (show_image_direct_url = True) does work in Grizzly.

It sounds like you are close. To check permissions, run 'ceph auth list', and reply with "client.images" and "client.volumes" (or whatever keys you use in Glance and Cinder).

Cheers,
Mike Dawson



On 9/10/2013 10:12 AM, Darren Birkett wrote:
Hi All,

tl;dr - do glance/rbd and cinder/rbd play together nicely in grizzly?

I'm currently testing a ceph/rados back end with an openstack
installation.  I have the following things working OK (rough config
sketch below the list):

1. cinder configured to create volumes in RBD
2. nova configured to boot from RBD backed cinder volumes (libvirt UUID
secret set etc)
3. glance configured to use RBD as a back end store for images
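
For reference, the relevant pieces of config look roughly like this (a
sketch from memory; the option names are the Grizzly ones, and the
pool/user names match my setup):

cinder.conf:

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <UUID of the libvirt secret holding the client.volumes key>

glance-api.conf:

default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images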

With this setup, when I create a bootable volume in cinder, passing the
id of a glance image, the image gets downloaded, converted to raw,
created as an RBD image, and made available to cinder.  The
correct metadata field for the cinder volume is populated
(volume_image_metadata) and so the cinder client marks the volume as
bootable.  This is all fine.

If I want to take advantage of the fact that both glance images and
cinder volumes are stored in RBD, I can add the following flag to the
glance-api.conf:

show_image_direct_url = True

This enables cinder to see that the glance image is stored in RBD, and
the cinder rbd driver then, instead of downloading the image and
creating an RBD image from it, just issues an 'rbd clone' command (seen
in the cinder-volume.log):

rbd clone --pool images --image dcb2f16d-a09d-4064-9198-1965274e214d
--snap snap --dest-pool volumes --dest
volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d
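
That the clone really happened can be confirmed with 'rbd info' (volume
ID taken from the command above); for a clone, the output includes a
parent line such as 'parent: images/dcb2f16d-a09d-4064-9198-1965274e214d@snap':

rbd info volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d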

This is all very nice, and the cinder volume is available immediately as
you'd expect.  The problem is that the metadata field is not populated
so it's not seen as bootable.  Even manually populating this field
leaves the volume unbootable.  The volume cannot even be attached to
another instance for inspection.
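
For what it's worth, I'm checking the field with the cinder client,
using the volume ID from the clone above; volume_image_metadata shows up
as a row in the output when it's set:

cinder show 20987f9d-b4fb-463d-8b8f-fa667bd47c6d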

libvirt doesn't seem to be able to access the rbd device. From
nova-compute.log:

qemu-system-x86_64: -drive
file=rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes:key=AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=20987f9d-b4fb-463d-8b8f-fa667bd47c6d,cache=none:
error reading header from volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d

qemu-system-x86_64: -drive
file=rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes:key=AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,serial=20987f9d-b4fb-463d-8b8f-fa667bd47c6d,cache=none:
could not open disk image
rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes:key=AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==:auth_supported=cephx\;none:
Operation not permitted

It's almost like a permission issue, but my ceph/rbd knowledge is still
fledgling.
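
One way to test that theory outside of libvirt would be to open the
volume directly with the same credentials (assuming the client.volumes
keyring is in the default /etc/ceph location):

rbd --id volumes info volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d
qemu-img info rbd:volumes/volume-20987f9d-b4fb-463d-8b8f-fa667bd47c6d:id=volumes

If client.volumes can't read the parent image over in the images pool, I
would expect opening the clone to fail with exactly this sort of
'Operation not permitted'.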

I know that the cinder rbd driver has been rewritten to use librbd in
havana, and I'm wondering whether that will change any of this
behaviour.  I'm also wondering whether anyone has actually got this
working with grizzly, and how?

Many thanks
Darren



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
