Re: RBD boot from volume in OpenStack with Ceph

Hi,

Your steps are wrong: you only created a volume to attach as an RBD
device, not one to boot from. There is nothing to boot from, since the
volume is empty.

First you need to create and register a new raw image in Glance, like so:

# glance image-create --name centos6_min_raw --disk-format raw \
  --container-format ovf --file /var/images/centos6_min_raw \
  --is-public True

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | b23670a77cdb3c7d76cc30a40709d1a7     |
| container_format | ovf                                  |
| created_at       | 2012-10-22T21:55:30                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | 6f0ba2c7-0c72-4d1a-b35c-6e833ebbadaa |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | centos6_min_raw                      |
| owner            | 0eec5c34a7a24a7a8ddad27cb81d2706     |
| protected        | False                                |
| size             | 10737418240                          |
| status           | active                               |
| updated_at       | 2012-10-22T22:01:23                  |
+------------------+--------------------------------------+
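
If the image you have is in qcow2 format, you would convert it to raw
first, since the volume ends up as a raw block device anyway. A rough
sketch, assuming a hypothetical source file:

# qemu-img convert -f qcow2 -O raw /var/images/centos6_min.qcow2 /var/images/centos6_min_raw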

Then you need to create a volume from this image:

# cinder create --image-id 6f0ba2c7-0c72-4d1a-b35c-6e833ebbadaa \
  --display-name boot-from-rbd 15

Check what happens:

# cinder list
+--------------------------------------+-------------+---------------+------+-------------+-------------+
|                  ID                  |    Status   |  Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-------------+---------------+------+-------------+-------------+
| 50ba4f6a-a8c5-4f9a-9beb-78270cf7bb93 | downloading | boot-from-rbd |  15  |     None    |             |
+--------------------------------------+-------------+---------------+------+-------------+-------------+

After a couple of seconds you should have:

# cinder list
+--------------------------------------+-----------+---------------+------+-------------+-------------+
|                  ID                  |   Status  |  Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+-------------+
| 61a92552-f7f8-4bf2-bc4f-44a8f81c9a53 | available | boot-from-rbd |  15  |     None    |             |
+--------------------------------------+-----------+---------------+------+-------------+-------------+
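
At this point the volume should also be visible on the Ceph side. A
quick check, assuming the Cinder pool is called "rbd" as in the example
above, would look something like:

# rbd -p rbd ls
volume-61a92552-f7f8-4bf2-bc4f-44a8f81c9a53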

Then you're ready to boot from it :), roughly with a command like the
one sketched below. This is the gist of what you need to do; for more
details, consult the Ceph documentation:
http://ceph.com/docs/master/rbd/rbd-openstack/
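
As a sketch, reusing the block_device_mapping syntax from your original
command with the new volume ID from the cinder list output above (the
flavor, image and security group are just examples):

# nova boot --flavor m1.tiny --image centos6_min_raw \
  --block_device_mapping vda=61a92552-f7f8-4bf2-bc4f-44a8f81c9a53:::0 \
  --security_groups=default boot-from-rbd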

--
Regards,
Sébastien Han.


On Thu, Feb 14, 2013 at 3:00 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> (Bringing it back to the list)
>
> On 02/14/2013 02:54 PM, Khdher Omar wrote:
>>
>> Hi,
>>
>> Thanks for the response. In fact, I am running tests without cephx or
>> any other authentication mechanism.
>> I am looking for a proof of concept with a basic boot from volume.
>>
>> Is cephx required for this to work?
>>
>
> No, cephx is not required; it was just something I noticed.
>
> Looking at the XML a bit further, I don't see a monitor host defined; there
> should be something like:
>
> <source protocol='rbd' name='rbd/3b987fba-5153-4e51-b346-cfe20ecd9d27'>
>   <host name='ip.of.monitor.host' port='6789'/>
> </source>
>
> But I think that librbd/librados in this case reads the /etc/ceph/ceph.conf
> instead of getting all the arguments through libvirt.
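> If so, a minimal /etc/ceph/ceph.conf on the compute node would contain
> at least the monitor address, roughly like this (the address is just a
> placeholder):
>
> [global]
>     mon host = ip.of.monitor.host:6789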
>
> On the compute node, do things like these work:
> $ rbd ls
> $ rbd info <image name>
>
> Wido
>
>> Omar
>>
>> ------------------------------------------------------------------------
>> *From:* Wido den Hollander <wido@xxxxxxxx>
>> *To:* ceph-users@xxxxxxxxxxxxxx
>> *Sent:* Thursday, February 14, 2013, 2:45 PM
>> *Subject:* Re:  RBD boot from volume in OpenStack with Ceph
>>
>>
>> Hi,
>>
>> On 02/14/2013 12:44 PM, Khdher Omar wrote:
>>  > The issue is about OpenStack with Ceph. I was following the
>>  > documentation on the Ceph website on how to integrate Ceph with an
>>  > existing OpenStack environment.
>>  > I used Folsom with two nodes. I configured Ceph with Cinder and
>>  > Glance and could make RBD the storage backend for Glance. I was still
>>  > looking to boot from volumes.
>>  > I proceeded afterwards by:
>>  > 1. create a new rbd pool on ceph;
>>  > 2. create a new volume [cinder create volume 2, for example];
>>  > 3. Check that the new volume was added to the Ceph pool created --> success
>>  > 4. Finally, boot from volume:
>>  > nova boot --flavor m1.tiny --image precise-ceph --block_device_mapping
>>  > vda=ID_VOL:::0 --security_groups=default boot-from-rbd
>>  > The last step failed: the instance was created but with an Error status.
>>  > We could see an interesting section inside
>>  > /var/lib/nova/instance/instance_id/libvirt.xml:
>>  > <disk type="network" device="disk">
>>  >        <driver name="qemu" type="raw" cache="none"/>
>>  >        <source protocol="rbd"
>>  > name="rbd/volume-e5d5f756-7a9d-47b1-a59d-504c9cf582d5"/>
>>  >        <target bus="virtio" dev="vda"/>
>>  >      </disk>
>>
>> I'm not so familiar with OpenStack, but looking at the XML from libvirt
>> I'm missing a "secret".
>>
>> Are you using cephx? If so, did you define a libvirt secret and
>> configure the UUID in Nova?
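>> If you are, a rough sketch of what that involves (the client name
>> "volumes" and the UUID are just placeholders here):
>>
>> $ cat > secret.xml <<EOF
>> <secret ephemeral='no' private='no'>
>>   <usage type='ceph'>
>>     <name>client.volumes secret</name>
>>   </usage>
>> </secret>
>> EOF
>> $ virsh secret-define --file secret.xml
>> $ virsh secret-set-value --secret <uuid-printed-by-secret-define> \
>>     --base64 $(ceph auth get-key client.volumes)
>>
>> # then in cinder.conf:
>> rbd_user=volumes
>> rbd_secret_uuid=<uuid-printed-by-secret-define>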
>>
>> Wido
>>
>>  > Basically it seems that OpenStack recognized the RBD volume and uses
>>  > the virtio driver for it.
>>  > We could see the following error line in the 'nova-compute.log' log file:
>>  > self._set_instance_error_state(context, instance['uuid'])
>>  > TRACE nova.openstack.common.rpc.amqp libvirtError: internal error
>>  > Process exited while reading console log output: char device redirected
>>  > to /dev/pts/6
>>  > TRACE nova.openstack.common.rpc.amqp kvm: -drive
>>  > file=rbd:rbd/volume-e5d5f756-7a9d-47b1-a59d-504c9cf582d5,if=none,id=drive-virtio-disk1,format=raw,cache=none:
>>  > error connecting
>>  > It seems that the 'if' field shows none.
>>  > Is there any explanation for why KVM can't read that volume?
>>  > Thanks for any response!
>>  >
>>  >
>>
>>
>> --
>> Wido den Hollander
>> 42on B.V.
>>
>> Phone: +31 (0)20 700 9902
>> Skype: contact42on
>>
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


