Re: create volume from an image

Thanks Josh, the problem was solved by updating Ceph on the Glance node.

Sent from my iPhone

On 2013-3-20, at 14:59, "Josh Durgin" <josh.durgin@xxxxxxxxxxx> wrote:

> On 03/19/2013 11:03 PM, Chen, Xiaoxi wrote:
>> I think Josh may be the right man for this question ☺
>> 
>> To be more precise, I would like to add a few more words about the current status:
>> 
>> 1. We have configured "show_image_direct_url = True" in Glance, and from the cinder-volume log we can confirm that we have got a direct_url, for example:
>> image_id 6565d775-553b-41b6-9d5e-ddb825677706
>> image_location rbd://6565d775-553b-41b6-9d5e-ddb825677706
>> 2. In the _is_cloneable function, it tries to _parse_location the direct_url (rbd://6565d775-553b-41b6-9d5e-ddb825677706) into 4 parts: fsid, pool, volume, snapshot. Since the direct_url passed from Glance doesn't provide the fsid, pool and snapshot info, the parse fails and _is_cloneable returns false, which ultimately makes the request fall through to RBDDriver::copy_image_to_volume.
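>> To illustrate what that parser expects (just a rough sketch in Python, not the actual cinder driver code; parse_rbd_location is a made-up name):
>> 
>> def parse_rbd_location(url):
>>     prefix = 'rbd://'
>>     if not url.startswith(prefix):
>>         raise ValueError('not an rbd location: %s' % url)
>>     pieces = url[len(prefix):].split('/')
>>     if len(pieces) != 4:
>>         # e.g. rbd://6565d775-... carries only the image id, so there is
>>         # no fsid/pool/snapshot to clone from and the driver falls back
>>         # to a full copy via copy_image_to_volume
>>         raise ValueError('expected rbd://<fsid>/<pool>/<image>/<snapshot>')
>>     fsid, pool, image, snapshot = pieces
>>     return fsid, pool, image, snapshot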
> 
> This is working as expected - cloning was introduced in format 2 rbd
> volumes, available in bobtail but not argonaut. When the image is
> uploaded to glance, it is created as format 2 and a snapshot of it is
> taken and protected from deletion if the installed version of librbd
> (via the python-ceph package) supports it.
> 
> The location reported will be just the image id for format 1 images.
> For format 2 images, it has 4 parts, as you noted. You may need to
> update python-ceph and librbd1 on the node running glance-api
> and re-upload the image so it will be created as format 2, rather
> than the current image which is format 1, and thus cannot be cloned.
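> 
> As a quick way to double check from the glance node (a sketch using the
> python-rbd bindings; 'images' below is just a placeholder for whatever
> pool glance-api writes images to):
> 
> import rados
> import rbd
> 
> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> cluster.connect()
> ioctx = cluster.open_ioctx('images')
> image = rbd.Image(ioctx, '6565d775-553b-41b6-9d5e-ddb825677706')
> # old_format() returns True for format 1 images, which cannot be cloned
> print(image.old_format())
> image.close()
> ioctx.close()
> cluster.shutdown()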
> 
>> 3. In cinder/volume/driver.py, RBDDriver::copy_image_to_volume, we have seen this note:
>>  # TODO(jdurgin): replace with librbd  this is a temporary hack, since rewriting this driver to use librbd would take too long
>>    In this function, the cinder RBD driver downloads the whole image from Glance into a temp file on the local filesystem, then uses rbd import to import the temp file into an RBD volume, roughly as sketched below.
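>> 
>> A simplified sketch of that fallback path (not the actual cinder code; image_service here stands for cinder's Glance client wrapper, and the signatures are approximate):
>> 
>> import subprocess
>> import tempfile
>> 
>> def copy_image_to_volume(image_service, context, image_id, pool, volume_name):
>>     # the whole image is pulled from glance over the network ...
>>     with tempfile.NamedTemporaryFile() as tmp:
>>         image_service.download(context, image_id, tmp)
>>         tmp.flush()
>>         # ... and then pushed back into RADOS by 'rbd import',
>>         # so there is no zero-copy / CoW on this path at all
>>         subprocess.check_call(['rbd', 'import', '--pool', pool,
>>                                tmp.name, volume_name])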
> 
> That note is about needing to remove the volume before importing the
> data, instead of just writing to it directly with librbd.
>        
>>    This is absolutely not what we want (we want zero-copy and CoW), so we dug into the _is_cloneable function.
>> 
>> It seems the straightforward way to solve 2) is to write a patch for Glance that adds more info to the direct_url, but I am not sure whether it is possible for Ceph to clone an RBD image from pool A to pool B?
> 
> Cloning from one pool to another is certainly supported. If you're
> interested in more details about cloning, check out the command line
> usage [1] and internal design [2].
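> 
> For example, a cross-pool clone with the python-rbd bindings looks roughly
> like this (the pool and image names are placeholders, and the parent
> snapshot must already exist and be protected):
> 
> import rados
> import rbd
> 
> cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> cluster.connect()
> src_ioctx = cluster.open_ioctx('images')    # pool A, e.g. the glance pool
> dst_ioctx = cluster.open_ioctx('volumes')   # pool B, e.g. the cinder pool
> # clone the protected snapshot 'snap' of 'parent' into pool B as 'child'
> rbd.RBD().clone(src_ioctx, 'parent', 'snap', dst_ioctx, 'child',
>                 features=rbd.RBD_FEATURE_LAYERING)
> src_ioctx.close()
> dst_ioctx.close()
> cluster.shutdown()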
> 
> Josh
> 
> [1] https://github.com/ceph/ceph/blob/master/doc/rbd/rbd-snapshot.rst#layering
> [2] https://github.com/ceph/ceph/blob/master/doc/dev/rbd-layering.rst
> 
>> From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Li, Chen
>> Sent: March 20, 2013 12:57
>> To: 'ceph-users@xxxxxxxxxxxxxx'
>> Subject:  create volume from an image
>> 
>> I'm using Ceph RBD for both Cinder and Glance. Cinder and Glance are installed on two separate machines.
>> I have read in many places that when Cinder and Glance both use Ceph RBD, no real data transfer should happen, because of copy-on-write.
>> But the truth is, when I run the command:
>> cinder create --image-id 6565d775-553b-41b6-9d5e-ddb825677706 --display-name test 3
>> I still see network data traffic between Cinder and Glance.
>> And when I checked the cinder code, image_location is None (cinder/volume/manager.py), which makes cinder fail when running cloned = self.driver.clone_image(volume_ref, image_location).
>> Is this an OpenStack (Cinder or Glance) bug?
>> Or have I missed some configuration?
>> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


