Re: RBD clone for OpenStack Nova ephemeral volumes

On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented the clone operation for RBD-backed
ephemeral volumes in Nova did not make it into Icehouse. We tried our
best to help it land, but it was ultimately rejected. Furthermore, an
additional requirement was imposed: the series must now depend on full
support for Glance API v2 across Nova (because of its dependency on
direct_url, which was introduced in v2).
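To illustrate what the clone path buys us: instead of downloading the
image from Glance and re-importing it, Nova can create the ephemeral
disk as a copy-on-write RBD clone of the image that is already in Ceph.
The sketch below is only a rough illustration, not the actual code from
the patch series; it assumes the rbd://<fsid>/<pool>/<image>/<snapshot>
location format that Glance's RBD store reports via direct_url, the
python-rbd bindings, and made-up pool and image names:

    import rados
    import rbd

    # Hypothetical direct_url as reported by Glance's RBD store
    # (rbd://<fsid>/<pool>/<image>/<snapshot>).
    direct_url = 'rbd://5cdd2e6a-0000-0000-0000-000000000000/images/IMAGE_UUID/snap'
    _, _, fsid, pool, image, snap = direct_url.split('/')

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        src = cluster.open_ioctx(pool)   # pool holding the Glance image
        dst = cluster.open_ioctx('vms')  # hypothetical ephemeral disk pool
        try:
            # Glance's RBD store creates and protects a snapshot of the
            # image (named 'snap' by default); cloning from it shares the
            # data blocks, so nothing is copied up front.
            rbd.RBD().clone(src, image, snap, dst, 'instance-0001_disk',
                            features=rbd.RBD_FEATURE_LAYERING)
        finally:
            src.close()
            dst.close()
    finally:
        cluster.shutdown()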

You can find the most recent discussion of this patch series in the
FFE (feature freeze exception) thread on the openstack-dev mailing list:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029127.html

As I explained in that thread, I believe this feature is essential for
using Ceph as a storage backend for Nova, so I'm going to try to keep
it alive outside of the OpenStack mainline until it is allowed to land.

I have created an rbd-ephemeral-clone branch in my Nova repo fork on GitHub:
https://github.com/angdraug/nova/tree/rbd-ephemeral-clone

I will keep it rebased on top of Nova master, and will create an
rbd-ephemeral-clone-stable-icehouse branch to track the same patch
series on top of nova stable/icehouse once that is branched. I also
plan to make sure that this patch series is included in Mirantis
OpenStack 5.0, which will be based on Icehouse.

If you're interested in this feature, please review and test it. Bug
reports and patches are welcome, as long as their scope is limited to
this patch series and they are not applicable to mainline OpenStack.

Thanks for taking this on, Dmitry! Having rebased those patches many
times during Icehouse, I can tell you it's often not trivial.

Do you think the imagehandler-based approach is best for Juno? I'm
leaning towards the older approach [1] for simplicity of review, and to
avoid using Glance's v2 API by default. I doubt that full support for
v2 will land very quickly in Nova, although I'd be happy to be proven
wrong.
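For reference, reading the RBD location through the v2 API looks
roughly like the sketch below (just an illustration; the endpoint,
token, and image UUID are placeholders, and direct_url only shows up
when show_image_direct_url is enabled in glance-api.conf):

    import glanceclient

    # Placeholders: substitute a real Glance endpoint, auth token, and
    # image UUID.
    glance = glanceclient.Client('2',
                                 endpoint='http://glance.example.com:9292',
                                 token='AUTH_TOKEN')
    image = glance.images.get('IMAGE_UUID')

    # Present only when show_image_direct_url = True in glance-api.conf;
    # for an RBD-backed image it looks like rbd://<fsid>/<pool>/<image>/snap.
    print(image['direct_url'])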

Josh

[1] https://review.openstack.org/#/c/46879/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



