RBD clone for OpenStack Nova ephemeral volumes

We are currently starting to set up a new Icehouse/Ceph-based cluster and will help get this patch into shape as well.

I am currently collecting the information we need to patch Nova, and I have this branch: https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse on my list of patches to apply. Is there newer code for the rbd-clone-image-handler blueprint, or should I use the branch mentioned above?

Also, are there other patches that need to be applied for full Icehouse/Ceph integration?
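
For reference, here is the nova.conf fragment we are planning to use once the branch is applied. This is my own sketch based on the usual Ceph/OpenStack setup docs (option names assume the Icehouse [libvirt] section; pool and user names are from our deployment, adjust as needed), not something taken from the patch itself:

  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <uuid of the libvirt secret holding the cinder key>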

cheers
jc

On 01.05.2014, at 01:23, Dmitry Borodaenko <dborodaenko at mirantis.com> wrote:

> I've re-proposed the rbd-clone-image-handler blueprint via nova-specs:
> https://review.openstack.org/91486
> 
> In other news, Sebastien has helped me test the most recent
> incarnation of this patch series, and it seems to be usable now, with
> the important exception of live migration of VMs with RBD-backed
> ephemeral drives, which will need a bit more work and a separate
> blueprint.
> 
> On Mon, Apr 28, 2014 at 7:44 PM, Dmitry Borodaenko
> <dborodaenko at mirantis.com> wrote:
>> I have decoupled the Nova rbd-ephemeral-clone branch from the
>> multiple-image-location patch, the result can be found at the same
>> location on GitHub as before:
>> https://github.com/angdraug/nova/tree/rbd-ephemeral-clone
>> 
>> I will keep rebasing this over Nova master, I also plan to update the
>> rbd-clone-image-handler blueprint and publish it to nova-specs so that
>> the patch series could be proposed for Juno.
>> 
>> Icehouse backport of this branch is here:
>> https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse
>> 
>> I am not going to track every stable/icehouse commit with this branch;
>> instead, I will rebase it over stable release tags as they appear.
>> Right now it's based on the 2014.1 tag.
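>>
>> For anyone who wants to repeat that rebase locally, the workflow is
>> roughly this (the remote name and the future tag are hypothetical,
>> the branch and current tag are as above):
>>
>>   git fetch upstream --tags
>>   git checkout rbd-ephemeral-clone-stable-icehouse
>>   git rebase --onto 2014.1.1 2014.1
>>
>> i.e. replay everything since the old tag onto the new one and fix up
>> conflicts as they come.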
>> 
>> For posterity, I'm leaving the multiple-image-location patch rebased
>> over current Nova master here:
>> https://github.com/angdraug/nova/tree/multiple-image-location
>> 
>> I don't plan on maintaining multiple-image-location, just leaving it
>> out there to save some rebasing effort for whoever decides to pick it
>> up.
>> 
>> -DmitryB
>> 
>> On Fri, Mar 21, 2014 at 1:12 PM, Josh Durgin <josh.durgin at inktank.com> wrote:
>>> On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
>>>> 
>>>> On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin <josh.durgin at inktank.com>
>>>> wrote:
>>>>> 
>>>>> On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
>>>>>> 
>>>>>> The patch series that implemented the clone operation for RBD-backed
>>>>>> ephemeral volumes in Nova did not make it into Icehouse. We tried
>>>>>> our best to help it land, but it was ultimately rejected. Furthermore,
>>>>>> an additional requirement was imposed to make this patch series
>>>>>> dependent on full support of Glance API v2 across Nova (due to its
>>>>>> dependency on direct_url, which was introduced in v2).
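>>>>>>
>>>>>> (For those new to the thread, the operation in question is a plain
>>>>>> copy-on-write RBD clone of the Glance image into the ephemeral disk,
>>>>>> i.e. conceptually what the following does on the command line; pool
>>>>>> names are illustrative, and the snapshot is normally created and
>>>>>> protected at image upload time:
>>>>>>
>>>>>>   rbd snap create images/<image-id>@snap
>>>>>>   rbd snap protect images/<image-id>@snap
>>>>>>   rbd clone images/<image-id>@snap vms/<instance>_disk
>>>>>>
>>>>>> The ephemeral disk appears near-instantly instead of being copied
>>>>>> over the network.)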
>>>>>> 
>>>>>> You can find the most recent discussion of this patch series in the
>>>>>> FFE (feature freeze exception) thread on openstack-dev ML:
>>>>>> 
>>>>>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029127.html
>>>>>> 
>>>>>> As I explained in that thread, I believe this feature is essential for
>>>>>> using Ceph as a storage backend for Nova, so I'm going to try and keep
>>>>>> it alive outside of OpenStack mainline until it is allowed to land.
>>>>>> 
>>>>>> I have created rbd-ephemeral-clone branch in my nova repo fork on
>>>>>> GitHub:
>>>>>> https://github.com/angdraug/nova/tree/rbd-ephemeral-clone
>>>>>> 
>>>>>> I will keep it rebased over nova master, and will create an
>>>>>> rbd-ephemeral-clone-stable-icehouse branch to track the same patch
>>>>>> series over nova stable/icehouse once it's branched. I also plan to
>>>>>> make sure that this patch series is included in Mirantis OpenStack
>>>>>> 5.0, which will be based on Icehouse.
>>>>>> 
>>>>>> If you're interested in this feature, please review and test. Bug
>>>>>> reports and patches are welcome, as long as their scope is limited to
>>>>>> this patch series and not applicable to mainline OpenStack.
>>>>> 
>>>>> 
>>>>> Thanks for taking this on, Dmitry! Having rebased those patches many
>>>>> times during icehouse, I can tell you it's often not trivial.
>>>> 
>>>> 
>>>> Indeed, I get conflicts every day lately, even in the current
>>>> bugfixing stage of the OpenStack release cycle. I have a feeling it
>>>> will not get easier when Icehouse is out and Juno is in full swing.
>>>> 
>>>>> Do you think the imagehandler-based approach is best for Juno? I'm
>>>>> leaning towards the older way [1] for simplicity of review, and to
>>>>> avoid using glance's v2 api by default.
>>>>> [1] https://review.openstack.org/#/c/46879/
>>>> 
>>>> 
>>>> Excellent question; I have thought long and hard about this. In
>>>> retrospect, requiring this change to depend on the imagehandler patch
>>>> back in December 2013 has proven to be a poor decision.
>>>> Unfortunately, now that it's done, porting your original patch from
>>>> Havana to Icehouse is more work than keeping the new patch series up
>>>> to date with Icehouse, at least in the short term. Especially if we
>>>> decide to keep the rbd_utils refactoring, which I've grown to like.
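>>>>
>>>> (To give an idea of what that refactoring wraps: the clone path boils
>>>> down to a few calls into the rbd python bindings, something like the
>>>> sketch below. Pool names, the image id and the instance name are
>>>> stand-ins; this is my shorthand, not the literal code in the branch:
>>>>
>>>>   import rados
>>>>   import rbd
>>>>
>>>>   image_id = '0a1b2c3d-aaaa-bbbb-cccc-0123456789ab'  # hypothetical
>>>>   instance = 'instance-00000001'                     # hypothetical
>>>>
>>>>   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>>>>   cluster.connect()
>>>>   try:
>>>>       src = cluster.open_ioctx('images')  # Glance pool
>>>>       dst = cluster.open_ioctx('vms')     # Nova ephemeral pool
>>>>       # copy-on-write clone of the protected image snapshot
>>>>       rbd.RBD().clone(src, image_id, 'snap', dst, instance + '_disk',
>>>>                       features=rbd.RBD_FEATURE_LAYERING)
>>>>       src.close()
>>>>       dst.close()
>>>>   finally:
>>>>       cluster.shutdown()
>>>>
>>>> as opposed to downloading the image and converting it into place.)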
>>>> 
>>>> As far as I understand, your original code made use of the same v2 API
>>>> call even before it was rebased over the imagehandler patch:
>>>> 
>>>> https://github.com/jdurgin/nova/blob/8e4594123b65ddf47e682876373bca6171f4a6f5/nova/image/glance.py#L304
>>>> 
>>>> If I read this right, imagehandler doesn't create the dependency on the
>>>> v2 API; the only reason it caused a problem is that it exposed the
>>>> output of the same Glance API call to a code path that assumed a v1
>>>> data structure. If so, decoupling the rbd clone patch from imagehandler
>>>> will not lift the full Glance API v2 support requirement: that v2
>>>> API call will still be there.
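>>>>
>>>> (For the record, the direct_url for an RBD-backed Glance store is of
>>>> the form rbd://<fsid>/<pool>/<image-id>/<snap>, so the consuming code
>>>> amounts to something like this sketch, not the literal patch:
>>>>
>>>>   url = image_meta['direct_url']  # only present with glance v2
>>>>   fsid, pool, image, snap = url[len('rbd://'):].split('/')
>>>>
>>>> and it's that 'direct_url' key that v1 responses simply don't have.)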
>>>> 
>>>> Also, there's always a chance that imagehandler lands in Juno. If it
>>>> does, we'd be forced to dust off the imagehandler-based patch series
>>>> again, and the effort spent on maintaining the old patch would be
>>>> wasted.
>>>> 
>>>> Given all that, and without making any assumptions about the stability
>>>> of the imagehandler patch in its current state, I'm leaning towards
>>>> keeping it. If you think it's likely to cause us more
>>>> problems than the Glance API v2 issue, or if you disagree with my
>>>> analysis of that issue, please say so.
>>> 
>>> 
>>> My impression was that full glance v2 support was more of an issue
>>> with the imagehandler approach because it's used by default there,
>>> while the earlier approach only uses glance v2 when rbd is enabled.
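>>>
>>> (Worth noting either way: glance has to be configured to expose image
>>> locations at all, which as far as I remember is this switch in
>>> glance-api.conf:
>>>
>>>   show_image_direct_url = True
>>>
>>> Without it, direct_url is simply absent and the clone path can't kick
>>> in regardless of which nova-side approach we pick.)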
>>> 
>>> 
>>>>> I doubt that full support for
>>>>> v2 will land very fast in nova, although I'd be happy to be proven wrong.
>>>> 
>>>> 
>>>> I'm sceptical about this, too. That's why right now my first priority
>>>> is making sure this patch is usable and stable with Icehouse.
>>>> Post-Icehouse, we'll have to see where glance v2 support in nova goes,
>>>> if anywhere at all. Not much point making plans when we can't even
>>>> tell if we'll have to rewrite this patch yet again for Juno.
>>> 
>>> 
>>> Sounds good. We can discuss more with nova folks once Juno opens,
>>> since we'll need to go through the new blueprint approval process
>>> anyway.
>>> 
>>> Josh
>> 
>> 
>> 
>> --
>> Dmitry Borodaenko
> 
> 
> 
> -- 
> Dmitry Borodaenko
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
