Re: Migrating to new pools

On Mon, Feb 26, 2018 at 9:56 AM, Eugen Block <eblock@xxxxxx> wrote:
> I'm following up on the rbd export/import option with a little delay.
>
> The fact that the snapshot is not protected after the image is reimported is
> not a big problem; you can deal with that or wait for a fix.
> But there's one major problem with this method: the VMs lose their
> rbd_children and parent data!

Correct -- the images are "flattened". The data is consistent but you
lose any savings from the re-use of non-overwritten parent data.
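
If you want to quantify what that costs for a given image, something like the
following could help (the pool/image names are just placeholders matching the
transcript below):

# a flattened image no longer shows a "parent:" line
rbd info cinder/<volume-id>_disk | grep parent
# compare provisioned vs. actually used space; a flattened clone
# typically uses more than the original copy-on-write clone
rbd du cinder/<volume-id>_disk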

> Although the imported VM launches successfully, it has no parent
> information. So this will eventually lead to a problem reading data from the
> parent image, I assume.
>
> This brings up another issue: deleting glance images is now easily possible
> since the reimported image has no clones. And if the VM loses its base image,
> it will probably end up in a failed state.
>
> ---cut here---
> # New glance image with new VM
> root@control:~ # rbd children
> glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1@snap
> cinder/f43265e9-beab-4f83-be46-a51da013f70a_disk
>
> # Parent data available
> root@control:~ # rbd info cinder/f43265e9-beab-4f83-be46-a51da013f70a_disk |
> grep parent
>         parent: glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1@snap
>
>
> # Export base image
> root@control:~ # rbd export --export-format 2
> glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1 /var/lib/glance/images/cirros
> Exporting image: 100% complete...done.
>
> # Export VM's disk
> root@control:~ # rbd export --export-format 2
> cinder/f43265e9-beab-4f83-be46-a51da013f70a_disk
> /var/lib/glance/images/cirros_disk
> Exporting image: 100% complete...done.
>
>
> # Delete VM
> root@control:~ # rbd rm cinder/f43265e9-beab-4f83-be46-a51da013f70a_disk
> Removing image: 100% complete...done.
>
> # Reimport VM's disk
> root@control:~ # rbd import --export-format 2
> /var/lib/glance/images/cirros_disk
> cinder/f43265e9-beab-4f83-be46-a51da013f70a_disk
> Importing image: 100% complete...done.
>
>
> # Delete glance image
> root@control:~ # rbd snap unprotect
> glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1@snap
> root@control:~ # rbd snap purge glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1
> Removing all snapshots: 100% complete...done.
> root@control:~ # rbd rm glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1
> Removing image: 100% complete...done.
>
> # Reimport glance image
> root@control:~ # rbd import --export-format 2 /var/lib/glance/images/cirros
> glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1
> Importing image: 100% complete...done.
>
> root@control:~ # rbd snap protect
> glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1@snap
>
> # There are no children
> root@control:~ # rbd children
> glance/4479820a-d58b-4ac6-ba2a-e8871b24fcb1@snap
> root@control:~ #
>
> # VM starts successfully
> root@control:~ # nova start c1
> Request to start server c1 has been accepted.
>
> # But no data in rbd_children
> root@control:~ # rados -p cinder listomapvals rbd_children
> root@control:~ #
> ---cut here---
>
> So in conclusion, this method is not suited for OpenStack. You could
> probably consider it for disaster recovery of single VMs, but not for a
> whole cloud environment, where you would lose all relationships between
> base images and their clones.
>
> Regards,
> Eugen
>
>
> Zitat von Eugen Block <eblock@xxxxxx>:
>
>
>> Hi,
>>
>> I created a ticket for the rbd import issue:
>>
>> https://tracker.ceph.com/issues/23038
>>
>> Regards,
>> Eugen
>>
>>
>> Zitat von Jason Dillaman <jdillama@xxxxxxxxxx>:
>>
>>> On Fri, Feb 16, 2018 at 11:20 AM, Eugen Block <eblock@xxxxxx> wrote:
>>>>
>>>> Hi Jason,
>>>>
>>>>> ... also forgot to mention "rbd export --export-format 2" / "rbd
>>>>> import --export-format 2" that will also deeply export/import all
>>>>> snapshots associated with an image and that feature is available in
>>>>> the Luminous release.
>>>>
>>>>
>>>>
>>>> Thanks for that information; this could be very valuable for us. I'll have
>>>> to test it intensively, but not before next week.
>>>>
>>>> But a first quick test brought up a couple of issues which I'll have to
>>>> re-check before bringing them up here.
>>>>
>>>> One issue is worth mentioning, though: after I exported (rbd export
>>>> --export-format ...) a glance image and imported it back into a different
>>>> pool (rbd import --export-format ...), its snapshot was copied but not
>>>> protected. This prevented nova from cloning the base image and left that
>>>> instance in an error state. Protecting the snapshot manually and launching
>>>> another instance enabled nova to clone the image successfully.
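>>>>
>>>> (For reference, the manual fix was simply protecting the snapshot on the
>>>> imported image; the image name here is a placeholder:
>>>>
>>>> rbd snap protect glance/<image-id>@snap )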
>>>>
>>>> Could this be worth a bug report or is it rather something I did wrong
>>>> or
>>>> missed?
>>>
>>>
>>> Definitely deserves a bug tracker ticket opened. Thanks.
>>>
>>>> I wish you all a nice weekend!
>>>>
>>>> Regards
>>>> Eugen
>>>>
>>>>
>>>> Zitat von Jason Dillaman <jdillama@xxxxxxxxxx>:
>>>>
>>>>> On Fri, Feb 16, 2018 at 8:08 AM, Jason Dillaman <jdillama@xxxxxxxxxx>
>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> On Fri, Feb 16, 2018 at 5:36 AM, Jens-U. Mozdzen <jmozdzen@xxxxxx>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>> Dear list, hello Jason,
>>>>>>>
>>>>>>> you may have seen my message on the Ceph mailing list about RBD pool
>>>>>>> migration - it's a common situation that pools were created in a
>>>>>>> sub-optimal fashion, e.g. pg_num is (not yet) reducible, so we're looking
>>>>>>> into means to "clone" an RBD pool into a new pool within the same
>>>>>>> cluster (including snapshots).
>>>>>>>
>>>>>>> We had looked into creating a tool for this job, but soon noticed
>>>>>>> that
>>>>>>> we're
>>>>>>> duplicating basic functionality of rbd-mirror. So we tested the
>>>>>>> following,
>>>>>>> which worked out nicely:
>>>>>>>
>>>>>>> - create a test cluster (Ceph cluster plus an Openstack cluster using
>>>>>>> an
>>>>>>> RBD
>>>>>>> pool) and some Openstack instances
>>>>>>>
>>>>>>> - create a second Ceph test cluster
>>>>>>>
>>>>>>> - stop Openstack
>>>>>>>
>>>>>>> - use rbd-mirror to clone the RBD pool from the first to the second Ceph
>>>>>>> cluster (IOW aborting rbd-mirror once the initial copying was done; a
>>>>>>> rough command sketch follows after this list)
>>>>>>>
>>>>>>> - recreate the RBD pool on the first cluster
>>>>>>>
>>>>>>> - use rbd-mirror to clone the mirrored pool back to the (newly
>>>>>>> created)
>>>>>>> pool
>>>>>>> on the first cluster
>>>>>>>
>>>>>>> - start Openstack and work with the (recreated) pool on the first
>>>>>>> cluster
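>>>>>>>
>>>>>>> A rough sketch of the one-way mirroring step (pool mode; the pool, user
>>>>>>> and cluster names are placeholders, and the images need the journaling
>>>>>>> feature enabled):
>>>>>>>
>>>>>>> # on both clusters, enable mirroring for the pool
>>>>>>> rbd mirror pool enable <pool> pool
>>>>>>> # on the target cluster, add the source cluster as a peer and run the
>>>>>>> # rbd-mirror daemon there
>>>>>>> rbd mirror pool peer add <pool> client.<user>@<source-cluster>
>>>>>>> # once "rbd mirror pool status <pool> --verbose" reports the images as
>>>>>>> # up+replaying, the initial copy is done and rbd-mirror can be stopped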
>>>>>>>
>>>>>>> So using rbd-mirror, we could clone an RBD pool's content to a
>>>>>>> differently
>>>>>>> structured pool on the same cluster - by using an intermediate
>>>>>>> cluster.
>>>>>>>
>>>>>>> @Jason: Looking at the commit history for rbd-mirror, it seems you
>>>>>>> might
>>>>>>> be
>>>>>>> able to shed some light on this: Do you see an easy way to modify
>>>>>>> rbd-mirror
>>>>>>> in such a fashion that instead of mirroring to a pool on a different
>>>>>>> cluster
>>>>>>> (having the same pool name as the original), mirroring would be to a
>>>>>>> pool on
>>>>>>> the *same* cluster (obviously having a different pool name)?
>>>>>>>
>>>>>>> From the "rbd cppool" perspective, a one-shot mode of operation would
>>>>>>> be
>>>>>>> fully sufficient - but looking at the code, I have not even been able
>>>>>>> to
>>>>>>> identify the spots where we might "cut away" the networking part, so
>>>>>>> that
>>>>>>> rbd-mirror might do an intra-cluster job.
>>>>>>>
>>>>>>> Are you able to judge how much work would need to be done in order to
>>>>>>> create a one-shot, intra-cluster version of rbd-mirror? Might it even
>>>>>>> be
>>>>>>> something that could be a simple enhancement?
>>>>>>
>>>>>>
>>>>>>
>>>>>> You might be interested in the deep-copy feature that will be included
>>>>>> in the Mimic release. By running "rbd deep-copy <src-image>
>>>>>> <dst-image>", it will fully copy the image, including snapshots and
>>>>>> parentage, to a new image. There is also work-in-progress for online
>>>>>> image migration [1] that will allow you to keep using the image while
>>>>>> it's being migrated to a new destination image. Both of these are
>>>>>> probably more suited to your needs than the heavy-weight RBD mirroring
>>>>>> process -- especially if you are only interested in the first step
>>>>>> since RBD mirroring now directly utilizes the deep-copy feature for
>>>>>> the initial image sync.
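>>>>>>
>>>>>> For instance, moving a volume image to a new pool in the same cluster
>>>>>> could then be as simple as the following (pool and image names are
>>>>>> placeholders):
>>>>>>
>>>>>> # copies the data, all snapshots and the parent linkage to the new image
>>>>>> rbd deep-copy cinder/volume-1234_disk cinder-new/volume-1234_disk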
>>>>>
>>>>>
>>>>>
>>>>> ... also forgot to mention "rbd export --export-format 2" / "rbd
>>>>> import --export-format 2" that will also deeply export/import all
>>>>> snapshots associated with an image and that feature is available in
>>>>> the Luminous release.
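>>>>>
>>>>> On Luminous, a minimal per-image round trip could then look like this
>>>>> (pool, image and snapshot names are placeholders):
>>>>>
>>>>> # export the image together with all of its snapshots
>>>>> rbd export --export-format 2 old-pool/my-image /tmp/my-image.v2
>>>>> # import it, snapshots included, into the new pool
>>>>> rbd import --export-format 2 /tmp/my-image.v2 new-pool/my-image
>>>>> # as noted earlier in this thread, snapshots may come back unprotected
>>>>> # and need re-protecting before clones can be created from them
>>>>> rbd snap protect new-pool/my-image@<snap-name>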
>>>>>
>>>>>>> Thank you for any information and / or opinion you care to share!
>>>>>>>
>>>>>>> With regards,
>>>>>>> Jens
>>>>>>>
>>>>>>
>>>>>> [1] https://github.com/ceph/ceph/pull/15831
>>>>>>
>>>>>> --
>>>>>> Jason
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Jason
>>>>> _______________________________________________
>>>>> ceph-users mailing list
>>>>> ceph-users@xxxxxxxxxxxxxx
>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Eugen Block                             voice   : +49-40-559 51 75
>>>> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
>>>> Postfach 61 03 15
>>>> D-22423 Hamburg                         e-mail  : eblock@xxxxxx
>>>>
>>>>        Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>>>>          Sitz und Registergericht: Hamburg, HRB 90934
>>>>                  Vorstand: Jens-U. Mozdzen
>>>>                   USt-IdNr. DE 814 013 983
>>>>
>>>
>>>
>>>
>>> --
>>> Jason
>>
>>
>>
>>
>> --
>> Eugen Block                             voice   : +49-40-559 51 75
>> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
>> Postfach 61 03 15
>> D-22423 Hamburg                         e-mail  : eblock@xxxxxx
>>
>>         Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>>           Sitz und Registergericht: Hamburg, HRB 90934
>>                   Vorstand: Jens-U. Mozdzen
>>                    USt-IdNr. DE 814 013 983
>
>
>
>
> --
> Eugen Block                             voice   : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg                         e-mail  : eblock@xxxxxx
>
>         Vorsitzende des Aufsichtsrates: Angelika Mozdzen
>           Sitz und Registergericht: Hamburg, HRB 90934
>                   Vorstand: Jens-U. Mozdzen
>                    USt-IdNr. DE 814 013 983
>



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


