Re: Migrating to new pools

On Fri, Feb 16, 2018 at 11:20 AM, Eugen Block <eblock@xxxxxx> wrote:
> Hi Jason,
>
>> ... also forgot to mention "rbd export --export-format 2" / "rbd
>> import --export-format 2" that will also deeply export/import all
>> snapshots associated with an image and that feature is available in
>> the Luminous release.
>
>
> thanks for that information, this could be very valuable for us. I'll
> have to test that intensively, but not before next week.
>
> But a first quick test brought up a couple of issues which I'll have to
> re-check before bringing them up here.
>
> One issue is worth mentioning, though: after I exported (rbd export
> --export-format ...) a glance image and imported it back into a
> different pool (rbd import --export-format ...), its snapshot was
> copied, but not protected. This prevented nova from cloning the base
> image, leaving that instance in an error state. After protecting the
> snapshot manually and launching another instance, nova was able to
> clone the image successfully.
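>
> For reference, a minimal sketch of the round-trip (pool, image and
> snapshot names here are placeholders, not our actual ones):
>
>   rbd export --export-format 2 glance/base-image base-image.rbd
>   rbd import --export-format 2 base-image.rbd glance-new/base-image
>   # the copied snapshot arrives unprotected; cloning works again after
>   rbd snap protect glance-new/base-image@snap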
>
> Could this be worth a bug report or is it rather something I did wrong or
> missed?

Definitely deserves a bug tracker ticket. Thanks.

> I wish you all a nice weekend!
>
> Regards
> Eugen
>
>
> Quoting Jason Dillaman <jdillama@xxxxxxxxxx>:
>
>> On Fri, Feb 16, 2018 at 8:08 AM, Jason Dillaman <jdillama@xxxxxxxxxx>
>> wrote:
>>>
>>> On Fri, Feb 16, 2018 at 5:36 AM, Jens-U. Mozdzen <jmozdzen@xxxxxx> wrote:
>>>>
>>>> Dear list, hello Jason,
>>>>
>>>> you may have seen my message on the Ceph mailing list about RBD
>>>> pool migration - it's a common problem that pools were created in a
>>>> sub-optimal fashion and that e.g. pg_num is not (yet) reducible, so
>>>> we're looking into means to "clone" an RBD pool into a new pool
>>>> within the same cluster (including snapshots).
>>>>
>>>> We had looked into creating a tool for this job, but soon noticed
>>>> that we'd be duplicating basic functionality of rbd-mirror. So we
>>>> tested the following, which worked out nicely:
>>>>
>>>> - create a test cluster (a Ceph cluster plus an OpenStack cluster
>>>> using an RBD pool) and some OpenStack instances
>>>>
>>>> - create a second Ceph test cluster
>>>>
>>>> - stop OpenStack
>>>>
>>>> - use rbd-mirror to clone the RBD pool from the first to the second
>>>> Ceph cluster (IOW, aborting rbd-mirror once the initial copying was
>>>> done)
>>>>
>>>> - recreate the RBD pool on the first cluster
>>>>
>>>> - use rbd-mirror to clone the mirrored pool back to the (newly
>>>> created) pool on the first cluster
>>>>
>>>> - start OpenStack and work with the (recreated) pool on the first
>>>> cluster
>>>>
>>>> So, using rbd-mirror, we could clone an RBD pool's content to a
>>>> differently structured pool on the same cluster - by using an
>>>> intermediate cluster.
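>>>>
>>>> In outline, the mirroring setup looked roughly like this (commands
>>>> quoted from memory, so exact arguments may differ; note that images
>>>> need the journaling feature enabled for rbd-mirror to pick them up):
>>>>
>>>>   # on both clusters, enable pool-mode mirroring:
>>>>   rbd mirror pool enable <pool> pool
>>>>   # on the second cluster, register the first cluster as a peer:
>>>>   rbd mirror pool peer add <pool> client.admin@<first-cluster>
>>>>   # run the rbd-mirror daemon on the second cluster and stop it
>>>>   # once the initial sync has completed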
>>>>
>>>> @Jason: Looking at the commit history for rbd-mirror, it seems you
>>>> might be able to shed some light on this: do you see an easy way to
>>>> modify rbd-mirror in such a fashion that instead of mirroring to a
>>>> pool on a different cluster (having the same pool name as the
>>>> original), it would mirror to a pool on the *same* cluster
>>>> (obviously having a different pool name)?
>>>>
>>>> From the "rbd cppool" perspective, a one-shot mode of operation
>>>> would be fully sufficient - but looking at the code, I have not even
>>>> been able to identify the spots where we might "cut away" the
>>>> networking part, so that rbd-mirror might do an intra-cluster job.
>>>>
>>>> Are you able to judge how much work would need to be done in order
>>>> to create a one-shot, intra-cluster version of rbd-mirror? Might it
>>>> even be something that could be a simple enhancement?
>>>
>>>
>>> You might be interested in the deep-copy feature that will be included
>>> in the Mimic release. By running "rbd deep-copy <src-image>
>>> <dst-image>", it will fully copy the image, including snapshots and
>>> parentage, to a new image. There is also work-in-progress for online
>>> image migration [1] that will allow you to keep using the image while
>>> it's being migrated to a new destination image. Both of these are
>>> probably more suited to your needs than the heavy-weight RBD mirroring
>>> process -- especially if you are only interested in the first step
>>> since RBD mirroring now directly utilizes the deep-copy feature for
>>> the initial image sync.
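>>>
>>> For example, something along these lines (pool and image names are
>>> illustrative):
>>>
>>>   rbd deep-copy images/golden backup/golden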
>>
>>
>> ... also forgot to mention "rbd export --export-format 2" / "rbd
>> import --export-format 2" that will also deeply export/import all
>> snapshots associated with an image and that feature is available in
>> the Luminous release.
>>
>>>> Thank you for any information and / or opinion you care to share!
>>>>
>>>> With regards,
>>>> Jens
>>>>
>>>
>>> [1] https://github.com/ceph/ceph/pull/15831
>>>
>>> --
>>> Jason
>>
>>
>>
>>
>> --
>> Jason
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
>
> --
> Eugen Block                             voice   : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg                         e-mail  : eblock@xxxxxx
>
>         Chair of the Supervisory Board: Angelika Mozdzen
>           Registered office and court of registration: Hamburg, HRB 90934
>                   Executive Board: Jens-U. Mozdzen
>                    VAT ID: DE 814 013 983
>



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


