Re: Migrating to new pools

On Tue, Feb 20, 2018 at 8:35 PM, Rafael Lopez <rafael.lopez@xxxxxxxxxx> wrote:
>> There is also work-in-progress for online
>> image migration [1] that will allow you to keep using the image while
>> it's being migrated to a new destination image.
>
>
> Hi Jason,
>
> Is there any recommended method/workaround for live RBD migration in
> Luminous? e.g. snapshot/copy, or export/import followed by
> export-diff/import-diff? We are looking at options for moving large
> RBDs (100 TB) to a new pool with minimal downtime.
>
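
For the snapshot + export/import route you mention, a rough (untested)
sketch of that workflow might look like the following -- pool, image,
and snapshot names are illustrative, and the final diff pass happens
during a short window with I/O stopped:

    # initial bulk copy from a consistent snapshot while the image
    # stays in use
    rbd snap create oldpool/bigimage@migrate1
    rbd export oldpool/bigimage@migrate1 - | rbd import - newpool/bigimage
    # mark the same point-in-time on the destination so that
    # import-diff can find its starting snapshot later
    rbd snap create newpool/bigimage@migrate1

    # later, during the downtime window: stop I/O and replay only the
    # changes made since the first snapshot
    rbd snap create oldpool/bigimage@migrate2
    rbd export-diff --from-snap migrate1 oldpool/bigimage@migrate2 - \
        | rbd import-diff - newpool/bigimage
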
> I was thinking we might be able to configure/hack rbd mirroring to mirror to
> a pool on the same cluster but I gather from the OP and your post that this
> is not really possible?

No, it's not really possible currently, and we have no plans to add
such support since it would not be of any long-term value. If you are
using RBD with a kernel block device, you could temporarily wrap the
two mapped RBD volumes (original and new) under an md RAID1 with the
original as the primary -- and then trigger a RAID repair to sync the
data onto the new volume. If you are using QEMU+librbd, you could use
its built-in live block migration feature (I've never played with this
before, and I believe you would need to use the QEMU monitor instead
of libvirt to configure it).
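
For the md RAID1 approach, a rough (untested) sketch -- device, pool,
and image names are illustrative, the destination image is assumed to
already exist in the new pool with the same size, and you should check
the mdadm man page before trying this on anything important:

    # map the original and the pre-created destination image
    rbd map oldpool/bigimage          # e.g. -> /dev/rbd0
    rbd map newpool/bigimage          # e.g. -> /dev/rbd1

    # build a superblock-less RAID1 with the original as the only member
    mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/rbd0 missing

    # point the filesystem/application at /dev/md0 instead of /dev/rbd0,
    # then add the new image; md resyncs from the original onto the new
    # volume (the "RAID repair") while I/O continues
    mdadm /dev/md0 --add /dev/rbd1
    cat /proc/mdstat                  # watch the resync progress

    # once the resync completes: quiesce I/O, stop the array, and switch
    # over to the new image
    mdadm --stop /dev/md0
    rbd unmap /dev/rbd0

For the QEMU route, I believe the relevant piece is the drive-mirror
command issued through the QMP monitor (again untested -- the domain
name, device alias, and target below are illustrative, and issuing
monitor commands behind libvirt's back can confuse its view of the
domain):

    # mirror the running guest's disk to an existing image in the new pool
    virsh qemu-monitor-command mydomain \
      '{ "execute": "drive-mirror",
         "arguments": { "device": "drive-virtio-disk0",
                        "target": "rbd:newpool/bigimage",
                        "format": "raw",
                        "sync": "full",
                        "mode": "existing" } }'

    # after the BLOCK_JOB_READY event fires, pivot the guest to the new image
    virsh qemu-monitor-command mydomain \
      '{ "execute": "block-job-complete",
         "arguments": { "device": "drive-virtio-disk0" } }'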

> --
> Rafael Lopez
> Research Devops Engineer
> Monash University eResearch Centre
>
>



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


