Re: Live migrate RBD image with a client using it


 



I've used a similar process with great success for capacity management -- moving volumes from very full clusters to ones with more free space.  There was a weighting system to direct new volumes to clusters that had space, but to forestall full-ratio problems from the organic growth of existing thin-provisioned volumes, explicit moves were sometimes needed.

Unattached volumes were the low-hanging fruit: they could be moved at will, ideally with a lock preventing attachment during the process.
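
For the "lock", an advisory rbd lock is enough if the attach tooling actually checks for it -- a minimal sketch of what I mean, with placeholder pool/image/lock names:

    import subprocess

    IMAGE_SPEC = "volumes/volume-1234"   # placeholder names
    LOCK_ID = "capacity-move"

    def rbd(*args):
        """Run an rbd CLI command and return its stdout."""
        return subprocess.run(["rbd", *args], check=True,
                              capture_output=True, text=True).stdout

    # Take an advisory lock so nothing attaches the volume while it is copied.
    # Advisory only: the attach path has to check for locks for this to help.
    rbd("lock", "add", IMAGE_SPEC, LOCK_ID)

    # ... copy / mirror the unattached volume to the destination cluster ...

    # Removing the lock afterwards needs the locker id reported by "rbd lock ls":
    #   rbd lock rm volumes/volume-1234 capacity-move client.<id>
    print(rbd("lock", "ls", IMAGE_SPEC))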

This was libvirt-based, but before QEMU had its own block migration.  Where the process below pauses and live-migrates, I'd power off the VM instead, so that when it powered back up it got the new attachment.  We had a service wrapped around rbd-mirror that handled the state transitions and queueing.  I did at most two volumes in parallel, and had to tweak rbd-mirror options to prevent an hours-long catch-up before the primary/secondary roles could be swapped.  This was on Luminous.
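
Something like the sketch below captures the shape of that wrapper (migrate_one is a placeholder, not the actual service code):

    from concurrent.futures import ThreadPoolExecutor

    def migrate_one(image_spec):
        """Placeholder: enable mirroring, wait for the replay to catch up,
        demote/promote, then hand the volume back to the attachment workflow."""
        ...

    # Never more than two moves in flight at once; rbd-mirror also had to be
    # tuned so the journal replay didn't spend hours catching up before the swap.
    queued = ["volumes/volume-aaaa", "volumes/volume-bbbb"]   # placeholders
    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(migrate_one, queued))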

Of course, that isn't live migration, but it's some context.


> 
> We have performed a "live migration" between two ceph clusters in our openstack setup in the past. Cinder and glance were configured with ceph pools as the data backend.
> 
> 
> It might work for other virtualization solutions, too. The blocker is usually the fact that the virtual machine keeps a lock on the rbd image by keeping its header object open. We solved this problem by performing a live migration to a different host.
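
(You can see that lock directly: "rbd status" lists the watchers holding the image's header object open. A quick check, with a placeholder image name:)

    import subprocess

    IMAGE_SPEC = "volumes/volume-1234"   # placeholder

    # A running VM shows up as a watcher on the image header; no watchers
    # usually means the image can be moved without the dance below.
    print(subprocess.run(["rbd", "status", IMAGE_SPEC],
                         check=True, capture_output=True, text=True).stdout)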
> 
> The necessary steps are:
> 
> 1. set up rbd mirroring between source and target (image-based mirroring, not the complete pool)
> 
> for each virtual machine:
> 
> 2. enable mirroring for all rbd volumes used by the VM
> 
> 3. wait until mirroring is complete and data is synced
> 
> 4. pause the virtual machine (prevents further writes to the rbds!)
> 
> 5. in the case of openstack, update the cinder database with the new ceph cluster information (yes, the cinder database contains the mon ip addresses for all images...)
> 
> 6. demote the rbd image on the old pool and promote it on the new pool
> 
> 7. live migrate paused instance to another host
> 
> 8. resume instance
> 
> 9. after successful migration move old image to trash
> 
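
For steps 2, 3, 6 and 9 above, the per-image part looks roughly like the sketch below (Python rather than the perl script mentioned below). It assumes journal-based mirroring, Luminous-era status output, and two cluster configs named "src" and "dst"; the exact status fields vary by release, and the cinder DB update, pause/migrate/resume and error handling are left out:

    import json
    import subprocess
    import time

    SRC = ["--cluster", "src"]   # /etc/ceph/src.conf, placeholder name
    DST = ["--cluster", "dst"]   # /etc/ceph/dst.conf, placeholder name

    def rbd(cluster, *args, parse=False):
        """Run an rbd command against one cluster and return its output."""
        out = subprocess.run(["rbd", *cluster, *args],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out) if parse else out

    def migrate_volume(image):                    # e.g. "volumes/volume-<uuid>"
        # step 2: enable per-image (journal) mirroring on the source
        rbd(SRC, "mirror", "image", "enable", image)

        # step 3: wait until the replay on the destination has caught up;
        # on Luminous the catch-up counter lives in the status description
        while True:
            try:
                st = rbd(DST, "mirror", "image", "status", image,
                         "--format", "json", parse=True)
            except subprocess.CalledProcessError:
                st = {}   # image may not have appeared on the destination yet
            if "entries_behind_master=0" in st.get("description", ""):
                break
            time.sleep(30)

        # steps 4, 5, 7, 8 (pause VM, fix the cinder DB, live-migrate, resume)
        # happen outside this function

        # step 6: swap primary/secondary
        rbd(SRC, "mirror", "image", "demote", image)
        rbd(DST, "mirror", "image", "promote", image)

        # step 9: park the old image in the trash (mirroring on the old,
        # now non-primary image may need to be disabled first)
        rbd(SRC, "trash", "mv", image)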
> 
> This will not work for kernel-mapped rbd images. And given the complexity of the process, I would advise performing a cold migration if possible. I've written a perl script to perform the per-image steps, but I'm not sure whether it will still work as expected. There's also a short period between steps 4 and 8 without an active virtual machine.
> 
> 
> Another solution for libvirt-based virtual machines is block migration within qemu itself. Proxmox uses this for storage migrations. It works quite well within proxmox (btw thx for the great software), but I haven't done it manually yet. YMMV.
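
(For the archives, the manual version of that is roughly "virsh blockcopy" onto a destination <disk> XML pointing at the new cluster -- a hedged, untested sketch with placeholder names; auth/secret elements are omitted, and older libvirt required the domain to be transient for blockcopy:)

    import subprocess

    # Destination disk: an rbd-backed <disk> on the new cluster (placeholders).
    DEST_XML = """\
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='volumes/volume-1234'>
        <host name='198.51.100.10' port='6789'/>
      </source>
    </disk>
    """

    with open("dest-disk.xml", "w") as f:
        f.write(DEST_XML)

    # Copy the running guest's disk to the new cluster, then pivot onto it.
    subprocess.run(["virsh", "blockcopy", "my-vm", "vda",
                    "--xml", "dest-disk.xml", "--wait", "--pivot"],
                   check=True)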
> 
> 
> Best regards,
> 
> Burkhard Linke
> 
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


