Re: Live migrate RBD image with a client using it

Hi,

On 4/12/23 19:09, Work Ceph wrote:
Exactly, I have seen that. However, that also means that it is not a
"process" then, right? Am I missing something?

If we need a live process, where the clients cannot unmap the volumes, what do you guys recommend?


We have performed a "live migration" between two Ceph clusters in our OpenStack setup in the past. Cinder and Glance were configured with Ceph pools as their data backends.


It might work for other virtualization solutions, too. The blocker is usually the fact that the virtual machine keeps a lock on the RBD image by keeping its header object open. We solved this problem by performing a live migration to a different host.
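
For reference, you can check which client currently holds an image open with the rbd CLI (pool and image names below are placeholders):

    rbd status volumes/volume-1234     # lists the watchers on the image header
    rbd lock ls volumes/volume-1234    # lists advisory locks, if any were taken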

The necessary steps are (a rough sketch of the corresponding rbd commands follows below the list):

1. set up rbd mirroring between the source and target cluster (image-based mirroring, not mirroring of the complete pool)

for each virtual machine:

2. enable mirroring for all rbd volumes used by the VM

3. wait until mirroring is complete and data is synced

4. pause the virtual machine (prevents further writes to the rbds!)

5. in case of OpenStack, update the Cinder database with the new Ceph cluster information (yes, the Cinder database contains the mon IP addresses for all images...)

6. demote the rbd image in the old pool and promote it in the new pool

7. live migrate paused instance to another host

8. resume instance

9. after successful migration move old image to trash
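
For reference, a rough sketch of the rbd commands behind steps 1, 2, 3, 6 and 9 (pool, image and peer names are placeholders, and the exact syntax may differ between Ceph releases):

    # step 1: enable per-image mirroring for the pool on both clusters and add the
    #         other cluster as peer (an rbd-mirror daemon must run for the target cluster)
    rbd mirror pool enable volumes image
    rbd mirror pool peer add volumes client.rbd-mirror-peer@remote-cluster

    # step 2: journal-based mirroring needs the journaling feature on the image
    #         (which in turn requires exclusive-lock), then enable mirroring per image
    rbd feature enable volumes/volume-1234 journaling
    rbd mirror image enable volumes/volume-1234

    # step 3: wait until the image reports up+replaying and is fully synced
    rbd mirror image status volumes/volume-1234

    # step 6: with the VM paused, demote the image on the old cluster ...
    rbd mirror image demote volumes/volume-1234
    # ... and promote it on the new cluster
    rbd mirror image promote volumes/volume-1234

    # step 9: after the successful migration, move the old image to the trash
    rbd trash mv volumes/volume-1234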


This will not work for kernel-mapped rbd images. And given the complexity of the process I would advise performing a cold migration if possible. I've written a Perl script to perform the per-image steps, but I'm not sure whether it would still work as expected. There's also a short period between steps 4 and 8 without an active virtual machine.
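
To check whether an image is still kernel-mapped, you can list the krbd mappings on the client host in question:

    rbd showmapped    # lists images mapped via the kernel client on this host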


Another solution for libvirt-based virtual machines is block migration within QEMU itself. Proxmox uses this for storage migrations. It works quite well within Proxmox (btw, thanks for the great software), but I haven't done it manually yet. YMMV.
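
If you want to try that route manually, an (untested) starting point might look like this; the domain name, disk target and destination XML are placeholders:

    # copy the running disk of domain "myvm" to the destination described in new-disk.xml
    # (e.g. an RBD disk on the new cluster), then pivot the domain over to the copy
    # (older libvirt versions may require the domain to be transient for blockcopy)
    virsh blockcopy myvm vda --xml new-disk.xml --wait --verbose --pivot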


Best regards,

Burkhard Linke



