Re: kvm live migrate with ceph


 



Hello Michael,

Thanks for the reply.  It seems Ceph isn't actually "mounting" the RBD on the VM host, which is where I was getting hung up (I had previously been attempting to map RBDs directly on multiple hosts at once and, as you can imagine, was having issues).
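If I'm understanding correctly, qemu opens the image over the network through librbd, so nothing is ever mapped or mounted on the hypervisor itself; something like the following, where the pool/image name, cephx user, and config path are just placeholders:

    # Boot a guest directly from an RBD image; qemu talks to the
    # cluster through librbd, so no block device appears on the host.
    qemu-system-x86_64 -m 2048 \
        -drive format=raw,file=rbd:rbd/guest1:id=admin:conf=/etc/ceph/ceph.conf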

Could you possibly expound on why a clustered filesystem approach is wrong (or, conversely, why using RBDs is the correct approach)?

As for format 2 RBD images, it looks like they provide exactly the copy-on-write functionality I am looking for.  Are there any caveats or things I should look out for when going from format 1 to format 2 images? (I think I read something about not being able to use both at the same time...)
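If I have the workflow right, the format 2 cloning would go something like this (the image names and size are made up, and I gather older releases spell the flag --format 2 rather than --image-format 2):

    # Create a format 2 "golden" image, snapshot it, protect the
    # snapshot, and clone copy-on-write guests from it.
    rbd create --image-format 2 --size 10240 rbd/golden
    rbd snap create rbd/golden@base
    rbd snap protect rbd/golden@base      # clones require a protected snapshot
    rbd clone rbd/golden@base rbd/guest1  # guest1 stores only its own changes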

Thanks Again,
Jon A


On Mon, Oct 14, 2013 at 4:42 PM, Michael Lowe <j.michael.lowe@xxxxxxxxx> wrote:
I live migrate all the time using the rbd driver in qemu, with no problems.  Qemu issues a flush as part of the migration, so everything stays consistent.  It's the right way to use Ceph to back VMs; I would strongly recommend against a network file system approach.  You may want to look into format 2 rbd images -- the cloning and writable snapshots may be what you are looking for.
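For what it's worth, the migration itself is just the normal libvirt invocation; something like the following, with the domain and destination host names being whatever yours are called:

    # Live migrate a running rbd-backed guest to another hypervisor;
    # both hosts need access to the same Ceph cluster and keyring.
    virsh migrate --live guest1 qemu+ssh://dest-host/system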

Sent from my iPad

On Oct 14, 2013, at 5:37 AM, Jon <three18ti@xxxxxxxxx> wrote:

Hello,

I would like to live migrate a VM between two "hypervisors".  Is it possible to do this with an RBD disk, or should the VM disks be created as qcow images on a CephFS/NFS share (is it possible to do CLVM over RBDs? Or GlusterFS over RBDs?) with KVM pointed at the network directory?  As I understand it, RBDs aren't "cluster aware", so you can't mount an RBD on multiple hosts at once, but maybe libvirt has a way to handle the transfer...?  I like the idea of "master" or "golden" images where guests write any changes to a new image; I don't think RBDs can handle copy-on-write the way KVM does with qcow images, so maybe a clustered filesystem approach is the ideal way to go.

Thanks for your input.  I think I'm just missing some piece... I just don't grok it.

Best Regards,
Jon A

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

