Re: kvm live migrate with ceph

I wouldn't go so far as to say putting a VM in a file on a networked filesystem is wrong.  It is just not the best choice if you have a ceph cluster at hand, in my opinion.  Networked filesystems carry a bunch of extra machinery to implement POSIX semantics, and they live in kernel space.  You only need simple block device semantics, and you don't need to entangle the hypervisor's kernel space.  What it boils down to is the engineering first principle of selecting the least complicated solution that satisfies the requirements of the problem.  You don't gain anything by trading the simplicity of rbd for the complexity of a networked filesystem.
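For illustration, attaching an rbd image straight through qemu's userspace block driver looks roughly like this (pool and image names are invented, and the exact option syntax varies a bit across qemu versions):

    # attach an rbd image via qemu's built-in librbd driver: all userspace,
    # no kernel rbd module and no mounted filesystem on the hypervisor
    qemu-system-x86_64 -m 2048 \
        -drive file=rbd:rbd/vm-disk01:id=admin,format=raw,if=virtio
        # (plus the rest of your usual vm options)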

For format 2, I think the only caveat is that it requires newer clients, and the kernel client takes some time to catch up to the user space clients.  Depending on your kernel version, you may not be able to mount filesystems on rbd devices with the kernel client; this may or may not matter to you.  As a workaround, you can always use a VM to mount a filesystem on an rbd device.
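A rough sketch of the format 2 copy-on-write workflow (pool and image names invented; on older rbd releases the create flag may be spelled --format 2 rather than --image-format 2):

    # create a format 2 image; format 1 was still the default at the time
    rbd create --image-format 2 --size 10240 rbd/golden
    # snapshot the golden image and protect the snapshot so it can be cloned
    rbd snap create rbd/golden@base
    rbd snap protect rbd/golden@base
    # each guest gets its own copy-on-write clone of the protected snapshot
    rbd clone rbd/golden@base rbd/vm01-disk
    # the kernel client path, where a new enough kernel is available:
    rbd map rbd/vm01-disk    # exposes /dev/rbd0, to mount a filesystem on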

On Oct 16, 2013, at 9:11 AM, Jon <three18ti@xxxxxxxxx> wrote:

Hello Michael,

Thanks for the reply.  It seems like ceph isn't actually "mounting" the rbd on the VM host, which is where I think I was getting hung up (I had previously been attempting to mount rbds directly on multiple hosts and, as you can imagine, was having issues).

Could you possibly expound on why the clustered filesystem approach is wrong (or, conversely, why using RBDs is the correct approach)?

As for format 2 rbd images, it looks like they provide exactly the copy-on-write functionality that I am looking for.  Any caveats or things I should look out for when going from format 1 to format 2 images? (I think I read something about not being able to use both at the same time...)

Thanks Again,
Jon A


On Mon, Oct 14, 2013 at 4:42 PM, Michael Lowe <j.michael.lowe@xxxxxxxxx> wrote:
I live migrate all the time using the rbd driver in qemu, no problems.  Qemu will issue a flush as part of the migration, so everything is consistent.  It's the right way to use ceph to back VMs.  I would strongly recommend against a network file system approach.  You may want to look into format 2 rbd images; the cloning and writable snapshots may be what you are looking for.
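For reference, the migration itself is just the ordinary virsh call; since both hypervisors talk to the same cluster, there is no disk data to copy (hostname and guest name below are placeholders):

    # live-migrate the running guest "vm01" to the second hypervisor;
    # the rbd image stays in ceph, so only RAM and device state move
    virsh migrate --live vm01 qemu+ssh://host2/system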

Sent from my iPad

On Oct 14, 2013, at 5:37 AM, Jon <three18ti@xxxxxxxxx> wrote:

Hello,

I would like to live migrate a VM between two "hypervisors".  Is it possible to do this with an rbd disk, or should the vm disks be created as qcow images on a CephFS/NFS share (is it possible to do CLVM over rbds? Or GlusterFS over rbds?) and kvm pointed at the network directory?  As I understand it, rbds aren't "cluster aware", so you can't mount an rbd on multiple hosts at once, but maybe libvirt has a way to handle the transfer...?  I like the idea of "master" or "golden" images where guests write any changes to a new image; I don't think rbds are able to handle copy-on-write in the same way kvm does, so maybe a clustered filesystem approach is the ideal way to go.

Thanks for your input.  I think I'm just missing some piece... I just don't grok...

Best Regards,
Jon A

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


