Hi Alex,
We're planning to deploy OpenStack Grizzly using KVM. I agree that running every VM directly from RBD devices would be preferable, but booting from volumes is not one of OpenStack's strengths, and configuring nova so that boot-from-volume is the default method and works automatically is not really feasible yet.
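For what it's worth, booting from a volume right now still means doing something like the following by hand for every instance (the IDs, names and flavor below are just placeholders, and the exact client flags may differ between versions):

  # create a bootable volume from a Glance image (10 GB here)
  cinder create --image-id <glance-image-id> --display-name boot-vol 10

  # boot an instance from that volume
  nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 my-instance

Doing that automatically for every instance is exactly the part that isn't there yet.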
So the alternative is to mount a shared file system on /var/lib/nova/instances on every compute node. Hence the RBD + OCFS2/GFS2 question.
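Concretely, I'm thinking of something along these lines (pool and image names are just examples, and the OCFS2/o2cb cluster configuration on the compute nodes is assumed to already be in place):

  # once, on any node: create the shared image and put a cluster filesystem on it
  rbd create nova/instances-fs --size 204800      # size is in MB, so roughly 200 GB
  rbd map nova/instances-fs                       # shows up as /dev/rbd/nova/instances-fs
  mkfs.ocfs2 -L nova-instances /dev/rbd/nova/instances-fs

  # on every compute node: map the same image and mount it
  rbd map nova/instances-fs
  mount -t ocfs2 /dev/rbd/nova/instances-fs /var/lib/nova/instances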
Tom
P.S. Yes, I've read the rbd-openstack page, but it covers images and persistent volumes, not running instances, which is what my question is about.
2013/7/12 Alex Bligh <alex@xxxxxxxxxxx>
Tom,
Out of interest, what are you using as your virtualization solution? Most things (including modern Xen) seem to use qemu for the back end. If your virtualization solution does not use qemu as a back end, you can use kernel rbd devices directly, which I think will give you better performance than OCFS2 on RBD devices.
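Roughly what I mean (pool and image names made up): instead of one big shared filesystem, you map one RBD image per guest and hand the block device straight to the hypervisor:

  rbd create vms/guest1-disk --size 20480     # 20 GB image, size is in MB
  rbd map vms/guest1-disk                     # typically appears as /dev/rbd/vms/guest1-disk
  # then attach /dev/rbd/vms/guest1-disk to the guest as a raw disk

As long as every host can map the same images, that layout doesn't have to rule out live migration.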
On 11 Jul 2013, at 22:28, Tom Verdaat wrote:
> Actually, I want all my running VMs to be stored on the same file system, so we can use live migration to move them between hosts.
>
> QEMU is not going to help because we're not using it in our virtualization solution.
--
A
>
> 2013/7/11 Alex Bligh <alex@xxxxxxxxxxx>
>
> On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:
>
> > Hello,
> >
> > Yes, you missed that qemu can use RADOS volumes directly.
> > Look here:
> > http://ceph.com/docs/master/rbd/qemu-rbd/
> >
> > Create:
> > qemu-img create -f rbd rbd:data/squeeze 10G
> >
> > Use:
> >
> > qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
>
> I don't think he did. As I read it, he wants all his VMs to access the same file system and doesn't want to use CephFS.
>
> I suppose OCFS2 on RBD is a reasonable choice for that.
>
> --
> Alex Bligh
>
Alex Bligh