Re: OCFS2 or GFS2 for cluster filesystem?


 



You are right: I do want a single RBD, formatted with a cluster filesystem, to use as a place for multiple VM image files to reside.

Doing everything straight from volumes would be more effective with regard to snapshots, CoW, and so on, but unfortunately, for now, OpenStack Nova insists on having an ephemeral disk and copying it to its local /var/lib/nova/instances directory. If you want to be able to do live migrations and the like, you need to mount a cluster filesystem at that path on every host machine.
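For concreteness, the kind of setup I have in mind is roughly the following (just a sketch; the pool and image names are made up, and I've left out the /etc/ocfs2/cluster.conf and o2cb service that every host would also need):

# once, from any one node
rbd create nova/instances --size 102400
rbd map nova/instances
mkfs.ocfs2 -N 8 -L nova-instances /dev/rbd/nova/instances

# then on every compute host
rbd map nova/instances
mount -t ocfs2 /dev/rbd/nova/instances /var/lib/nova/instances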

And that's what my questions were about!

Tom



2013/7/12 McNamara, Bradley <Bradley.McNamara@xxxxxxxxxxx>
Correct me if I'm wrong (I'm new to this), but I think the distinction between the two methods is that 'qemu-img create -f rbd' creates an RBD for a VM either to boot from or to mount within the VM, whereas the OP wants a single RBD, formatted with a cluster filesystem, to use as a place for multiple VM image files to reside.

I've often contemplated this same scenario and would be quite interested in the different ways people have implemented their VM infrastructure using RBD. I guess one advantage of using 'qemu-img create -f rbd' is that a snapshot of a single RBD captures just the changed data for that VM, whereas a snapshot of a larger RBD holding OCFS2 and multiple VM images captures changes for all of the VMs, not just one. The former might provide more administrative agility.
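(For example, with one RBD per VM, something along the lines of 'rbd snap create data/squeeze@pre-upgrade' would snapshot just that one guest, if I understand the rbd tool correctly, whereas with a shared OCFS2 image you would always be snapshotting all of the guests at once.)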

Also, I guess another question would be: when an RBD is expanded, does the VM created with 'qemu-img create -f rbd' need to be rebooted to "see" the additional space? My guess would be yes.
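(Something like 'rbd resize --size 20480 data/squeeze' grows the image on the Ceph side; whether a running guest then sees the new size without a reboot presumably depends on the hypervisor and driver, which is exactly the part I'm unsure about.)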

Brad

-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Alex Bligh
Sent: Thursday, July 11, 2013 2:03 PM
To: Gilles Mocellin
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: OCFS2 or GFS2 for cluster filesystem?


On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:

> Hello,
>
> Yes, you missed that qemu can use a RADOS volume directly.
> Look here:
> http://ceph.com/docs/master/rbd/qemu-rbd/
>
> Create:
> qemu-img create -f rbd rbd:data/squeeze 10G
>
> Use:
>
> qemu -m 1024 -drive format=raw,file=rbd:data/squeeze

I don't think he did. As I read it, he wants all his VMs to access the same filesystem, and he doesn't want to use CephFS.

OCFS2 on RBD I suppose is a reasonable choice for that.

--
Alex Bligh




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


