I would love to know the difference between "ceph-deploy new host" and "ceph-deploy new mon"? I would appreciate your help.

Sent from my LG Mobile

"McNamara, Bradley" <Bradley.McNamara@xxxxxxxxxxx> wrote:

Correct me if I'm wrong, I'm new to this, but I think the distinction between the two methods is that using 'qemu-img create -f rbd' creates an RBD for either a VM to boot from, or for mounting within a VM. Whereas the OP wants a single RBD, formatted with a cluster file system, to use as a place for multiple VM image files to reside.

I've often contemplated this same scenario, and would be quite interested in the different ways people have implemented their VM infrastructure using RBD.

I guess one of the advantages of using 'qemu-img create -f rbd' is that a snapshot of a single RBD would capture just the changed RBD data for that VM, whereas a snapshot of a larger RBD with OCFS2 and multiple VM images on it would capture changes to all of the VMs, not just one. It might provide more administrative agility to use the former.

Also, another question would be: when an RBD is expanded, does the VM that was created using 'qemu-img create -f rbd' need to be rebooted to "see" the additional space? My guess would be yes.

Brad

-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Alex Bligh
Sent: Thursday, July 11, 2013 2:03 PM
To: Gilles Mocellin
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: OCFS2 or GFS2 for cluster filesystem?

On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:

> Hello,
>
> Yes, you missed that qemu can use a RADOS volume directly.
> Look here:
> http://ceph.com/docs/master/rbd/qemu-rbd/
>
> Create:
> qemu-img create -f rbd rbd:data/squeeze 10G
>
> Use:
> qemu -m 1024 -drive format=raw,file=rbd:data/squeeze

I don't think he did. As I read it, he wants all of his VMs to access the same filing system, and doesn't want to use cephfs. OCFS2 on RBD, I suppose, is a reasonable choice for that.

--
Alex Bligh

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
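
For concreteness, a minimal sketch of the two layouts discussed in this thread. The pool name "data" and the image names are placeholders, and the OCFS2 cluster stack is assumed to already be configured on each host; treat this as an illustration, not a tested recipe.

    # Layout 1: one RBD image per VM, attached directly through qemu's rbd driver
    qemu-img create -f rbd rbd:data/vm01 10G
    qemu -m 1024 -drive format=raw,file=rbd:data/vm01

    # Layout 2: one large RBD image shared by all hosts, mapped with the kernel client,
    # formatted with OCFS2, and used to hold ordinary VM image files
    rbd create data/vmstore --size 102400          # size in MB
    rbd map data/vmstore                           # typically appears under /dev/rbd/data/vmstore
    mkfs.ocfs2 -L vmstore /dev/rbd/data/vmstore    # run once, from a single host
    mount /dev/rbd/data/vmstore /var/lib/vms       # mount on every host in the OCFS2 cluster

With layout 1, 'rbd snap create' on a single image captures only that VM's data; with layout 2, a snapshot of the shared image captures the state of every VM file on the filesystem at once, which is the administrative trade-off Brad describes above.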
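On Brad's resize question, the Ceph side of the operation can be done online with the rbd tool; for example (image name reused from Gilles' example):

    rbd resize data/squeeze --size 20480   # new size in MB
    rbd info data/squeeze                  # confirm the new size

Whether a running guest then sees the extra space without a reboot depends on how the disk is attached and on the guest's driver, so the commands above cover only the Ceph half of the resize.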