Re: OCFS2 or GFS2 for cluster filesystem?

Tom,
I'm no expert as I didn't set it up, but we are using OpenStack Grizzly with KVM/QEMU and RBD volumes for VMs.
We boot the VMs from the RBD volumes and it all seems to work just fine.
Migration works well too, although live (no-break) migration only works from the command-line tools (example below); the GUI uses the pause, migrate, then un-pause method.
Layered snapshotting/cloning works just fine through the GUI. I would say Grizzly has pretty good integration with Ceph.
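
For what it's worth, the command-line live migration is just the standard Nova call; roughly (instance and host names here are placeholders):

nova live-migration <instance-id> <target-compute-host>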

Regards
Darryl

On 07/12/13 09:41, Tom Verdaat wrote:
Hi Alex,

We're planning to deploy OpenStack Grizzly using KVM. I agree that running every VM directly from RBD devices would be preferable, but booting from volumes is not one of OpenStack's strengths, and configuring Nova so that boot-from-volume is the default method and works automatically is not really feasible yet.

So the alternative is to mount a shared filesystem on /var/lib/nova/instances on every compute node. Hence the RBD + OCFS2/GFS2 question.
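
To make that concrete, the per-node setup I have in mind is roughly the following (an untested sketch; the pool/image names are made up, and it assumes the OCFS2 cluster stack is already configured on each node):

# map the shared RBD image on each compute node
rbd map nova/instances
# mount the cluster filesystem that was created on it
mount -t ocfs2 /dev/rbd/nova/instances /var/lib/nova/instances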

Tom

P.S. Yes, I've read the rbd-openstack page. It covers images and persistent volumes, but not running instances, which is what my question is about.


2013/7/12 Alex Bligh <alex@xxxxxxxxxxx>
Tom,

On 11 Jul 2013, at 22:28, Tom Verdaat wrote:

> Actually I want my running VMs to all be stored on the same file system, so we can use live migration to move them between hosts.
>
> QEMU is not going to help because we're not using it in our virtualization solution.

Out of interest, what are you using in your virtualization solution? Most things (including modern Xen) seem to use QEMU for the back end. If your virtualization solution does not use QEMU as a back end, you can use kernel RBD devices directly, which I think will give you better performance than OCFS2 on RBD devices.
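
For example (untested; the pool and image names are invented):

rbd create vms/guest1 --size 10240
rbd map vms/guest1

The image then appears as /dev/rbd/vms/guest1 and can be handed to the VM as a raw disk.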

A

>
> 2013/7/11 Alex Bligh <alex@xxxxxxxxxxx>
>
> On 11 Jul 2013, at 19:25, Gilles Mocellin wrote:
>
> > Hello,
> >
> > Yes, you missed that QEMU can use RADOS volumes directly.
> > Look here:
> > http://ceph.com/docs/master/rbd/qemu-rbd/
> >
> > Create:
> > qemu-img create -f rbd rbd:data/squeeze 10G
> >
> > Use:
> >
> > qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
>
> I don't think he did. As I read it, he wants his VMs to all access the same file system, and doesn't want to use CephFS.
>
> OCFS2 on RBD I suppose is a reasonable choice for that.
>
> --
> Alex Bligh
>

--
Alex Bligh

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
