Re: OCFS2 or GFS2 for cluster filesystem?

On 11/07/2013 12:08, Tom Verdaat wrote:
Hi guys,

We want to use our Ceph cluster to create a shared-disk file system to host VMs. Our preference would be to use CephFS, but since it is not considered stable, I'm looking into alternatives.

The most appealing alternative seems to be to create an RBD volume, format it with a cluster file system, and mount it on all the VM host machines.
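
For example (pool and image names are just placeholders, and the OCFS2 or GFS2
cluster stack would have to be configured on every host first), I'm thinking of
something along these lines, using OCFS2:

rbd create vmdisks/shared --size 10240              # size in MB
rbd map vmdisks/shared                              # appears as /dev/rbd/vmdisks/shared
mkfs.ocfs2 -L vmstore -N 8 /dev/rbd/vmdisks/shared  # 8 node slots
mount -t ocfs2 /dev/rbd/vmdisks/shared /var/lib/vmstore
# then "rbd map" and "mount" on every other VM host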

Obvious file system candidates would be OCFS2 and GFS2, but I'm having trouble finding recent and reliable documentation on the performance, features and reliability of these file systems, especially as they relate to our specific use case. The specifics I'm trying to keep in mind are:

  * Using it to host VM ephemeral disks means the file system needs to
    perform well with few but very large files; hosts usually don't
    compete for access to the same file, except during live migration.
  * Needs to scale well (a large number of nodes, a volume of tens of
    terabytes, file sizes of tens or hundreds of gigabytes) and support
    online operations like growing the volume (see the sketch after
    this list).
  * Since the cluster FS will already be running on a distributed
    storage system (Ceph), it does not need to concern itself with
    things like replication. It just needs to stay consistent and be
    fast, of course.
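
To illustrate what I mean by growing online (reusing the placeholder image
vmdisks/shared formatted with OCFS2 from the sketch above), something like
this would have to work while the volume stays mounted:

rbd resize vmdisks/shared --size 20480     # new size in MB
tunefs.ocfs2 -S /dev/rbd/vmdisks/shared    # grow the FS to fill the device

(I'm not sure every kernel/OCFS2 combination supports this fully online, so it
would need testing.)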


Anybody here who can help me shed some light on the following questions:

 1. Are there other cluster file systems to consider besides OCFS2 and
    GFS2?
 2. Which one would yield the best performance for our use case?
 3. Is anybody doing this already and willing to share their experience?
 4. Is there anything important that you think we might have missed?


Hello,

Yes, you missed that qemu can use an RBD volume directly.
Look here:
http://ceph.com/docs/master/rbd/qemu-rbd/

Create:

qemu-img create -f rbd rbd:data/squeeze 10G

Use:

qemu -m 1024 -drive format=raw,file=rbd:data/squeeze
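
If cephx authentication is enabled, I believe the same drive spec also accepts
the client id and a ceph.conf path as extra colon-separated options, e.g.
(id and path are placeholders):

qemu -m 1024 -drive format=raw,file=rbd:data/squeeze:id=admin:conf=/etc/ceph/ceph.conf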

