Re: Current answers for ceph as backend to parallel filesystem

I don't think there is any inherent limitation to using RADOS or RBD
as a backend for a non-CephFS file system, as CephFS is itself built
on top of RADOS (though I suppose it doesn't directly use librados).
The challenge, however, would be in configuring and tuning the two
independent systems to perform well together, since each may make
assumptions that are incompatible with the other.
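To make that concrete with a rough, untested sketch (the pool and image
names here are made up): once mapped, an RBD image is just a block
device, so anything that can live on a local disk can in principle live
on it:

  # create a pool and a 100 GiB image, then map it on the file server
  ceph osd pool create gpfs-disks 128
  rbd create gpfs-disks/nsd0 --size 102400
  rbd map gpfs-disks/nsd0        # appears as e.g. /dev/rbd0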

For instance, instead of configuring GPFS to assume RAID6 devices, one
might configure an RBD pool with no replication and rely instead on
GPFS Native RAID for fault tolerance and availability. Likewise, if an
RBD device were treated as a RAID6 device from the GPFS point of view,
it would be wasted effort for GPFS to handle failures of a device whose
redundancy is already provided by Ceph.
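As a sketch (again untested, reusing the hypothetical gpfs-disks pool
from above), turning off Ceph-level redundancy for a pool would look
something like:

  # keep a single copy in Ceph; GPFS Native RAID supplies the redundancy
  ceph osd pool set gpfs-disks size 1
  ceph osd pool set gpfs-disks min_size 1

Whether that trade-off is sensible depends on how well the layer above
really copes with a lost OSD, so I would test failure scenarios before
settling on either layout.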

On Tue, Dec 3, 2013 at 2:14 PM, JR <botemout@xxxxxxxxx> wrote:
> Greetings all,
>
> Does anyone have any recommendations for using ceph as a reliable,
> distributed backend for any existing parallel filesystems?  My final
> goal would be to have data reliability and availability handled by ceph
> and the serving of a filesystem handled by .. well, a distributed,
> parallel filesystem ;-)
>
> To make this question clear I'll spell out a scenario that I've used and
> ask about how ceph can fit it.
>
> GPFS:
> Servers running GPFS get their blocks from RAID arrays over Fibre
> Channel. The RAID arrays do RAID 6; GPFS, as well, replicates the
> metadata to guard against loss.
>
> The GPFS servers also run samba/ctdb to serve files to clients, i.e.,
>
>   \\files\path
>
> refers to a variety of physical servers (via round-robin DNS); if a
> server goes down the client is seamlessly directed to another server.
>
> GPFS+ceph:
> Servers run the ceph client software and get blocks from ceph servers
> (e.g., boxes with lots of disk running the osd, mon, mds processes ...).
> Ceph replicates the data on the backend.  GPFS doesn't replicate either
> data or metadata.
>
> I haven't yet tried to use this approach since the intended use to which
> I wish to put this storage cluster (probably) doesn't allow GPFS.  I
> also have questions about the final performance given:
>   ceph io server -> xfs filesystem -> osd processes -> network -> gpfs
> server processes exported ceph blocks, etc....
>
> However, there are other parallel filesystems which might do, e.g., GFS,
> gluster, others?
>
> Am I on the right track in thinking about ceph being usable in this
> scenario, or is ceph really better suited to being an object store and a
> provider of blocks for virtual machines?
>
> Also, how will the ceph filesystem help with the above problem when it
> becomes available (if it will)?
>
> Thanks much for your time,
> JR
>
>
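P.S. On the performance question above: before putting GPFS on top, it
is worth measuring what the RADOS layer alone delivers from the
machines that would become the GPFS servers, so you have a baseline to
compare against. Something like (pool name purely illustrative):

  # 30-second write and sequential-read benchmarks against the pool
  rados bench -p gpfs-disks 30 write --no-cleanup
  rados bench -p gpfs-disks 30 seq
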
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



