Re: CephFS

Hello,

On 17/01/2017 at 13:38, Kingsley Tart wrote:
How did you find the fuse client performed?

I'm more interested in the fuse client because I'd like to use CephFS
for shared volumes, and my understanding of the kernel client is that it
uses the volume as a block device.

I think you're confusing the CephFS kernel client with the RBD kernel client.

The Linux kernel contains both:

* a module ceph.ko for accessing a CephFS
* a module rbd.ko for accessing an RBD (Rados Block Device)
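
As a quick illustration of why the two are different (this is only a sketch; the pool and image names are placeholders, not from the thread): an RBD image is mapped as a local block device and formatted by the client, so it is not a shared filesystem:

  rbd create mypool/myimage --size 10240   # size in MB
  rbd map mypool/myimage                   # appears as e.g. /dev/rbd0
  mkfs.ext4 /dev/rbd0                      # local filesystem on the block device
  mount /dev/rbd0 /mnt/rbd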

You can mount a CephFS filesystem using the kernel driver [0] or a userspace FUSE helper [1].

[0] http://docs.ceph.com/docs/master/cephfs/kernel/
[1] http://docs.ceph.com/docs/master/cephfs/fuse/
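
Roughly, the two CephFS mount methods look like this (the monitor address, mount point and credentials below are placeholders; see the links above for the exact options):

  # kernel client (ceph.ko)
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

  # FUSE client (userspace)
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs

Both clients mount the same shared filesystem, so either one can be used for shared volumes.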

--
Loris


Cheers,
Kingsley.

On Tue, 2017-01-17 at 11:46 +0000, Sean Redmond wrote:
I found the kernel clients to perform better in my case.


I ran into a couple of issues with some metadata pool corruption and
omap inconsistencies. That said, the repair tools are useful and I
managed to get things back up and running.
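
(The post doesn't say which repair tools were used; as a rough sketch, the standard CephFS journal recovery utilities look like this:

  cephfs-journal-tool journal export backup.bin       # back up the MDS journal first
  cephfs-journal-tool event recover_dentries summary  # recover what metadata it can
  cephfs-journal-tool journal reset                    # reset the damaged journal
)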


The community has been very responsive to any issues I have run into;
this really increases my confidence in any open source project.

On Tue, Jan 17, 2017 at 6:39 AM, wido@xxxxxxxx <wido@xxxxxxxx> wrote:




        On 17 Jan 2017, at 03:47, Tu Holmes <tu.holmes@xxxxxxxxx>
        wrote the following:


        > I could use either one. I'm just trying to get a feel for
        > how stable the technology is in general.
        >


        Stable. Multiple customers of mine run it in production with the
        kernel client and serious load on it. No major problems.


        Wido

        > On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond
        > <sean.redmond1@xxxxxxxxx> wrote:
        >
        >         What's your use case? Do you plan on using kernel or
        >         fuse clients?
        >
        >
        >         On 16 Jan 2017 23:03, "Tu Holmes"
        >         <tu.holmes@xxxxxxxxx> wrote:
        >
        >                 So what's the consensus on CephFS?
        >
        >
        >                 Is it ready for prime time or not?
        >
        >
        >                 //Tu
        >


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




