Re: CephFS

How did you find the fuse client performed?

I'm more interested in the fuse client because I'd like to use CephFS
for shared volumes, and I'm not sure whether the kernel client treats
the volume as a block device.
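
For reference, both clients present CephFS as a POSIX filesystem (the
block-device interface is RBD, a separate client), so either should
work for shared volumes. A minimal sketch of the two mounts, assuming
a monitor at 192.168.0.1:6789 and placeholder paths:

    # kernel client: mounts CephFS directly as a filesystem
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client: the same filesystem, mounted through userspace
    ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs

Either way, multiple clients can mount the same tree concurrently,
which is what a shared volume needs.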

Cheers,
Kingsley.

On Tue, 2017-01-17 at 11:46 +0000, Sean Redmond wrote:
> I found the kernel clients to perform better in my case. 
> 
> 
> I ran into a couple of issues with some metadata pool corruption and
> omap inconsistencies. That said, the repair tools are useful and I
> managed to get things back up and running.
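> 
> For anyone who hits the same thing: the journal recovery sequence
> from the disaster recovery docs looked roughly like this for me.
> Treat it as a sketch, not a recipe, and always export a backup first:
> 
>     # back up the MDS journal before any destructive step
>     cephfs-journal-tool journal export backup.bin
> 
>     # replay recoverable metadata events back into the metadata pool
>     cephfs-journal-tool event recover_dentries summary
> 
>     # last resort, only if the journal itself is unrecoverable
>     cephfs-journal-tool journal reset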
> 
> 
> The community has been very responsive to any issues I have run into,
> which really increases my confidence levels in any open source
> project.
> 
> On Tue, Jan 17, 2017 at 6:39 AM, wido@xxxxxxxx <wido@xxxxxxxx> wrote:
>         
>         On 17 Jan 2017, at 03:47, Tu Holmes <tu.holmes@xxxxxxxxx>
>         wrote:
>         
>         
>         > I could use either one. I'm just trying to get a feel for
>         > how stable the technology is in general. 
>         > 
>         
>         
>         Stable. Multiple customers of mine run it in production with
>         the kernel client under serious load. No major problems.
>         
>         
>         Wido
>         
>         > On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond
>         > <sean.redmond1@xxxxxxxxx> wrote:
>         > 
>         >         What's your use case? Do you plan on using kernel or
>         >         fuse clients? 
>         >         
>         >         
>         >         On 16 Jan 2017 23:03, "Tu Holmes"
>         >         <tu.holmes@xxxxxxxxx> wrote:
>         >         
>         >                 So what's the consensus on CephFS?
>         >                 
>         >                 
>         >                 Is it ready for prime time or not?
>         >                 
>         >                 
>         >                 //Tu
>         >                 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


