Re: CEPH_FSAL Nfs-ganesha

We've found that more aggressive prefetching in the Ceph client can
help with some poorly behaving legacy applications (I don't know the
option off the top of my head, but it's documented).
It can also be useful to disable logging (even the in-memory logs) if
you do a lot of IOPS (that's debug client and debug ms, mostly).
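
As a sketch only (assuming the readahead knobs from the Ceph client
config reference are the options meant here; the values are
illustrative, not recommendations), that might look like this in
ceph.conf on the Ganesha host:

    [client]
        # read ahead more aggressively for sequential readers
        client_readahead_min = 1048576        # bytes
        client_readahead_max_periods = 8      # in file layout periods
        # 0/0 disables both the output log and the in-memory log
        debug_client = 0/0
        debug_ms = 0/0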

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Mon, Jan 14, 2019 at 4:11 PM Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:
>
> Hi.  Welcome to the community.
>
> On 01/14/2019 07:56 AM, David C wrote:
> > Hi All
> >
> > I've been playing around with nfs-ganesha 2.7, exporting a CephFS
> > filesystem, and it seems to be working pretty well so far. A few questions:
> >
> > 1) The docs say "For each NFS-Ganesha export, FSAL_CEPH uses a
> > libcephfs client,..." [1]. For argument's sake, if I have ten top-level
> > dirs in my CephFS namespace, is there any value in creating a separate
> > export for each directory? Will that potentially give me better
> > performance than a single export of the entire namespace?
>
> I don't believe there are any advantages from the Ceph side.  On the
> Ganesha side, you configure permissions, client ACLs, squashing, and so
> on per export, so you'll need separate exports if you need different
> settings for each top-level directory.  If they can all use the same
> settings, one export is probably better.
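>
> For illustration, a per-directory export might look like the sketch
> below (Export_ID, paths, and the cephx user are hypothetical; adjust
> to your setup):
>
>     EXPORT
>     {
>         Export_ID = 1;
>         Path = /dir1;            # directory within the CephFS namespace
>         Pseudo = /dir1;          # NFSv4 pseudo-filesystem path
>         Access_Type = RW;
>         Squash = Root_Squash;    # squashing is set per export
>         Protocols = 4;
>         FSAL {
>             Name = CEPH;
>             # User_Id = "ganesha";  # hypothetical cephx client name
>         }
>     }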
>
> >
> > 2) Tuning: are there any recommended parameters to tune? So far I've
> > found I had to increase client_oc_size, which seemed quite conservative.
>
> Ganesha is just a standard libcephfs client, so any tuning you'd make on
> any other cephfs client also applies to Ganesha.  I'm not aware of
> anything in particular, but I've never deployed it for anything other
> than testing.
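>
> For reference, the client cache knobs David mentions live in the
> [client] section of ceph.conf on the Ganesha host; the values below
> are purely illustrative, not recommendations:
>
>     [client]
>         client_oc_size = 419430400        # object cache size in bytes
>         client_oc_max_dirty = 209715200   # max dirty bytes held in cache
>         client_cache_size = 32768         # max cached inodes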
>
> Daniel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



