Re: CEPH_FSAL Nfs-ganesha

Hi Patrick

Thanks for the info. If I did multiple exports, how would that work in terms of the cache settings defined in ceph.conf? Are those settings per CephFS client, or is it a shared cache? I.e. if I've defined client_oc_size, would that be per export?
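
For reference, the sort of thing I mean in ceph.conf (the value here is just the default 200 MB, shown for illustration):

    [client]
        # libcephfs object cache size in bytes; what I'm unsure of is
        # whether each export's client instance gets its own cache of
        # this size, or whether they all share one
        client_oc_size = 209715200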

Cheers,

On Tue, Jan 15, 2019 at 6:47 PM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
On Mon, Jan 14, 2019 at 7:11 AM Daniel Gryniewicz <dang@xxxxxxxxxx> wrote:
>
> Hi.  Welcome to the community.
>
> On 01/14/2019 07:56 AM, David C wrote:
> > Hi All
> >
> > I've been playing around with the nfs-ganesha 2.7 exporting a cephfs
> > filesystem, it seems to be working pretty well so far. A few questions:
> >
> > 1) The docs say " For each NFS-Ganesha export, FSAL_CEPH uses a
> > libcephfs client,..." [1]. For argument's sake, if I have ten top-level
> > dirs in my CephFS namespace, is there any value in creating a separate
> > export for each directory? Will that potentially give me better
> > performance than a single export of the entire namespace?
>
> I don't believe there are any advantages from the Ceph side.  From the
> Ganesha side, you configure permissions, client ACLs, squashing, and so
> on, on a per-export basis, so you'll need different exports if you need
> different settings for each top-level directory.  If they can all use
> the same settings, one export is probably better.
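
To make Daniel's point concrete, here is a minimal ganesha.conf sketch of
two exports over the same CephFS with different per-export settings
(paths, IDs, and values are only illustrative):

    EXPORT {
        Export_ID = 1;
        Path = "/dir1";             # CephFS path served by this export
        Pseudo = "/dir1";           # location in the NFSv4 pseudo-fs
        Access_Type = RW;
        Squash = Root_Squash;       # squash root on this export
        FSAL { Name = CEPH; }
    }

    EXPORT {
        Export_ID = 2;
        Path = "/dir2";
        Pseudo = "/dir2";
        Access_Type = RO;           # read-only here...
        Squash = No_Root_Squash;    # ...but no root squash
        FSAL { Name = CEPH; }
    }

If all the directories could share one set of these settings, a single
export with Path = "/" would do.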

There may be a performance impact (good or bad) from having separate
exports for CephFS. Each export instantiates a separate instance of
the CephFS client which has its own bookkeeping and set of
capabilities issued by the MDS. Also, each client instance has a
separate big lock (potentially a big deal for performance). If the
data for each export is disjoint (no hard links or shared inodes) and
the NFS server is expected to have a lot of load, breaking out the
exports can have a positive impact on performance. If there are hard
links, then the clients associated with the exports will potentially
fight over capabilities, which will add to request latency.
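
To picture the breakout: each EXPORT block like the ones sketched above
gets its own libcephfs instance, so ten disjoint top-level directories
means ten clients, each with its own cache, capabilities, and big lock.
Optionally, giving each export its own cephx user, e.g.

    FSAL {
        Name = CEPH;
        User_Id = "ganesha.dir1";   # illustrative name; one cephx user per export
    }

makes the per-export clients easy to tell apart in the MDS session list.
The user naming here is just an example.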

--
Patrick Donnelly
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
