Re: who is using nfs-ganesha and cephfs?

Hi Sage,

We have been running the Ganesha FSAL for a while (as far back as Hammer / Ganesha 2.2.0), primarily for uid/gid squashing.
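For reference, the squashing side of this is just an ordinary Ganesha export block; a minimal sketch of the kind of thing we use is below. The option names are as I remember them from the Ganesha docs and may differ slightly between 2.2 and 2.5, and the paths and uid/gid values are placeholders rather than our real settings:

    EXPORT
    {
        Export_ID = 1;
        Path = "/";                  # CephFS path to export
        Pseudo = "/cephfs";          # NFSv4 pseudo-fs path (placeholder)
        Access_Type = RW;
        Squash = "All_Squash";       # map all client uids/gids to the anonymous ones
        Anonymous_Uid = 65534;       # placeholder squash uid
        Anonymous_Gid = 65534;       # placeholder squash gid
        FSAL {
            Name = CEPH;             # the CephFS FSAL
        }
    }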

Things are basically OK for our application, but we've seen the following weirdness*:
	- Sometimes there are duplicated entries when directories are listed: the same filename, same inode, just shows up twice in 'ls'.
	- There can be considerable latency between a new file being added to CephFS and that file becoming visible on our NFS clients. I understand this might be related to dentry caching (a quick client-side check is sketched below).
	- Occasionally, the Ganesha FSAL seems to max out at 100,000 caps claimed, which don't get released until the MDS is restarted (per-session cap counts can be watched as sketched below).

*note: these issues were seen with Ganesha 2.2.0 on Hammer/Jewel, and may have since been fixed upstream.
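In case it helps anyone chasing the last two items: the per-client cap counts can be watched on the MDS admin socket, and for the visibility delay it's worth ruling out plain NFS client attribute/lookup caching before blaming Ganesha or the MDS. A rough sketch (the MDS name, server, and mount point are placeholders):

    # Per-session cap counts on the active MDS; look at the "num_caps"
    # field for the Ganesha client's session.  "mds.a" is a placeholder.
    ceph daemon mds.a session ls

    # Re-test visibility of new files with NFS client attribute/lookup
    # caching disabled (placeholder server and mount point).
    mount -t nfs -o vers=4,actimeo=0,lookupcache=none nfs-server:/cephfs /mnt/cephfs-test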

(We've recently updated to Luminous / Ganesha 2.5.2, and will be happy to complain if any issues show up :))

Cheers,
Lincoln

> On Nov 8, 2017, at 3:41 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> 
> Who is running nfs-ganesha's FSAL to export CephFS?  What has your 
> experience been?
> 
> (We are working on building proper testing and support for this into 
> Mimic, but the ganesha FSAL has been around for years.)
> 
> Thanks!
> sage
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


