Hi John,

> Exporting kernel client mounts with the kernel NFS server is tested as
> part of the regular testing we do on CephFS, so you should find it
> pretty stable. This is definitely a legitimate way of putting a layer
> of security between your application servers and your storage cluster.
>
> NFS Ganesha is also an option, that is not as well tested (yet) but it
> has the advantage that you can get nice up to date Ceph client code
> without worrying about upgrading the kernel. I'm not sure if there
> are recent ganesha packages with the ceph FSAL enabled available
> online, so you may need to compile your own.

The CentOS 7 packages have the Ceph FSAL disabled by default, and I needed
Ganesha built against the Ceph 10.2.2 code instead of the default Ceph
packages that ship with CentOS, so I did indeed do my own recompile.

> When you say you tried using the server as an NFS gateway, was that
> with kernel NFS + kernel CephFS? What kind of activities did you find
> were running slowly (big files, small files, etc...)?

The gateway was indeed using both kernel NFS and kernel CephFS. I only
tested a simple rsync job, because I believe it approximates production
use very well: a lot of reads and writes, a lot of files of all sizes,
and so on. When this turned out to be very slow, I put it aside for a
while and later got the idea to test Ganesha.

--
Met vriendelijke groet / Best regards,

Jan Hugo Prins
Infra and Isilon storage consultant

Better.be B.V.
Auke Vleerstraat 140 E | 7547 AN Enschede | KvK 08097527
T +31 (0) 53 48 00 694 | M +31 (0)6 26 358 951
jprins@xxxxxxxxxxxx | www.betterbe.com

This e-mail is intended exclusively for the addressee(s), and may not be
passed on to, or made available for use by any person other than the
addressee(s). Better.be B.V. rules out any and every liability resulting
from any electronic transmission.
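[Editorial note for readers following the thread: once NFS-Ganesha is built
with the Ceph FSAL enabled, exporting CephFS takes an EXPORT block in
ganesha.conf that selects the CEPH FSAL. The sketch below shows the general
shape only; the Export_ID, Pseudo path, and access settings are illustrative
values, not taken from this mail, and should be adapted to your setup.]

```
EXPORT
{
    # Arbitrary unique ID for this export (illustrative value)
    Export_ID = 1;

    # Path within CephFS to export; "/" exports the whole filesystem
    Path = "/";

    # Where the export appears in the NFSv4 pseudo-filesystem (illustrative)
    Pseudo = "/cephfs";

    Access_Type = RW;
    Squash = No_Root_Squash;

    # Select the Ceph FSAL compiled in at build time
    FSAL {
        Name = CEPH;
    }
}
```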
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com