Based on the advice of some people on this list, I have started testing Ganesha-NFS in combination with Ceph. First results are very good and the product looks promising.

When I want to use this, I need to create a setup where different systems can mount different parts of the tree. How do I configure this? Do I just need to create different sets of exports, each with the specific subtree it exports and the correct access rights?

I also read somewhere that you can have a distributed lock manager. Does this mean that I could create a cluster of multiple NFS servers that all share the same CephFS filesystem, so that I can spread the client load?

Jan Hugo

On 09/07/2016 04:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use cases I'm currently testing is the possibility of replacing
> an NFS storage cluster with a Ceph cluster.
>
> The idea I have is to use a server as an intermediate gateway. On the
> client side it will expose an NFS share, and on the Ceph side it will
> mount CephFS using mount.ceph. The whole network that holds the Ceph
> environment is 10G connected, and when I use the same server as an S3
> gateway I can store files rather quickly. When I use the same server as
> an NFS gateway, however, putting data on the Ceph cluster is very slow.
>
> The reason we want to do this is that we want to create a dedicated Ceph
> storage network and have all clients that need data access use
> either S3 or NFS. I want to do it this way because I don't want to give
> the clients in certain networks full access to the Ceph filesystem.
>
> Has anyone tried this before? Is this the way to go, or are there better
> ways to solve this?

-- 
Met vriendelijke groet / Best regards,

Jan Hugo Prins
Infra and Isilon storage consultant
Better.be B.V.
Auke Vleerstraat 140 E | 7547 AN Enschede | KvK 08097527
T +31 (0) 53 48 00 694 | M +31 (0)6 26 358 951
jprins@xxxxxxxxxxxx | www.betterbe.com

This e-mail is intended exclusively for the addressee(s), and may not be passed on to, or made available for use by, any person other than the addressee(s). Better.be B.V. rules out any and every liability resulting from any electronic transmission.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
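[Editor's note: the "different sets of exports" idea asked about above is indeed how NFS-Ganesha restricts clients to subtrees: one EXPORT block per subtree, each with its own CLIENT restrictions and a CEPH FSAL. A minimal ganesha.conf sketch follows; the paths, Export_IDs, and client networks are made-up placeholders, not values from this thread.]

```
# Hypothetical example: two CephFS subtrees exported to different networks.
EXPORT
{
    Export_ID = 1;
    Path = "/teams/alpha";          # subtree inside CephFS (placeholder)
    Pseudo = "/alpha";              # where it appears in the NFSv4 pseudo-fs
    Access_Type = RW;
    Squash = Root_Squash;
    FSAL { Name = CEPH; }           # serve CephFS directly via libcephfs
    CLIENT {
        Clients = 192.168.10.0/24;  # only this network may use the export
        Access_Type = RW;
    }
}

EXPORT
{
    Export_ID = 2;
    Path = "/teams/beta";
    Pseudo = "/beta";
    Access_Type = RO;
    Squash = Root_Squash;
    FSAL { Name = CEPH; }
    CLIENT {
        Clients = 192.168.20.0/24;  # a different network, read-only
        Access_Type = RO;
    }
}
```

With a configuration along these lines, hosts in 192.168.10.0/24 would mount server:/alpha and see only that subtree, without ever being exposed to the rest of the CephFS tree.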