Re: NFS gateway

I really think that using async exports in a big production environment is a no-go, but it could very well explain the issues you were seeing.
Last week I started testing Ganesha, and so far the results look promising.

Jan Hugo.


On 09/07/2016 06:31 PM, David wrote:
I have clients accessing CephFS over NFS (kernel NFS). I was seeing slow writes with sync exports. I haven't had a chance to investigate, so in the meantime I'm exporting with async (not recommended, but acceptable in my environment).
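For reference, the sync/async trade-off discussed here is a single option per export line in /etc/exports; a minimal sketch, where the export path and client subnet are placeholders:

```
# /etc/exports -- kernel NFS export of a CephFS mount (path/subnet are examples)
# "sync": the server must commit each write before replying -- safe, but this
# is the mode that showed slow writes here.
/mnt/cephfs  192.168.0.0/24(rw,sync,no_subtree_check)
# "async": the server replies before data reaches stable storage -- faster,
# but a gateway crash can silently lose acknowledged writes.
#/mnt/cephfs  192.168.0.0/24(rw,async,no_subtree_check)
```

After editing the file, `exportfs -ra` reloads the export table without restarting the NFS server.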

I've been meaning to test out Ganesha for a while now.

@Sean, have you used Ganesha with Ceph? How does performance compare with kernel nfs?
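For anyone weighing the comparison: Ganesha talks to CephFS through its CEPH FSAL (via libcephfs), so the gateway does not need a kernel CephFS mount at all. A minimal export-block sketch, where the Export_Id, Path, and Pseudo values are illustrative:

```
# /etc/ganesha/ganesha.conf -- minimal CephFS export (values are examples)
EXPORT
{
    Export_Id = 1;           # unique id for this export
    Path = "/";              # path within CephFS to export
    Pseudo = "/cephfs";      # NFSv4 pseudo-fs path seen by clients
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL
    {
        Name = CEPH;         # use libcephfs instead of a kernel mount
    }
}
```

Running in userspace, Ganesha avoids the kernel NFS server's interaction with the kernel CephFS client, which is one reason it is often suggested for exactly this kind of gateway.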

On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins <jprins@xxxxxxxxxxxx> wrote:
Hi,

One of the use cases I'm currently testing is the possibility of replacing
an NFS storage cluster with a Ceph cluster.

The idea is to use a server as an intermediate gateway: on the client side
it exposes an NFS share, and on the Ceph side it mounts CephFS using
mount.ceph. The whole network that holds the Ceph environment is 10G
connected, and when I use the same server as an S3 gateway I can store
files rather quickly. When I use it as an NFS gateway, however, putting
data on the Ceph cluster is very slow.
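For context, the gateway setup described above amounts to two steps; a sketch with placeholder hostnames, keyring path, and mount points (adjust for your cluster):

```
# 1. Mount CephFS on the gateway (monitor address and secretfile are examples)
mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# 2. Re-export the mount via kernel NFS: add a line for /mnt/cephfs to
#    /etc/exports, then reload and verify:
exportfs -ra        # reload the export table
showmount -e        # list the exports the server now offers
```

Note that every client write then passes through two layers (NFS server, then the kernel CephFS client), so sync-export latency compounds with Ceph's own replication latency.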

The reason we want to do this is that we want to create a dedicated Ceph
storage network and have all clients access the data over either S3 or
NFS. I want to do it this way because I don't want to give the clients in
certain networks full access to the Ceph filesystem.

Has anyone tried this before? Is this the way to go, or are there better
ways to fix this?

--
Met vriendelijke groet / Best regards,

Jan Hugo Prins
Infra and Isilon storage consultant

Better.be B.V.
Auke Vleerstraat 140 E | 7547 AN Enschede | KvK 08097527
T +31 (0) 53 48 00 694 | M +31 (0)6 26 358 951
jprins@xxxxxxxxxxxx | www.betterbe.com

This e-mail is intended exclusively for the addressee(s), and may not
be passed on to, or made available for use by any person other than
the addressee(s). Better.be B.V. rules out any and every liability
resulting from any electronic transmission.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


