Re: Is there a better way to make a samba/nfs gateway?


 



Hi Seth,

I don't know if this helps you, but I'll share what we do. We present a
large amount of CephFS via NFS and SMB, plus a handful of direct cephfs
clients, and rarely encounter issues with either the frontends or CephFS
itself.

However, the 'gateway' is multiple servers: we use two ganesha servers with
the CEPH FSAL to export NFS. At this stage nothing fancy (no HA), but we
will certainly look at doing that sometime this year. I'd actually expect
you would get better NFS performance exporting via the VFS FSAL on top of a
kernel-mounted cephfs. SMB is the same as yours: kernel cephfs, but mounted
on several machines each running Samba, coordinated with a CTDB cluster and
round-robin DNS for client connections.
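
For reference, a minimal clustered Samba setup along those lines is mostly
a CTDB nodes file plus a couple of smb.conf lines. This is only a rough
sketch; the node IPs, share name and mount path below are made-up examples,
not our actual config:

    # /etc/ctdb/nodes  (identical on every gateway; example IPs)
    10.0.0.11
    10.0.0.12

    # /etc/samba/smb.conf (excerpt)
    [global]
        clustering = yes          # let CTDB coordinate the TDBs across nodes

    [projects]
        path = /mnt/cephfs/projects   # kernel-mounted cephfs on each gateway
        read only = no

Client connections then just go to a single DNS name that round-robins
across the gateway addresses.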

We don't export the same directories over both SMB and NFS (no one has
asked yet), so I'm not sure how nicely that would play. But it's likely to
work, or not work, about as well as it would on any other FS.

There are some tweaks, particularly in ganesha if you are using the CEPH
FSAL, like disabling caching (see
https://github.com/nfs-ganesha/nfs-ganesha/blob/master/src/config_samples/ceph.conf
).
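
The gist of that sample is to let libcephfs do the caching rather than
ganesha. A rough sketch of the relevant bits (the export ID, pseudo path
and cephx user here are placeholders; check the linked sample for the
current recommendations):

    NFS_CORE_PARAM {
        Enable_NLM = false;        # NLM/rquota don't fit this setup
        Enable_RQUOTA = false;
    }

    MDCACHE {
        Dir_Chunk = 0;             # disable ganesha's dirent caching
    }

    EXPORT {
        Export_ID = 100;           # placeholder
        Path = /;
        Pseudo = /cephfs;          # placeholder pseudo path
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        Attr_Expiration_Time = 0;  # don't cache attributes in ganesha
        FSAL {
            Name = CEPH;
            User_Id = "nfs.gateway";   # placeholder cephx user
        }
    }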

It's hard to comment without knowing more details about your setup, but it
sounds like you are doing NFS and SMB from the same machine. I'd consider
splitting those functions onto separate hosts and sizing the servers
appropriately. And most importantly, make sure your MDS has lots of RAM and
the metadata pool is on flash only.
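
Both of those are straightforward to adjust from the command line;
something like the following (the cache size, rule name and pool name are
examples, substitute your own):

    # give the MDS a larger cache, e.g. 16 GiB (example value)
    ceph config set mds mds_cache_memory_limit 17179869184

    # pin the metadata pool to SSD/NVMe OSDs via a device-class rule
    ceph osd crush rule create-replicated replicated-ssd default host ssd
    ceph osd pool set cephfs_metadata crush_rule replicated-ssd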

Raf

On Thu, 12 Mar 2020 at 03:17, Seth Galitzer <sgsax@xxxxxxx> wrote:

> I have a hybrid environment and need to share with both Linux and
> Windows clients. For my previous iterations of file storage, I exported
> nfs and samba shares directly from my monolithic file server. All Linux
> clients used nfs and all Windows clients used samba. Now that I've
> switched to ceph, things are a bit more complicated. I built a gateway
> to export nfs and samba as needed, and connect that as a client to my
> ceph cluster.
>
> After having file locking problems with kernel nfs, I made the switch to
> nfs-ganesha, which has helped immensely. For Linux clients that have
> high I/O needs, like desktops and some web servers, I connect to ceph
> directly for those shares. For all other Linux needs, I use nfs from the
> gateway. For all Windows clients (desktops and a small number of
> servers), I use samba exported from the gateway.
>
> Since my ceph cluster went live in August, I have had some kind of
> strange (to me) error at least once a week, almost always related to the
> gateway client. Last night, it was MDS_CLIENT_OLDEST_TID. Since we're on
> Spring Break at my university and not very busy, I decided to
> unmount/remount the ceph share, requiring stopping nfs and samba
> services. Stopping nfs-ganesha took a while, but it finally completed
> with no complaints from the ceph cluster. Stopping samba took longer and
> gave me MDS_SLOW_REQUEST and MDS_CLIENT_LATE_RELEASE on the mds. It
> finally finished, and I was able to unmount/remount the ceph share and
> that finally cleared all the errors.
>
> This is leading me to believe that samba on the gateway and all the
> clients attaching to that is putting a strain on the connection back to
> ceph. Which finally brings me to my question: is there a better way to
> export samba to my clients using the ceph back end? Or is this as good
> as it gets and I just have to put up with the seemingly frequent errors?
> I can live with the errors and have been able to handle them so far, but
> I know people who have much bigger clusters and many more clients than
> me (by an order of magnitude) and don't see nearly as many errors as I
> do. Which is why I'm trying to figure out what is special about my setup.
>
> All my ceph nodes are running latest nautilus on Centos 7 (I just
> updated last week to 14.2.8), as is the gateway host. I'm mounting ceph
> directly on the gateway (by way of the kernel using cephfs, not
> rados/rbd) to a single mount point and exporting from there.
>
> My searches so far have not turned up anything extraordinarily useful,
> so I'm asking for some guidance here. Any advice is welcome.
>
> Thanks.
> Seth
>
> --
> Seth Galitzer
> Systems Coordinator
> Computer Science Department
> Kansas State University
> http://www.cs.ksu.edu/~sgsax
> sgsax@xxxxxxx
> 785-532-7790
>


-- 
*Rafael Lopez*
Devops Systems Engineer
Monash University eResearch Centre

T: +61 3 9905 9118
M: +61 (0)427682670
E: rafael.lopez@xxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


