Re: Is there a better way to make a samba/nfs gateway?

I think I may have cheated...  

I set up the ceph iscsi gateway in HA mode, then a FreeNAS server. Connected
the FreeNAS server to the iscsi targets and poof, I have universal NFS
shares. I stood up a few FreeNAS servers to spread various loads. We also
use the iscsi gateways for direct esxi host connections ( but I will warn
you the esxi iscsi connections seem a little wonky ). I haven't upgraded to
the latest ceph yet, so perhaps more improvements to iscsi have been made
since this was done back in October. FreeNAS talks to both windows and
linux clients with no issues, and securely.
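For anyone wanting to replicate this, the target/gateway plumbing described
above can be sketched in ceph-iscsi's gwcli shell roughly like so. All IQNs,
hostnames, IPs, and the pool/image names are made-up placeholders; adjust to
your environment:

```
# inside the interactive gwcli shell on one of the iscsi gateway nodes
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
/iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
> create ceph-gw-1 192.168.1.101      # first gateway node (HA pair)
> create ceph-gw-2 192.168.1.102      # second gateway node
/> cd /disks
/disks> create pool=rbd image=freenas-lun size=500G
/> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
> create iqn.2005-10.org.freenas.ctl:freenas1   # the freenas initiator
> cd iqn.2005-10.org.freenas.ctl:freenas1
> disk add rbd/freenas-lun
```

FreeNAS then logs in to both gateway portals as a multipath initiator and
carves NFS/SMB shares out of the resulting zvol.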

Regards,
-Brent

Existing Clusters:
Test: Nautilus 14.2.2 with 3 osd servers, 1 mon/mgr, 1 gateway, 2 iscsi
gateways ( all virtual on nvme )
US Production(HDD): Nautilus 14.2.2 with 11 osd servers, 3 mons, 4 gateways,
2 iscsi gateways
UK Production(HDD): Nautilus 14.2.2 with 12 osd servers, 3 mons, 4 gateways
US Production(SSD): Nautilus 14.2.2 with 6 osd servers, 3 mons, 3 gateways,
2 iscsi gateways



-----Original Message-----
From: Seth Galitzer <sgsax@xxxxxxx> 
Sent: Wednesday, March 11, 2020 12:16 PM
To: ceph-users@xxxxxxx
Subject:  Is there a better way to make a samba/nfs gateway?

I have a hybrid environment and need to share with both Linux and Windows
clients. For my previous iterations of file storage, I exported nfs and
samba shares directly from my monolithic file server. All Linux clients used
nfs and all Windows clients used samba. Now that I've switched to ceph,
things are a bit more complicated. I built a gateway to export nfs and samba
as needed, and connect that as a client to my ceph cluster.

After having file locking problems with kernel nfs, I made the switch to
nfs-ganesha, which has helped immensely. For Linux clients that have high
I/O needs, like desktops and some web servers, I connect to ceph directly
for those shares. For all other Linux needs, I use nfs from the gateway. For
all Windows clients (desktops and a small number of servers), I use samba
exported from the gateway.
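For context, a minimal nfs-ganesha export over the CephFS FSAL (i.e. ganesha
talking to the cluster through libcephfs rather than re-exporting a kernel
mount) looks roughly like this; the export id, paths, and cephx client id are
placeholders, not Seth's actual config:

```conf
# /etc/ganesha/ganesha.conf -- minimal sketch, values are placeholders
EXPORT {
    Export_ID = 100;
    Path = "/";                 # path within cephfs to export
    Pseudo = "/cephfs";         # nfsv4 pseudo-root path clients mount
    Access_Type = RW;
    Squash = No_Root_Squash;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = CEPH;            # cephfs via libcephfs, no kernel mount needed
        User_Id = "ganesha";    # cephx client id (placeholder)
    }
}
```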

Since my ceph cluster went live in August, I have had some kind of strange
(to me) error at least once a week, almost always related to the gateway
client. Last night, it was MDS_CLIENT_OLDEST_TID. Since we're on Spring
Break at my university and not very busy, I decided to unmount/remount the
ceph share, which required stopping the nfs and samba services. Stopping
nfs-ganesha took a while, but it completed with no complaints from the ceph
cluster. Stopping samba took longer and gave me MDS_SLOW_REQUEST and
MDS_CLIENT_LATE_RELEASE on the mds. It eventually finished, I was able to
unmount/remount the ceph share, and that cleared all the errors.

This is leading me to believe that samba on the gateway and all the clients
attaching to that is putting a strain on the connection back to ceph. Which
finally brings me to my question: is there a better way to export samba to
my clients using the ceph back end? Or is this as good as it gets and I just
have to put up with the seemingly frequent errors? 
I can live with the errors and have been able to handle them so far, but I
know people who have much bigger clusters and many more clients than me (by
an order of magnitude) and don't see nearly as many errors as I do. Which is
why I'm trying to figure out what is special about my setup.

All my ceph nodes are running the latest nautilus on CentOS 7 (I just
updated last week to 14.2.8), as is the gateway host. I'm mounting cephfs
directly on the gateway (via the kernel client, not rados/rbd) to a single
mount point and exporting from there.
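As a concrete sketch of that layout, the kernel mount plus a samba share on
top of it might look like the following; the monitor address, cephx name,
mount point, and share name are all illustrative placeholders:

```conf
# gateway mounts cephfs once via the kernel client, e.g. in /etc/fstab
# (monitor address and secretfile path are placeholders):
#   mon1.example.com:6789:/  /mnt/cephfs  ceph  name=gateway,secretfile=/etc/ceph/gateway.secret,_netdev  0 0

# /etc/samba/smb.conf -- share re-exporting a subtree of the kernel mount
[shared]
    path = /mnt/cephfs/shared
    read only = no
    browseable = yes
    vfs objects = acl_xattr      # store windows ACLs in xattrs (optional)
```

Every samba client open/lock then funnels through that single kernel-client
session to the MDS, which is one reason the gateway shows up in so many of
the MDS warnings.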

My searches so far have not turned up anything extraordinarily useful, so
I'm asking for some guidance here. Any advice is welcome.

Thanks.
Seth

--
Seth Galitzer
Systems Coordinator
Computer Science Department
Kansas State University
http://www.cs.ksu.edu/~sgsax
sgsax@xxxxxxx
785-532-7790
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


