Re: Is there a better way to make a samba/nfs gateway?

Hello,

We have a CTDB-based HA Samba in our Ceph Management Solution.
It works like a charm, and we connect it to existing Active Directory
domains as well.
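
In rough terms, CTDB needs the private addresses of all gateway nodes,
the floating IPs that clients connect to, and clustering enabled in
smb.conf. A minimal sketch (addresses and interface name are
placeholders, not our actual configuration):

    # /etc/ctdb/nodes -- private addresses of all gateway nodes
    192.168.100.1
    192.168.100.2

    # /etc/ctdb/public_addresses -- floating IPs that clients connect to
    192.168.200.10/24 eth0
    192.168.200.11/24 eth0

    # /etc/samba/smb.conf
    [global]
        clustering = yes

The CTDB recovery lock has to live on shared storage; with Ceph you can
keep it in RADOS via the ctdb_mutex_ceph_rados_helper, if your CTDB
packages include it.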

It's based on vfs_ceph; you can read more about how to configure it
yourself at
https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html.
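
For reference, a minimal share along the lines of that man page (the
share name and cephx user are placeholders):

    [cephfs]
        # path is interpreted inside CephFS, not on a local mount
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        # no local filesystem behind the share, so disable kernel share modes
        kernel share modes = no

With vfs_ceph, smbd talks to the cluster through libcephfs in
userspace, so no kernel CephFS mount is needed on the gateway.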

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.verges@xxxxxxxx
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Fri, 13 Mar 2020 at 13:06, Nathan Fish <lordcirth@xxxxxxxxx> wrote:

> Note that we have had issues with deadlocks when re-exporting CephFS
> via Samba. It appears to occur only with Mac clients, though. In some
> cases a request on a high-level directory has hung and blocked that
> branch for all clients.
>
> On Fri, Mar 13, 2020 at 1:56 AM Konstantin Shalygin <k0ste@xxxxxxxx>
> wrote:
> >
> >
> > On 3/11/20 11:16 PM, Seth Galitzer wrote:
> > > I have a hybrid environment and need to share with both Linux and
> > > Windows clients. For my previous iterations of file storage, I
> > > exported nfs and samba shares directly from my monolithic file server.
> > > All Linux clients used nfs and all Windows clients used samba. Now
> > > that I've switched to ceph, things are a bit more complicated. I built
> > > a gateway to export nfs and samba as needed, and connect that as a
> > > client to my ceph cluster.
> > >
> > > After having file locking problems with kernel nfs, I made the switch
> > > to nfs-ganesha, which has helped immensely. For Linux clients that
> > > have high I/O needs, like desktops and some web servers, I connect to
> > > ceph directly for those shares. For all other Linux needs, I use nfs
> > > from the gateway. For all Windows clients (desktops and a small number
> > > of servers), I use samba exported from the gateway.
> > >
> > > Since my ceph cluster went live in August, I have had some kind of
> > > strange (to me) error at least once a week, almost always related to
> > > the gateway client. Last night, it was MDS_CLIENT_OLDEST_TID. Since
> > > we're on Spring Break at my university and not very busy, I decided to
> > > unmount/remount the ceph share, which required stopping the nfs and
> > > samba services. Stopping nfs-ganesha took a while, but it finally
> > > completed
> > > with no complaints from the ceph cluster. Stopping samba took longer
> > > and gave me MDS_SLOW_REQUEST and MDS_CLIENT_LATE_RELEASE on the mds.
> > > It finally finished, and I was able to unmount/remount the ceph share
> > > and that finally cleared all the errors.
> > >
> > > This is leading me to believe that samba on the gateway, and all the
> > > clients attached to it, is putting a strain on the connection back
> > > to ceph. Which finally brings me to my question: is there a better way
> > > to export samba to my clients using the ceph back end? Or is this as
> > > good as it gets and I just have to put up with the seemingly frequent
> > > errors? I can live with the errors and have been able to handle them
> > > so far, but I know people who have much bigger clusters and many more
> > > clients than me (by an order of magnitude) and don't see nearly as
> > > many errors as I do, which is why I'm trying to figure out what is
> > > special about my setup.
> > >
> > > All my ceph nodes are running the latest Nautilus on CentOS 7 (I just
> > > updated last week to 14.2.8), as is the gateway host. I'm mounting
> > > ceph directly on the gateway (via the kernel CephFS client, not
> > > rados/rbd) at a single mount point and exporting from there.
> > >
> > > My searches so far have not turned up anything extraordinarily useful,
> > > so I'm asking for some guidance here. Any advice is welcome.
> >
> > You can connect to your cluster directly from userland, without the
> > kernel client. Use Samba's vfs_ceph for this.
> >
> >
> >
> > k
>
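
For the nfs-ganesha side discussed above, a minimal FSAL_CEPH export
(also libcephfs in userspace) might look something like this; the
export id, pseudo path, and cephx user are placeholders:

    # /etc/ganesha/ganesha.conf (sketch)
    EXPORT {
        Export_Id = 100;
        # path inside CephFS
        Path = "/";
        # NFSv4 pseudo-root path that clients mount
        Pseudo = "/cephfs";
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";
        }
    }

    CEPH {
        Ceph_Conf = /etc/ceph/ceph.conf;
    }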
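
And the kernel CephFS mount Seth describes on the gateway would be
along these lines (monitor address, cephx user, and paths are
placeholders):

    # kernel client mount that the nfs/samba exports currently sit on
    mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
        -o name=gateway,secretfile=/etc/ceph/gateway.secret
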
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



