Re: NFS

Awesome, thank you for giving an overview of these features; sounds like the correct direction then!

-Brent

-----Original Message-----
From: Daniel Gryniewicz <dang@xxxxxxxxxx> 
Sent: Thursday, October 3, 2019 8:20 AM
To: Brent Kennedy <bkennedy@xxxxxxxxxx>
Cc: Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  NFS

So, Ganesha is an NFS gateway, living in userspace.  It provides access via NFS (for any NFS client) to a number of clustered storage systems, or to local filesystems on its host.  It can run on any system that has access to the cluster (Ceph in this case).  One Ganesha instance can serve quite a few clients (the limit typically being either memory on the Ganesha node or network bandwidth).
Ganesha's configuration lives in /etc/ganesha/ganesha.conf.  There should be man pages related to Ganesha and its configuration installed when Ganesha is installed.
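
As a rough sketch on a stock CentOS 7 box (the man page and service names below are the ones the nfs-ganesha packages normally ship; adjust if your packaging differs):

    # general and per-FSAL configuration man pages
    man ganesha-config
    man ganesha-ceph-config
    man ganesha-rgw-config

    # start Ganesha and have it come back after a reboot
    systemctl enable nfs-ganesha
    systemctl start nfs-ganesha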

Ganesha has a number of FSALs (File System Abstraction Layers) that work with a number of different clustered storage systems.  For Ceph, Ganesha has two FSALs: FSAL_CEPH works on top of CephFS, and FSAL_RGW works on top of RadosGW.  FSAL_CEPH provides full NFS semantics, since CephFS is a full POSIX filesystem; FSAL_RGW provides slightly limited semantics, since RGW itself is not POSIX and doesn't provide everything.  For example, you cannot write to an arbitrary location within a file; you can only overwrite the entire file.

Anything you can store in the underlying storage (CephFS or RadosGW) can be stored/accessed by Ganesha.  So, 20+GB files should work fine on either one.
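
A client then mounts an export by its Pseudo path, like any other NFSv4 server.  A minimal sketch, with a made-up gateway hostname and the Pseudo paths from the examples further down this thread:

    # FSAL_RGW export (Pseudo = "/rgw")
    mount -t nfs -o nfsvers=4 ganesha-gw.example.com:/rgw /mnt/rgw

    # FSAL_CEPH export (Pseudo = /cblr-repos)
    mount -t nfs -o nfsvers=4 ganesha-gw.example.com:/cblr-repos /mnt/cblr-repos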

Daniel

On Tue, Oct 1, 2019 at 10:45 PM Brent Kennedy <bkennedy@xxxxxxxxxx> wrote:
>
> We might have to back up a step here so I can understand.  Are you
> saying to stand up a new VM with just those packages installed, then
> configure the export file ( the file location isn’t mentioned in the
> ceph docs ), and supposedly a client can then connect to it?  ( only
> Linux clients, or any NFS client? )
>
> I don’t use CephFS, so it would be an object storage backend; will that be ok with multiple hosts accessing files through the one NFS gateway, or should I configure multiple gateways ( one for each share )?
>
> I was hoping to save large files ( 20+ GB ); should I stand up CephFS instead for this?
>
> I am used to using a NAS storage appliance server ( or FreeNAS ), so
> using ceph as a NAS backend is new to me ( thus I might be
> overthinking this )  :)
>
> -Brent
>
> -----Original Message-----
> From: Daniel Gryniewicz <dang@xxxxxxxxxx>
> Sent: Tuesday, October 1, 2019 8:20 AM
> To: Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>; bkennedy 
> <bkennedy@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  NFS
>
> Ganesha can export CephFS or RGW.  It cannot export anything else (like iSCSI or RBD).  Config for RGW looks like this:
>
> EXPORT
> {
>          Export_ID=1;
>          Path = "/";
>          Pseudo = "/rgw";
>          Access_Type = RW;
>          Protocols = 4;
>          Transports = TCP;
>          FSAL {
>                  Name = RGW;
>                  User_Id = "testuser";
>                  Access_Key_Id ="<substitute yours>";
>                  Secret_Access_Key = "<substitute yours>";
>          }
> }
>
> RGW {
>          ceph_conf = "/<substitute path to>/ceph.conf";
>          # for vstart cluster, name = "client.admin"
>          name = "client.rgw.foohost";
>          cluster = "ceph";
> #       init_args = "-d --debug-rgw=16";
> }
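>
> The User_Id / Access_Key_Id / Secret_Access_Key in the FSAL block are just an RGW user's S3 credentials.  A sketch of creating such a user and reading the keys back ( the uid and display name here are only the example values ):
>
>     # create the RGW user, then print its access_key / secret_key
>     radosgw-admin user create --uid=testuser --display-name="ganesha test user"
>     radosgw-admin user info --uid=testuser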
>
>
> Daniel
>
> On 9/30/19 3:01 PM, Marc Roos wrote:
> >
> > Just install these
> >
> > http://download.ceph.com/nfs-ganesha/
> > nfs-ganesha-rgw-2.7.1-0.1.el7.x86_64
> > nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64
> > libnfsidmap-0.25-19.el7.x86_64
> > nfs-ganesha-mem-2.7.1-0.1.el7.x86_64
> > nfs-ganesha-xfs-2.7.1-0.1.el7.x86_64
> > nfs-ganesha-2.7.1-0.1.el7.x86_64
> > nfs-ganesha-ceph-2.7.1-0.1.el7.x86_64
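> >
> > ( Roughly, on CentOS 7 with those download.ceph.com packages available
> > to yum, something like this pulls in the Ceph-related pieces used
> > below; package names are the ones listed above: )
> >
> > # Ganesha core plus the CephFS and RGW FSALs
> > yum install nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw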
> >
> >
> > And export your cephfs like this:
> > EXPORT {
> >          Export_Id = 10;
> >          Path = /nfs/cblr-repos;
> >          Pseudo = /cblr-repos;
> >          FSAL { Name = CEPH; User_Id = "cephfs.nfs.cblr"; 
> > Secret_Access_Key = "xxx"; }
> >          Disable_ACL = FALSE;
> >          CLIENT { Clients = 192.168.10.2; access_type = "RW"; }
> >          CLIENT { Clients = 192.168.10.253; } }
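> >
> > ( The User_Id / Secret_Access_Key in the FSAL block are just a cephx
> > client and its key.  A sketch of creating one, assuming the filesystem
> > is named "cephfs" -- substitute your own fs name, path and client id: )
> >
> > # authorize the client for the exported path, then read back its key
> > ceph fs authorize cephfs client.cephfs.nfs.cblr /nfs/cblr-repos rw
> > ceph auth get-key client.cephfs.nfs.cblr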
> >
> >
> > -----Original Message-----
> > From: Brent Kennedy [mailto:bkennedy@xxxxxxxxxx]
> > Sent: maandag 30 september 2019 20:56
> > To: 'ceph-users'
> > Subject:  NFS
> >
> > Wondering if there are any documents for standing up NFS with an 
> > existing ceph cluster.  We don’t use ceph-ansible or any other tools 
> > besides ceph-deploy.  The iscsi directions were pretty good once I 
> > got past the dependencies.
> >
> >
> >
> > I saw the one based on Rook, but it doesn’t seem to apply to our
> > setup of ceph VMs with physical hosts doing OSDs.  The official ceph
> > documents talk about using Ganesha but don’t seem to dive into the
> > details of what the process is for getting it online.  We don’t use
> > cephfs, so that’s not set up either.  The basic docs seem to note this is required.
> > Seems my google-fu is failing me when I try to find a more
> > definitive guide.
> >
> >
> >
> > The servers are all CentOS 7 with the latest updates.
> >
> >
> >
> > Any guidance would be greatly appreciated!
> >
> >
> >
> > Regards,
> >
> > -Brent
> >
> >
> >
> > Existing Clusters:
> >
> > Test: Nautilus 14.2.2 with 3 osd servers, 1 mon/man, 1 gateway, 2 
> > iscsi gateways ( all virtual on nvme )
> >
> > US Production(HDD): Nautilus 14.2.2 with 13 osd servers, 3 mons, 4 
> > gateways, 2 iscsi gateways
> >
> > UK Production(HDD): Nautilus 14.2.2 with 25 osd servers, 3 mons/man, 
> > 3 gateways behind
> >
> > US Production(SSD): Nautilus 14.2.2 with 6 osd servers, 3 mons/man, 
> > 3 gateways, 2 iscsi gateways
> >
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



