Re: NFS

Ganesha can export CephFS or RGW. It cannot export anything else (such as iSCSI or RBD). The config for RGW looks like this:

EXPORT
{
        Export_ID = 1;
        Path = "/";
        Pseudo = "/rgw";
        Access_Type = RW;
        Protocols = 4;
        Transports = TCP;
        FSAL {
                Name = RGW;
                User_Id = "testuser";
                Access_Key_Id = "<substitute yours>";
                Secret_Access_Key = "<substitute yours>";
        }
}

RGW {
        ceph_conf = "/<substitute path to>/ceph.conf";
        # for vstart cluster, name = "client.admin"
        name = "client.rgw.foohost";
        cluster = "ceph";
#       init_args = "-d --debug-rgw=16";
}
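
Once that's in the Ganesha config (typically /etc/ganesha/ganesha.conf), start the daemon and mount the pseudo path from a client. A minimal sketch, assuming the packaged systemd unit is named nfs-ganesha and "ganesha-host" stands in for wherever you run it:

systemctl enable --now nfs-ganesha

# on an NFS client:
mount -t nfs -o nfsvers=4.1,proto=tcp ganesha-host:/rgw /mnt/rgw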


Daniel

On 9/30/19 3:01 PM, Marc Roos wrote:
Just install these

http://download.ceph.com/nfs-ganesha/
nfs-ganesha-rgw-2.7.1-0.1.el7.x86_64
nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64
libnfsidmap-0.25-19.el7.x86_64
nfs-ganesha-mem-2.7.1-0.1.el7.x86_64
nfs-ganesha-xfs-2.7.1-0.1.el7.x86_64
nfs-ganesha-2.7.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.7.1-0.1.el7.x86_64
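
For example (a sketch, assuming yum is already pointed at the matching nfs-ganesha repository on download.ceph.com for your release):

yum install nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw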


And export your CephFS like this:
EXPORT {
         Export_Id = 10;
         Path = /nfs/cblr-repos;
         Pseudo = /cblr-repos;
         FSAL { Name = CEPH; User_Id = "cephfs.nfs.cblr"; Secret_Access_Key = "xxx"; }
         Disable_ACL = FALSE;
         CLIENT { Clients = 192.168.10.2; Access_Type = "RW"; }
         CLIENT { Clients = 192.168.10.253; }
}
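
The User_Id above is a CephX client; a sketch of creating it and fetching the key to paste into Secret_Access_Key (assuming the filesystem is named "cephfs" and the exported directory already exists):

ceph fs authorize cephfs client.cephfs.nfs.cblr /nfs/cblr-repos rw
ceph auth get-key client.cephfs.nfs.cblr

Clients listed in the CLIENT blocks can then mount the pseudo path, e.g.:

mount -t nfs -o nfsvers=4.1,proto=tcp <ganesha-host>:/cblr-repos /mnt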


-----Original Message-----
From: Brent Kennedy [mailto:bkennedy@xxxxxxxxxx]
Sent: Monday, 30 September 2019 20:56
To: 'ceph-users'
Subject:  NFS

Wondering if there are any documents for standing up NFS with an
existing Ceph cluster.  We don’t use ceph-ansible or any other tools
besides ceph-deploy.  The iSCSI directions were pretty good once I got
past the dependencies.

I saw the one based on Rook, but it doesn’t seem to apply to our setup
of Ceph VMs with physical hosts running the OSDs.  The official Ceph
documentation talks about using Ganesha but doesn’t seem to dive into
the details of getting it online.  We don’t use CephFS, so that’s not
set up either, and the basic docs seem to note it is required.  Seems
my google-fu is failing me when I try to find a more definitive guide.

The servers are all CentOS 7 with the latest updates.

Any guidance would be greatly appreciated!

Regards,

-Brent

Existing Clusters:

Test: Nautilus 14.2.2 with 3 OSD servers, 1 mon/mgr, 1 gateway, 2 iSCSI
gateways (all virtual on NVMe)

US Production (HDD): Nautilus 14.2.2 with 13 OSD servers, 3 mons, 4
gateways, 2 iSCSI gateways

UK Production (HDD): Nautilus 14.2.2 with 25 OSD servers, 3 mons/mgr, 3
gateways behind

US Production (SSD): Nautilus 14.2.2 with 6 OSD servers, 3 mons/mgr, 3
gateways, 2 iSCSI gateways


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


