Re: Ceph nfs ganesha exports

On Sun, 2019-07-28 at 18:20 +0000, Lee Norvall wrote:
> Update to this: I found that you cannot yet create a 2nd filesystem; it is still experimental.  So I went down this route:
> 
> Added a pool to the existing cephfs, then ran setfattr -n ceph.dir.layout.pool -v SSD-NFS /mnt/cephfs/ssdnfs/ from a ceph-fuse client.
> 
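(For reference, the usual sequence for steering a directory at a
dedicated pool looks something like the sketch below; the pool and path
names are the ones from this mail, while the filesystem name "cephfs"
and the pg count are assumptions:)

    # create the pool and add it to the existing filesystem
    ceph osd pool create SSD-NFS 64
    ceph fs add_data_pool cephfs SSD-NFS

    # from a ceph-fuse (or kernel) client, point the directory at it
    mkdir /mnt/cephfs/ssdnfs
    setfattr -n ceph.dir.layout.pool -v SSD-NFS /mnt/cephfs/ssdnfs

Note that the layout only applies to files created after the attribute
is set; existing files stay in their old pool.
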
> I then NFS-mounted from another box.  I can see the files and dirs etc. from the NFS client, but my issue now is that I do not have permission to write, create dirs, etc.  The same goes for the default setup after running the ansible playbook, even when setting the export to no_root_squash.  Am I missing a chain of permissions?  ganesha-nfs is using the admin userid; is this the same as client.admin, or is this a user I need to create?  Any info appreciated.
> 
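On the userid question: the userid/User_Id strings in the ganesha config
name a cephx user without the "client." prefix, so "admin" there is the
same user as client.admin, and nothing extra needs to be created for it
to work.  A dedicated user is better practice, though; a minimal sketch,
where the client name "ganesha" is just an example and the pool/fs names
are taken from this thread:

    # rw on the filesystem, plus rw on the pool that holds the
    # RADOS_URLS/recovery objects
    ceph auth get-or-create client.ganesha \
        mon 'allow r' \
        osd 'allow rw tag cephfs data=cephfs, allow rw pool=cephfs_data' \
        mds 'allow rw'

...and then point userid/User_Id at "ganesha" in ganesha.conf.
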
> Ceph is on CentOS 7 and SELinux is currently off as well.
> 
> Copy of the ganesha conf below.  Is SecType correct or is it missing something?
> 
> RADOS_URLS {
>    ceph_conf = '/etc/ceph/ceph.conf';
>    userid = "admin";
> }
> %url rados://cephfs_data/ganesha-export-index
> 
> NFSv4 {
>         RecoveryBackend = 'rados_kv';
> }

In your earlier email, you mentioned that you had more than one NFS
server, but rados_kv is not safe in a multi-server configuration. The
servers will compete to store recovery information in the same objects,
and won't honor each other's grace periods.

You may want to explore using "RecoveryBackend = rados_cluster" instead,
which should handle that situation better. See this writeup for some
guidelines:

    https://jtlayton.wordpress.com/2018/12/10/deploying-an-active-active-nfs-cluster-over-cephfs/
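
As a rough sketch of what that looks like in ganesha.conf (option names
as of ganesha 2.8; the namespace and nodeids below are placeholders, and
each server needs its own unique nodeid):

    NFSv4 {
            RecoveryBackend = rados_cluster;
            Minor_Versions = 1,2;
    }

    RADOS_KV {
            ceph_conf = '/etc/ceph/ceph.conf';
            userid = "admin";
            pool = "cephfs_data";
            namespace = "ganesha-grace";
            nodeid = "nfs0";
    }

The shared grace database also has to be seeded once with every server's
nodeid, e.g.:

    ganesha-rados-grace --pool cephfs_data --ns ganesha-grace add nfs0 nfs1 nfs2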

Much of this is already automated too if you use k8s+rook.

> RADOS_KV {
>         ceph_conf = '/etc/ceph/ceph.conf';
>         userid = "admin";
>         pool = "cephfs_data";
> }
> 
> EXPORT
> {
>         Export_id=20133;
>         Path = "/";
>         Pseudo = /cephfile;
>         Access_Type = RW;
>         Protocols = 3,4;
>         Transports = TCP;
>         SecType = sys,krb5,krb5i,krb5p;
>         Squash = Root_Squash;
>         Attr_Expiration_Time = 0;
> 
>         FSAL {
>                 Name = CEPH;
>                 User_Id = "admin";
>         }
> 
> 
> }
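
One thing that jumps out, given the write-permission problem above:
Squash = Root_Squash maps root on the clients to the anonymous user, so
root can't create files or directories even on an RW export. If that
isn't what you want, setting

    Squash = No_Root_Squash;

in the EXPORT block (or testing as a non-root user that owns the
directory) would be the thing to try. If writes still fail with root
squashing off, check the uid/gid and mode on the directory from a
cephfs mount.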
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On 28/07/2019 12:11, Lee Norvall wrote:
> > Hi
> > 
> > I am using ceph-ansible to deploy and am just looking for the best 
> > way/tips on how to export multiple pools/fs.
> > 
> > Ceph: nautilus (14.2.2)
> > NFS-Ganesha v 2.8
> > ceph-ansible stable 4.0
> > 
> > I have 3 x OSD/NFS gateways running, and NFS on the dashboard can see 
> > them in the cluster.  I have managed to export cephfs / and mount it 
> > on another box.
> > 
> > 1) Can I add a new pool/fs to the export under that same NFS gateway 
> > cluster, or
> > 
> > 2) Do I have to do something like add a new pool to the fs and then 
> > setfattr to make the layout /newfs_dir point to /new_pool?  Does this 
> > cause issues or a false object count?
> > 
> > 3) Any other, better ways?
> > 
> > Rgds
> > 
> > Lee
> > 
> 
> -- 
> Lee Norvall | CEO / Founder
> Mob. +44 (0)7768 201884
> Tel. +44 (0)20 3026 8930
> Web. www.blocz.io
> 
> Enterprise Cloud | Private Cloud | Hybrid/Multi Cloud | Cloud Backup

-- 
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com