Yep, there's a note at the bottom of [1]:
Note: Only NFS v4.0+ is supported.
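So on the client you would have to request v4 explicitly; a minimal example
(hostname and mount point are placeholders, the pseudo path is the one from
your export):
mount -t nfs -o vers=4.0 ceph-storage-3:/objstore /mnt/nfs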
Quoting "Jens Hyllegaard (Soft Design A/S)" <jens.hyllegaard@xxxxxxxxxxxxx>:
Hi.
You are correct. There must be some hardcoded occurrences of nfs-ganesha.
I tried creating a new cluster using the ceph nfs cluster create command.
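For reference, the general form of that command should be roughly the
following (cluster name taken from this thread; the exact syntax may differ
between releases):
ceph nfs cluster create cephfs objstore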
I was still unable to create an export using the management interface; I
still got permission errors.
But I created the folder manually and did a chmod 777 on it. I then
made the nfs export using the management interface and pointed it at
the folder.
I am, however, unable to mount the NFS share when specifying only V3
on the export. I noticed you mentioned that NFSv3 is not supported?
Regards
Jens
-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: 21 December 2020 11:40
To: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: Re: Setting up NFS with Octopus
Hi,
I am still not sure if I need to create two different pools, one for the
NFS daemon and one for the export?
the pool (and/or namespace) you specify in your nfs.yaml is only for the
ganesha configuration (and should be created for you). It doesn't store NFS
data, since that is handled by CephFS (the backend), so the data pool(s) of
your CephFS also store your NFS data. The CephFS should already be present,
which seems to be true in your case.
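If you want to see what ganesha stores there, listing the objects in that
pool/namespace should show the config objects, e.g. (pool and namespace
names taken from your 'ceph orch apply nfs' call):
rados -p objpool -N nfs-ns ls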
I'm wondering if the name of your cephfs could be the reason for the
failure; maybe it's hard-coded somewhere (it wouldn't be the first time),
but that's nothing more than an assumption. It could be worth a try, and it
shouldn't take too long to tear down the cephfs and recreate it. If you try
that, you should also tear down the ganesha pool just to clean up properly.
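A rough sketch of that teardown and recreation (double-check the names
first; deleting a pool requires mon_allow_pool_delete=true, and the new
volume name is a placeholder):
ceph fs volume rm objstore --yes-i-really-mean-it
ceph osd pool rm objpool objpool --yes-i-really-really-mean-it
ceph fs volume create <new_fs_name>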
The detailed steps are basically covered in [1] (I believe you already
referenced that in this thread, though). I only noticed that the
'ceph nfs export ls' commands don't seem to work, but that information is
present in the dashboard.
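(The command in question would be something like 'ceph nfs export ls
objstore', i.e. with the cluster id as the argument.)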
Also keep in mind that NFSv3 is not supported when you create the share.
Regards,
Eugen
[1] https://docs.ceph.com/en/latest/cephfs/fs-nfs-exports/
Quoting "Jens Hyllegaard (Soft Design A/S)"
<jens.hyllegaard@xxxxxxxxxxxxx>:
This is the output from ceph status:
  cluster:
    id:     9d7bc71a-3f88-11eb-bc58-b9cfbaed27d3
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled

  services:
    mon: 3 daemons, quorum ceph-storage-1.softdesign.dk,ceph-storage-2,ceph-storage-3 (age 4d)
    mgr: ceph-storage-1.softdesign.dk.vsrdsm(active, since 4d), standbys: ceph-storage-3.jglzte
    mds: objstore:1 {0=objstore.ceph-storage-1.knaufh=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 3d), 3 in (since 3d)

  task status:
    scrub status:
        mds.objstore.ceph-storage-1.knaufh: idle

  data:
    pools:   4 pools, 97 pgs
    objects: 31 objects, 25 KiB
    usage:   3.1 GiB used, 2.7 TiB / 2.7 TiB avail
    pgs:     97 active+clean

  io:
    client:   170 B/s rd, 0 op/s rd, 0 op/s wr
So everything seems to be OK.
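(The remaining HEALTH_WARN could presumably be cleared by tagging the
ganesha pool with an application label, e.g.
'ceph osd pool application enable objpool nfs'.)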
I wonder if anyone could guide me from scratch on how to set up NFS.
I am still not sure if I need to create two different pools, one for the
NFS daemon and one for the export?
Regards
Jens
-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: 18 December 2020 16:30
To: ceph-users@xxxxxxx
Subject: Re: Setting up NFS with Octopus
What is the cluster status? The permissions seem correct; maybe the OSDs
have a problem?
Quoting "Jens Hyllegaard (Soft Design A/S)"
<jens.hyllegaard@xxxxxxxxxxxxx>:
I have tried mounting the CephFS as two different users.
I tried creating a user objuser with:
fs authorize objstore client.objuser / rw
And I tried mounting using the admin user.
The mount works as expected, but neither user is able to create files or
folders unless I use sudo; with sudo it works for both users.
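For reference, a kernel-client mount for such a user would look roughly
like this (monitor host and secret file are placeholders):
sudo mount -t ceph ceph-storage-1:/ /mnt/objstore -o name=objuser,secretfile=/etc/ceph/objuser.secret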
The client.objuser keyring is:
client.objuser
        key: AQCGodxfuuLxCBAAMjaSNM58JtkkUwO8UqGGYw==
        caps: [mds] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=objstore
Regards
Jens
-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: 18 December 2020 13:25
To: Jens Hyllegaard (Soft Design A/S) <jens.hyllegaard@xxxxxxxxxxxxx>
Cc: 'ceph-users@xxxxxxx' <ceph-users@xxxxxxx>
Subject: Re: Re: Setting up NFS with Octopus
Sorry, I was afk. Did you authorize a client against that new cephfs
volume? I'm not sure, because I did it slightly differently and mine is an
upgraded cluster. But a 'permission denied' error sounds like no one is
allowed to write into cephfs.
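The authorization step would be something along these lines (general form):
ceph fs authorize <fs_name> client.<client_id> <path> rw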
Quoting "Jens Hyllegaard (Soft Design A/S)"
<jens.hyllegaard@xxxxxxxxxxxxx>:
I found out how to get the information:
client.nfs.objstore.ceph-storage-3
        key: AQBCRNtfsBY8IhAA4MFTghHMT4rq58AvAsPclw==
        caps: [mon] allow r
        caps: [osd] allow rw pool=objpool namespace=nfs-ns
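(Presumably obtained with something like 'ceph auth get
client.nfs.objstore.ceph-storage-3' or 'ceph auth ls'.)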
Regards
Jens
-----Original Message-----
From: Jens Hyllegaard (Soft Design A/S)
<jens.hyllegaard@xxxxxxxxxxxxx>
Sent: 18 December 2020 12:10
To: 'Eugen Block' <eblock@xxxxxx>; 'ceph-users@xxxxxxx'
<ceph-users@xxxxxxx>
Subject: Re: Setting up NFS with Octopus
I am sorry, but I am not sure how to do that. We have just started
working with Ceph.
-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: 18 December 2020 12:06
To: Jens Hyllegaard (Soft Design A/S)
<jens.hyllegaard@xxxxxxxxxxxxx>
Subject: Re: Re: Setting up NFS with Octopus
Oh, you're right, it worked for me; I just tried that with a new path and
it was created for me.
Can you share the client keyrings? I have two NFS daemons running, and
they have these permissions:
client.nfs.ses7-nfs.host2
        key: AQClNNJf5KHVERAAAzhpp9Mclh5wplrcE9VMkQ==
        caps: [mon] allow r
        caps: [osd] allow rw pool=nfs-test namespace=ganesha
client.nfs.ses7-nfs.host3
        key: AQCqNNJf4rlqBhAARGTMkwXAldeprSYgmPEmJg==
        caps: [mon] allow r
        caps: [osd] allow rw pool=nfs-test namespace=ganesha
Quoting "Jens Hyllegaard (Soft Design A/S)"
<jens.hyllegaard@xxxxxxxxxxxxx>:
On the Create NFS export page it says the directory will be created.
Regards
Jens
-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: 18 December 2020 11:52
To: ceph-users@xxxxxxx
Subject: Re: Setting up NFS with Octopus
Hi,
is the path (/objstore) present within your CephFS? If not, you need to
mount the CephFS root first and create the directory so that NFS can
access it.
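A minimal sketch, assuming the admin keyring is available on the client
(monitor host and paths are placeholders):
sudo mount -t ceph ceph-storage-1:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
sudo mkdir /mnt/cephfs/objstore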
Quoting "Jens Hyllegaard (Soft Design A/S)"
<jens.hyllegaard@xxxxxxxxxxxxx>:
Hi.
We are completely new to Ceph and are exploring using it as an NFS server
at first, expanding from there.
However, we have not been successful in getting a working solution.
I have set up a test environment with 3 physical servers, each with one
OSD, using the guide at:
https://docs.ceph.com/en/latest/cephadm/install/
I created a new replicated pool:
ceph osd pool create objpool replicated
And then I deployed the gateway:
ceph orch apply nfs objstore objpool nfs-ns
I then created a new CephFS volume:
ceph fs volume create objstore
So far so good 😊
My problem is when I try to create the NFS export. The settings are as
follows:
Cluster: objstore
Daemons: nfs.objstore
Storage Backend: CephFS
CephFS User ID: admin
CephFS Name: objstore
CephFS Path: /objstore
NFS Protocol: NFSV3
Access Type: RW
Squash: all_squash
Transport protocol: both UDP & TCP
Client: Any client can access
However, when I click on Create NFS export, I get:
Failed to create NFS 'objstore:/objstore'
error in mkdirs /objstore: Permission denied [Errno 13]
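(For comparison, the CLI form of such an export per the fs-nfs-exports
documentation would be roughly 'ceph nfs export create cephfs <fsname>
<clusterid> <pseudo_path>', e.g. 'ceph nfs export create cephfs objstore
objstore /objstore'; the exact argument order and options vary between
releases, so treat this as a sketch.)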
Has anyone got an idea as to why this is not working?
If you need any further information, do not hesitate to say so.
Best regards,
Jens Hyllegaard
Senior consultant
Soft Design
Rosenkaeret 13 | DK-2860 Søborg | Denmark | +45 39 66 02 00 |
softdesign.dk | synchronicer.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx