Re: ceph recipe for nfs exports

Hi you all,

I made the suggested changes, but the situation is still the same. I set squash to "all", since I only want "nobody:nogroup" ids, but I can't understand where the path should point. If I understood it correctly, I pass raw disks (unpartitioned and therefore unformatted) to the OSD daemons, then create the NFS daemons, and Ceph will autonomously link the NFS shares to the filesystem managed by the OSDs. Is that correct?


Don't I have to create a filesystem on the OSDs?

By the way, this is the export dump:

root@cephstage01:~# ceph nfs export info nfs-cephfs /mnt
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "nfs-cephfs",
  "export_id": 1,
  "fsal": {
    "fs_name": "vol1",
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.1"
  },
  "path": "/",
  "protocols": [
    4
  ],
  "pseudo": "/mnt",
  "security_label": true,
  "squash": "all",
  "transports": [
    "TCP"
  ]
}

I can mount it correctly, but when I try to write or touch any file in it, I get "Permission denied":

❯ sudo mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
❯ touch /mnt/ceph/pino
touch: cannot touch '/mnt/ceph/pino': Permission denied
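One thing I suspect, though I'm not sure: with squash set to "all", every NFS client user is mapped to the anonymous uid/gid, and that anonymous user still needs write permission on the exported CephFS path, whose root is normally owned by root with mode 755. A rough sketch of what I could try to check/loosen that, by mounting the CephFS directly with the admin key (the monitor address and secret file path here are just placeholders from my setup):

sudo mkdir -p /mnt/vol1
sudo mount -t ceph 10.20.20.81:6789:/ /mnt/vol1 -o name=admin,secretfile=/etc/ceph/admin.secret
sudo chmod 777 /mnt/vol1    # let the squashed (anonymous) user write at the export root
sudo umount /mnt/vol1

If permissions are the cause, the touch over NFS should then succeed.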


Any suggestion would be appreciated.


Rob


On 4/24/24 16:05, Adam King wrote:

    - Although I can mount the export I can't write on it

What error are you getting when trying to do the write? The way you set things up doesn't look too different from one of our integration tests for ingress over NFS (https://github.com/ceph/ceph/blob/main/qa/suites/orch/cephadm/smoke-roleless/2-services/nfs-ingress.yaml), and that test does a simple read/write to the export after creating and mounting it.
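For reference, the kind of check that test does boils down to something like this (a minimal sketch, not the exact test script; the mount point is just an example):

mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
echo test > /mnt/ceph/testfile    # simple write
cat /mnt/ceph/testfile            # read it back

so knowing exactly where that fails for you, and with what error, would help.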

    - I can't understand how to use the sdc disks for journaling


You should be able to specify a `journal_devices` section in an OSD spec, for example:

service_type: osd
service_id: foo
placement:
  hosts:
  - vm-00
spec:
  data_devices:
    paths:
    - /dev/vdb
  journal_devices:
    paths:
    - /dev/vdc

That will make non-colocated OSDs, where the devices from the journal_devices section are used as journal devices for the OSDs created on the devices in the data_devices section. That said, I'd recommend looking through https://docs.ceph.com/en/latest/cephadm/services/osd/#advanced-osd-service-specifications first and seeing whether any filtering option other than the path can be used. The path a device gets can change on reboot, and you could end up with cephadm using a device you don't want it to, because that device picked up the path another device held previously.
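For example, a sketch of a spec that filters by device size instead of path (assuming the sizes you mentioned, roughly 3 TB data disks and 300 GB journal disks; adjust the ranges to your actual hardware):

service_type: osd
service_id: foo
placement:
  hosts:
  - vm-00
spec:
  data_devices:
    size: '1TB:'      # anything 1 TB and larger becomes a data device
  journal_devices:
    size: ':500GB'    # anything up to 500 GB becomes a journal device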

    - I can't understand the concept of "pseudo path"


I don't know the low-level details either, but it seems to just be the path nfs-ganesha presents to the client. There is another argument to `ceph nfs export create`, plain "path" rather than pseudo-path, that marks the actual path within the CephFS that the export is rooted at. It's optional and defaults to "/" (so the export you made is rooted at the top of the fs). I think that's the one that really matters; the pseudo-path seems to just act as a user-facing name for the path.
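As an example of what I mean (the /data subdirectory here is just hypothetical, not something from your cluster):

# export the /data subdirectory of vol1, presented to clients under the pseudo-path /mnt
ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt --fsname vol1 --path /data

A client mounting 192.168.7.80:/mnt would then actually be working inside /data of the cephfs, while your current export (path "/") is rooted at the top of the fs.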

On Wed, Apr 24, 2024 at 3:40 AM Roberto Maggi @ Debian <debian108@xxxxxxxxx> wrote:

    Hi you all,

    I'm almost new to Ceph and I'm understanding, day by day, why the
    official support is so expensive :)


    I'm setting up a Ceph NFS cluster; the recipe can be found
    below.

    #######################

    --> cluster creation
    cephadm bootstrap --mon-ip 10.20.20.81 --cluster-network 10.20.20.0/24 --fsid $FSID \
      --initial-dashboard-user adm --initial-dashboard-password 'Hi_guys' --dashboard-password-noupdate \
      --allow-fqdn-hostname --ssl-dashboard-port 443 \
      --dashboard-crt /etc/ssl/wildcard.it/wildcard.it.crt --dashboard-key /etc/ssl/wildcard.it/wildcard.it.key \
      --allow-overwrite --cleanup-on-failure
    cephadm shell --fsid $FSID -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
    cephadm add-repo --release reef && cephadm install ceph-common
    --> adding hosts and set labels
    for IP in $(grep ceph /etc/hosts | awk '{print $1}') ; do ssh-copy-id -f -i /etc/ceph/ceph.pub root@$IP ; done
    ceph orch host add cephstage01 10.20.20.81 --labels _admin,mon,mgr,prometheus,grafana
    ceph orch host add cephstage02 10.20.20.82 --labels _admin,mon,mgr,prometheus,grafana
    ceph orch host add cephstage03 10.20.20.83 --labels _admin,mon,mgr,prometheus,grafana
    ceph orch host add cephstagedatanode01 10.20.20.84 --labels osd,nfs,prometheus
    ceph orch host add cephstagedatanode02 10.20.20.85 --labels osd,nfs,prometheus
    ceph orch host add cephstagedatanode03 10.20.20.86 --labels osd,nfs,prometheus
    --> network setup and daemons deploy
    ceph config set mon public_network 10.20.20.0/24,192.168.7.0/24
    ceph orch apply mon --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83"
    ceph orch apply mgr --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83"
    ceph orch apply prometheus --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83,cephstagedatanode01:10.20.20.84,cephstagedatanode02:10.20.20.85,cephstagedatanode03:10.20.20.86"
    ceph orch apply grafana --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83,cephstagedatanode01:10.20.20.84,cephstagedatanode02:10.20.20.85,cephstagedatanode03:10.20.20.86"
    ceph orch apply node-exporter
    ceph orch apply alertmanager
    ceph config set mgr mgr/cephadm/secure_monitoring_stack true
    --> disks and osd setup
    for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ssh root@$IP "hostname && wipefs -a -f /dev/sdb && wipefs -a -f /dev/sdc" ; done
    ceph config set mgr mgr/cephadm/device_enhanced_scan true
    for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch device ls --hostname=$IP --wide --refresh ; done
    for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch device zap $IP /dev/sdb ; done
    for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch device zap $IP /dev/sdc ; done
    for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch daemon add osd $IP:/dev/sdb ; done
    for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch daemon add osd $IP:/dev/sdc ; done
    --> ganesha nfs cluster
    ceph mgr module enable nfs
    ceph fs volume create vol1
    ceph nfs cluster create nfs-cephfs "cephstagedatanode01,cephstagedatanode02,cephstagedatanode03" --ingress --virtual-ip 192.168.7.80 --ingress-mode default
    ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt --fsname vol1
    --> nfs mount
    mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph


    Is my recipe correct?


    The cluster is made up of 3 mon/mgr nodes and 3 osd/nfs nodes; in the
    latter I installed one 3 TB SSD for the data and one 300 GB SSD for
    the journaling.

    My problems are:

    - Although I can mount the export I can't write on it

    - I can't understand how to use the sdc disks for journaling

    - I can't understand the concept of "pseudo path"


    Below you can find the JSON output of the export:

    --> check
    ceph nfs export ls nfs-cephfs
    ceph nfs export info nfs-cephfs /mnt
    ------------------------------------
    json file
    ---------
    {
      "export_id": 1,
      "path": "/",
      "cluster_id": "nfs-cephfs",
      "pseudo": "/mnt",
      "access_type": "RW",
      "squash": "none",
      "security_label": true,
      "protocols": [
        4
      ],
      "transports": [
        "TCP"
      ],
      "fsal": {
        "name": "CEPH",
        "user_id": "nfs.nfs-cephfs.1",
        "fs_name": "vol1"
      },
      "clients": []
    }
    ------------------------------------


    Thanks in advance

    Rob




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



