Hi,
On 4/24/24 09:39, Roberto Maggi @ Debian wrote:
> ceph orch host add cephstage01 10.20.20.81 --labels _admin,mon,mgr,prometheus,grafana
> ceph orch host add cephstage02 10.20.20.82 --labels _admin,mon,mgr,prometheus,grafana
> ceph orch host add cephstage03 10.20.20.83 --labels _admin,mon,mgr,prometheus,grafana
> ceph orch host add cephstagedatanode01 10.20.20.84 --labels osd,nfs,prometheus
> ceph orch host add cephstagedatanode02 10.20.20.85 --labels osd,nfs,prometheus
> ceph orch host add cephstagedatanode03 10.20.20.86 --labels osd,nfs,prometheus
> --> network setup and daemons deploy
> ceph config set mon public_network 10.20.20.0/24,192.168.7.0/24
> ceph orch apply mon --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83"
> ceph orch apply mgr --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83"
> ceph orch apply prometheus --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83,cephstagedatanode01:10.20.20.84,cephstagedatanode02:10.20.20.85,cephstagedatanode03:10.20.20.86"
> ceph orch apply grafana --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83,cephstagedatanode01:10.20.20.84,cephstagedatanode02:10.20.20.85,cephstagedatanode03:10.20.20.86"
Two remarks here:
- You are labeling all the hosts, but then you use a hostname-based placement strategy for the services.
Why not use the labels to place the services?
- Usually you only need one Prometheus, one Grafana and one Alertmanager in the cluster.
There is no need to deploy these on every host.
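For example, using the labels you already assigned, something like this should work (untested sketch, adjust to your setup):

```shell
# Place daemons by label instead of enumerating hosts.
ceph orch apply mon --placement="label:mon"
ceph orch apply mgr --placement="label:mgr"

# A single instance each is enough for the monitoring stack:
ceph orch apply prometheus --placement="count:1"
ceph orch apply grafana --placement="count:1"
ceph orch apply alertmanager --placement="count:1"
```

The orchestrator then (un)deploys daemons automatically when you add or remove a label from a host.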
> ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt --fsname vol1
> --> nfs mount
> mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
> is my recipe correct?
Apart from the remarks above it should get you a working NFS export.
> - Although I can mount the export I can't write on it
You have not specified a value for --squash when creating the export.
Your CephFS is empty and its root directory is writable only by the root user, but root gets "squashed" to nobody when coming in via NFS.
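One way around this is to recreate the export without root squashing (untested sketch, using the names from your commands):

```shell
# Remove the existing export and recreate it with root squashing
# disabled, so root on the NFS client can create the initial
# directory structure.
ceph nfs export rm nfs-cephfs /mnt
ceph nfs export create cephfs --cluster-id nfs-cephfs \
    --pseudo-path /mnt --fsname vol1 --squash no_root_squash
```

Alternatively, mount the CephFS directly once (kernel client) and chown/chmod its root directory so your NFS users can write to it.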
> - I can't understand how to use the sdc disks for journaling
When all your devices are SSDs you do not need separate "journaling" devices (in today's BlueStore OSDs that would be the RocksDB metadata and WAL).
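If you ever do want the DB/WAL on the sdc devices (only useful when they are faster than the data devices), you would use an OSD service spec. A rough sketch, device paths are assumptions and must match your hosts:

```shell
# OSD service spec putting RocksDB/WAL on /dev/sdc while
# /dev/sda and /dev/sdb hold the data.
cat > osd-spec.yml <<'EOF'
service_type: osd
service_id: osd-with-db
placement:
  label: osd
spec:
  data_devices:
    paths:
      - /dev/sda
      - /dev/sdb
  db_devices:
    paths:
      - /dev/sdc
EOF
ceph orch apply -i osd-spec.yml
```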
> - I can't understand the concept of "pseudo path"
This is an NFSv4 concept. It allows you to mount the virtual root of the NFS server and access all exports below it without having to mount each one separately.
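In your case that would look something like this (untested, using the address from your mount command):

```shell
# Mount the NFSv4 pseudo root instead of a single export;
# every export then shows up as a directory below it.
mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/ /mnt/nfsroot
ls /mnt/nfsroot   # your export is visible here under its pseudo path
```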
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx