Re: ceph recipe for nfs exports


 



Wow, you made it farther than I did.  I got it installed, added hosts, then NOTHING.  It showed there were physical disks on the hosts but wouldn't create the OSDs.  The command was accepted, but NOTHING happened: no output, no error, no NOTHING.  I fought with it for over a week and finally gave up; with no feedback as to what the issue is, it's impossible to troubleshoot.  A product that does NOTHING isn't a product at all.

I posted a detailed message here with screenshots, steps, everything somebody would need to reproduce my situation.  The post got blocked because it was too big and was sent for moderation; it never got approved or rejected.  So I moved on.  I can't be using something that does NOTHING with no way to proceed past that point.
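
(For the record, the places cephadm is supposed to report what it is, or is not, doing are the cephadm log channel and the device inventory; standard commands, e.g.:

ceph log last cephadm                  # recent cephadm/orchestrator log entries
ceph -W cephadm                        # watch the cephadm channel live
ceph orch device ls --wide --refresh   # how the orchestrator sees each disk

Whether they say anything useful in a case like this is another matter.)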
________________________________
From: Roberto Maggi @ Debian <debian108@xxxxxxxxx>
Sent: April 24, 2024 01:39
To: Ceph Users <ceph-users@xxxxxxx>
Subject:  ceph recipe for nfs exports


Hi you all,

I'm almost new to Ceph and I'm understanding, day by day, why the
official support is so expensive :)


I'm setting up a Ceph NFS cluster; the recipe can be found here below.

#######################

--> cluster creation
cephadm bootstrap --mon-ip 10.20.20.81 --cluster-network 10.20.20.0/24 --fsid $FSID \
  --initial-dashboard-user adm --initial-dashboard-password 'Hi_guys' --dashboard-password-noupdate \
  --allow-fqdn-hostname --ssl-dashboard-port 443 \
  --dashboard-crt /etc/ssl/wildcard.it/wildcard.it.crt --dashboard-key /etc/ssl/wildcard.it/wildcard.it.key \
  --allow-overwrite --cleanup-on-failure
cephadm shell --fsid $FSID -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
cephadm add-repo --release reef && cephadm install ceph-common
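--> (optional check, can be skipped) both commands should report the new cluster and an active cephadm orchestrator
ceph -s
ceph orch status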
--> adding hosts and setting labels
for IP in $(grep ceph /etc/hosts | awk '{print $1}') ; do ssh-copy-id -f -i /etc/ceph/ceph.pub root@$IP ; done
ceph orch host add cephstage01 10.20.20.81 --labels _admin,mon,mgr,prometheus,grafana
ceph orch host add cephstage02 10.20.20.82 --labels _admin,mon,mgr,prometheus,grafana
ceph orch host add cephstage03 10.20.20.83 --labels _admin,mon,mgr,prometheus,grafana
ceph orch host add cephstagedatanode01 10.20.20.84 --labels osd,nfs,prometheus
ceph orch host add cephstagedatanode02 10.20.20.85 --labels osd,nfs,prometheus
ceph orch host add cephstagedatanode03 10.20.20.86 --labels osd,nfs,prometheus
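--> (optional check) the six hosts and their labels should now show up in the orchestrator
ceph orch host ls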
--> network setup and daemons deploy
ceph config set mon public_network 10.20.20.0/24,192.168.7.0/24
ceph orch apply mon --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83"
ceph orch apply mgr --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83"
ceph orch apply prometheus --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83,cephstagedatanode01:10.20.20.84,cephstagedatanode02:10.20.20.85,cephstagedatanode03:10.20.20.86"
ceph orch apply grafana --placement="cephstage01:10.20.20.81,cephstage02:10.20.20.82,cephstage03:10.20.20.83,cephstagedatanode01:10.20.20.84,cephstagedatanode02:10.20.20.85,cephstagedatanode03:10.20.20.86"
ceph orch apply node-exporter
ceph orch apply alertmanager
ceph config set mgr mgr/cephadm/secure_monitoring_stack true
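--> (optional check) the services and their daemons should appear once cephadm has placed them (this can take a minute or two)
ceph orch ls
ceph orch ps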
--> disks and OSD setup
for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ssh root@$IP "hostname && wipefs -a -f /dev/sdb && wipefs -a -f /dev/sdc" ; done
ceph config set mgr mgr/cephadm/device_enhanced_scan true
for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch device ls --hostname=$IP --wide --refresh ; done
for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch device zap $IP /dev/sdb ; done
for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch device zap $IP /dev/sdc ; done
for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch daemon add osd $IP:/dev/sdb ; done
for IP in $(grep cephstagedatanode /etc/hosts | awk '{print $1}') ; do ceph orch daemon add osd $IP:/dev/sdc ; done
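--> (optional check) all the OSDs should eventually come up and in
ceph osd tree
ceph -s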
--> ganesha nfs cluster
ceph mgr module enable nfs
ceph fs volume create vol1
ceph nfs cluster create nfs-cephfs "cephstagedatanode01,cephstagedatanode02,cephstagedatanode03" --ingress --virtual-ip 192.168.7.80 --ingress-mode default
ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt --fsname vol1
--> nfs mount
mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
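--> (optional check) the NFS cluster, its backend daemons and the ingress service can be inspected with
ceph nfs cluster info nfs-cephfs
ceph orch ls nfs
ceph orch ls ingress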


Is my recipe correct?


The cluster is made of 3 mon/mgr nodes and 3 OSD/NFS nodes; in each of
the latter I installed one 3 TB SSD for the data and one 300 GB SSD for
the journaling.

My problems are:

- Although I can mount the export, I can't write to it

- I can't understand how to use the sdc disks for journaling (see the spec sketch after this list)

- I can't understand the concept of "pseudo path"
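
For the second point, what I think I would need instead of the per-disk "ceph orch daemon add osd" calls is an OSD service spec that pairs the two disks. A minimal sketch (the file name and service_id are arbitrary, and it assumes sdb is the data disk and sdc the DB disk on every OSD host):

# osd-spec.yaml (hypothetical name)
service_type: osd
service_id: osd-sdb-with-db-on-sdc
placement:
  label: osd
spec:
  data_devices:
    paths:
      - /dev/sdb
  db_devices:
    paths:
      - /dev/sdc

applied with:

ceph orch apply -i osd-spec.yaml

As far as I understand, with BlueStore there is no separate journal anymore, so the 300 GB disk would hold the DB/WAL; and since the OSDs above were already created directly on sdb and sdc, those disks would have to be zapped and the OSDs recreated before such a spec could take effect.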


Here below you can find the JSON output of the export.

--> check
ceph nfs export ls nfs-cephfs
ceph nfs export info nfs-cephfs /mnt
------------------------------------
json output
------------------------------------
{
    "export_id": 1,
    "path": "/",
    "cluster_id": "nfs-cephfs",
    "pseudo": "/mnt",
    "access_type": "RW",
    "squash": "none",
    "security_label": true,
    "protocols": [
        4
    ],
    "transports": [
        "TCP"
    ],
    "fsal": {
        "name": "CEPH",
        "user_id": "nfs.nfs-cephfs.1",
        "fs_name": "vol1"
    },
    "clients": []
}
------------------------------------


Thanks in advance

Rob



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


