Hi,
I set up a small 3-node cluster as a PoC. I bootstrapped the cluster with
separate networks for the frontend (public network 192.168.30.0/24) and
the backend (cluster network 192.168.41.0/24).
1st small question:
After the bootstrap I noticed that I had mixed up the cluster and public
networks. :( Is there a way to fix this on a running cluster? As a last
resort I would rebuild the cluster. Nevertheless, I can't mount CephFS
on a Linux client using either of the two networks. My Linux client is
CentOS 7 (latest updates) and has 3 NICs, one of them in the public
network and one in the cluster network.
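From the docs I gather that these settings now live in the config
database, so my naive idea would be to just swap the values like below
(untested sketch; I assume the mons and OSDs would also have to be
redeployed or restarted to pick up addresses on the other network):

# swap the two networks (untested, just my idea)
ceph config set global public_network 192.168.30.0/24
ceph config set global cluster_network 192.168.41.0/24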
I bootstrapped the cluster using the following conf file to have two
networks:
/root/ceph.conf:
[global]
public network = 192.168.41.0/24
cluster network = 192.168.30.0/24
I then ran:
cephadm bootstrap -c /root/ceph.conf --mon-ip 192.168.30.11
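What I actually intended, per the frontend/backend plan above, was the
opposite mapping:

[global]
public network = 192.168.30.0/24
cluster network = 192.168.41.0/24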
I have 2 mons, one running on the bootstrap host (192.168.30.11 /
192.168.41.11) and one (gedaopl01, 192.168.30.12 / 192.168.41.12)
running on one of the 3 OSD hosts:
[root@gedasvl02 ~]# ceph -s
  cluster:
    id:     dad3c9fa-1ec7-11eb-94d6-005056b703af
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum gedasvl02,gedaopl01 (age 5h)
    mgr: gedasvl02.cspuee(active, since 12h), standbys: gedaopl01.llogef
    mds: cephfs:1 {0=cephfs.gedaopl03.prrkll=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 11h), 3 in (since 11h)

  task status:
    scrub status:
      mds.cephfs.gedaopl03.prrkll: idle

  data:
    pools:   3 pools, 81 pgs
    objects: 29 objects, 2.2 KiB
    usage:   450 GiB used, 407 GiB / 857 GiB avail
    pgs:     81 active+clean
[root@gedasvl02 ~]# ceph osd metadata 2 | grep addr
    "back_addr": "[v2:192.168.30.12:6800/3112350288,v1:192.168.30.12:6801/3112350288]",
    "front_addr": "[v2:192.168.41.12:6800/3112350288,v1:192.168.41.12:6801/3112350288]",
    "hb_back_addr": "[v2:192.168.30.12:6802/3112350288,v1:192.168.30.12:6803/3112350288]",
    "hb_front_addr": "[v2:192.168.41.12:6802/3112350288,v1:192.168.41.12:6803/3112350288]",
Now when I try to mount CephFS from the Linux client, the mount command
just hangs and eventually times out. I can ping the mon from the client
on both IPs, public (192.168.41.12) and cluster (192.168.30.12), and I
can also see packets coming in on the mon using tcpdump. What could be
wrong here? I'm using ceph-fuse.
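For reference, this is roughly what I run on the client (mount point and
IP are just my test setup; I copied ceph.conf and the admin keyring from
the bootstrap host to /etc/ceph/ on the client beforehand):

yum install -y ceph-fuse
mkdir -p /mnt/cephfs      # test mount point, just my choice
ceph-fuse --id admin -m 192.168.41.12:6789 /mnt/cephfs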
One more question regarding rebuilding the cluster with cephadm: is
there a simple tear-down command? My bootstrap host is a VM, so I can
use snapshots, but the other nodes I have to clean up manually by
removing all the pods and ceph directories.
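I did come across 'cephadm rm-cluster' in the docs; would running
something like this on every host be the intended way (just my guess,
the fsid is the one from ceph -s above)?

cephadm rm-cluster --fsid dad3c9fa-1ec7-11eb-94d6-005056b703af --force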
Best Regards,
Oliver