Hi,
While installing Nautilus on a five-node cluster, we installed one node first and then the remaining four. Afterwards we found that the fifth node (cn5) was out of quorum and that its fsid differed from the other nodes. We copied the ceph.conf from the four good nodes to the fifth node and restarted the Ceph service, but we are still unable to bring the fifth node into quorum.
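For reference, this is roughly what we did on cn5 (paths and the systemd unit name are assumptions; a sketch of our steps, not a verbatim transcript):

# scp cn1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
# systemctl restart ceph-mon@cn5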
# ceph -s
  cluster:
    id:     92e8e879-041f-49fd-a26a-027814e0255b
    health: HEALTH_WARN
            1/5 mons down, quorum cn1,cn2,cn3,cn4

  services:
    mon: 5 daemons, quorum cn1,cn2,cn3,cn4 (age 44m), out of quorum: cn5
But when we print the monmap, we see that it is correct and identical on all five nodes:
# monmaptool --print /tmp/monmap
monmaptool: monmap file /tmp/monmap
epoch 2
fsid 92e8e879-041f-49fd-a26a-027814e0255b
last_changed 2020-01-13 05:47:12.846861
created 2020-01-10 16:19:21.340371
min_mon_release 14 (nautilus)
0: [v2:10.50.11.11:3300/0,v1:10.50.11.11:6789/0] mon.cn1
1: [v2:10.50.11.12:3300/0,v1:10.50.11.12:6789/0] mon.cn2
2: [v2:10.50.11.13:3300/0,v1:10.50.11.13:6789/0] mon.cn3
3: [v2:10.50.11.14:3300/0,v1:10.50.11.14:6789/0] mon.cn4
4: [v2:10.50.11.15:3300/0,v1:10.50.11.15:6789/0] mon.cn5
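For the record, /tmp/monmap was obtained roughly as follows (exact invocations are from memory, so treat them as assumptions). On a node in quorum:

# ceph mon getmap -o /tmp/monmap

and on cn5, with the mon daemon stopped, from the local mon store:

# ceph-mon -i cn5 --extract-monmap /tmp/monmap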
Regards,
Sridhar S