Does the cephadm.log on that node reveal anything useful? What about
the (active) mgr log?
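
In case it helps, a rough sketch of where I'd look first (the daemon name placeholders below need to be adjusted for your host):

  # on the new node: cephadm's own log
  less /var/log/ceph/cephadm.log
  # cephadm events as seen by the active mgr
  ceph log last cephadm
  ceph -W cephadm              # follow new messages live
  # find the active mgr, then pull that daemon's log on its host
  ceph mgr stat
  cephadm logs --name mgr.<hostname>.<suffix>

If nothing obvious shows up, temporarily raising the cephadm log level (ceph config set mgr mgr/cephadm/log_to_cluster_level debug) usually makes the OSD creation steps visible in "ceph -W cephadm".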
Quoting Brent Kennedy <bkennedy@xxxxxxxxxx>:
Greetings everyone,
We recently moved a ceph-ansible cluster running Pacific on CentOS 8 to
CentOS 8 Stream, converted it to cephadm, and then upgraded to Quincy with
cephadm. Everything with the transition worked, but recently we decided to
add another node to the cluster with 10 more drives. We were able to go to
the web interface and add the host (with its IP and name), which spun up
the basic management containers on the new node.
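
For reference, the dashboard's "Add Host" step should be roughly equivalent to the following orchestrator CLI calls (the IP below is just a placeholder for the new node):

  # add the new host to the orchestrator
  ceph orch host add osdserver6 192.0.2.16
  # confirm cephadm sees the host and its devices
  ceph orch host ls
  ceph orch device ls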
We then went to the OSD section to add the drives, which were showing as
available. They were all recognized, so the drives were added via the web
console. Cephadm spun up the OSDs, and that's where things are stuck: the
OSDs show up in the cluster but are out now. They came up, were then marked
down, and were later marked out. We purged them and zapped the drives, and
after about 10 minutes cephadm had added them back automatically. It then
did the same thing: the OSDs came up, went down, and were marked out again.
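
The automatic re-add is what you would expect if an OSD service spec (e.g. all-available-devices or a matching drive group) is still active; a rough sketch of the relevant commands, with the OSD id, host, and device path as examples only:

  # see which OSD service spec is claiming the devices
  ceph orch ls osd --export
  # remove an OSD and wipe its device in one step (Quincy supports --zap)
  ceph orch osd rm 37 --zap
  # or zap a device directly
  ceph orch device zap osdserver6 /dev/sdX --force
  # pause automatic OSD creation for the default all-available-devices spec
  ceph orch apply osd --all-available-devices --unmanaged=true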
When I look at "ceph osd tree", the new drives are listed, but they don't
show up under any host (they are on host osdserver6). I am trying to figure
out why they are not being placed under a host, since the host was added to
cephadm and the cephadm host checks on the server were good. The maintenance
containers are running on the host with no issues. Any ideas would be
greatly appreciated.
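
Given the zero-weight, host-less entries in the tree below, it looks as though the OSDs were created but never registered under osdserver6 in the CRUSH map. A few checks that might narrow it down (the OSD id is an example):

  # which hostname does the new OSD report?  (should be osdserver6)
  ceph osd metadata 37 | grep hostname
  # OSDs normally insert themselves under their host at startup;
  # make sure that hasn't been disabled
  ceph config get osd osd_crush_update_on_start

If a custom crush location hook or crush_location setting is in play, that would also explain OSDs not landing under their host.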
ID   CLASS  WEIGHT    TYPE NAME           STATUS  REWEIGHT  PRI-AFF
-16         36.38199  host osdserver5
 20    ssd   3.63820      osd.20              up   1.00000  1.00000
 22    ssd   3.63820      osd.22              up   1.00000  1.00000
 23    ssd   3.63820      osd.23              up   1.00000  1.00000
 24    ssd   3.63820      osd.24              up   1.00000  1.00000
 44    ssd   3.63820      osd.44              up   1.00000  1.00000
 45    ssd   3.63820      osd.45              up   1.00000  1.00000
 46    ssd   3.63820      osd.46              up   1.00000  1.00000
 47    ssd   3.63820      osd.47              up   1.00000  1.00000
 48    ssd   3.63820      osd.48              up   1.00000  1.00000
 49    ssd   3.63820      osd.49              up   1.00000  1.00000
 37           0           osd.37            down   1.00000  1.00000
 50           0           osd.50            down   1.00000  1.00000
 51           0           osd.51            down   1.00000  1.00000
 52           0           osd.52            down   1.00000  1.00000
 53           0           osd.53            down   1.00000  1.00000
 54           0           osd.54            down   1.00000  1.00000
 55           0           osd.55            down   1.00000  1.00000
 56           0           osd.56            down   1.00000  1.00000
 57           0           osd.57            down   1.00000  1.00000
 58           0           osd.58            down   1.00000  1.00000
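
If the daemons on osdserver6 do start but simply never register in the CRUSH map, one possible workaround is to place them manually; the bucket, id, and weight below are examples based on the listing above (normally the OSDs do this themselves when osd_crush_update_on_start is true, so the cephadm/mgr logs are still the first place to look for why they go down):

  # create the host bucket if osdserver6 is missing from the CRUSH map
  ceph osd crush add-bucket osdserver6 host
  ceph osd crush move osdserver6 root=default
  # then place each new OSD under it with an explicit weight
  ceph osd crush set osd.37 3.63820 host=osdserver6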
Regards,
-Brent
Existing Clusters:
Test: Quincy 17.2.3 ( all virtual on nvme )
US Production(HDD): Octopus 15.2.16 with 11 osd servers, 3 mons, 4 gateways,
2 iscsi gateways
UK Production(HDD): Nautilus 14.2.22 with 18 osd servers, 3 mons, 4
gateways, 2 iscsi gateways
US Production(SSD): Quincy 17.2.3 Cephadm with 6 osd servers, 5 mons, 4
gateways, 2 iscsi gateways
UK Production(SSD): Quincy 17.2.3 with 6 osd servers, 5 mons, 4 gateways
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx