Re: Cephadm - Adding host to migrated cluster

Sorry, I didn't include the cephadm log from the OSD node:

2022-10-17 03:38:45,571 7f5e65b33b80 DEBUG /usr/bin/podman: ceph version
17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy (stable)
2022-10-17 03:38:45,673 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:45,691 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:45,814 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-37"
2022-10-17 03:38:45,957 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.37"
2022-10-17 03:38:45,981 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:45,993 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:46,129 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-50"
2022-10-17 03:38:46,275 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.50"
2022-10-17 03:38:46,297 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:46,314 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:46,452 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-51"
2022-10-17 03:38:46,591 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.51"
2022-10-17 03:38:46,615 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:46,627 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:46,764 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-52"
2022-10-17 03:38:46,912 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.52"
2022-10-17 03:38:46,934 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:46,950 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:47,088 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-53"
2022-10-17 03:38:47,234 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.53"
2022-10-17 03:38:47,256 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:47,273 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:47,423 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-54"
2022-10-17 03:38:47,561 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.54"
2022-10-17 03:38:47,581 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:47,591 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:47,727 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-55"
2022-10-17 03:38:47,869 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.55"
2022-10-17 03:38:47,891 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:47,905 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:48,039 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-56"
2022-10-17 03:38:48,185 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.56"
2022-10-17 03:38:48,209 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:48,226 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:48,368 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-57"
2022-10-17 03:38:48,523 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.57"
2022-10-17 03:38:48,547 7f5e65b33b80 DEBUG systemctl: enabled
2022-10-17 03:38:48,559 7f5e65b33b80 DEBUG systemctl: failed
2022-10-17 03:38:48,690 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd-58"
2022-10-17 03:38:48,833 7f5e65b33b80 DEBUG /usr/bin/podman: Error:
inspecting object: no such object:
"ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad-osd.58"

I can confirm the containers are not running.  One thing to note: after it
failed the first time, the daemons persist in the cephadm dashboard until I
delete them manually from the node.  It's like the container for each of the
disks never spins up on the node.
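
In case it's useful, this is roughly what I'm checking on the new node (the
unit names are just my guess based on cephadm's usual naming and the fsid
from the log above):

# what cephadm thinks is deployed on this host
cephadm ls
# orchestrator view of the osd daemons, run from an admin node
ceph orch ps --daemon-type osd
# systemd unit and journal for one of the new osds
systemctl status ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad@osd.37.service
journalctl -u ceph-c5a1e7b2-27cd-4a68-8279-76355e4f49ad@osd.37.service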

-Brent

-----Original Message-----
From: Eugen Block <eblock@xxxxxx> 
Sent: Monday, October 17, 2022 12:52 PM
To: ceph-users@xxxxxxx
Subject:  Re: Cephadm - Adding host to migrated cluster

Does the cephadm.log on that node reveal anything useful? What about the
(active) mgr log?
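
Something along these lines should pull those up (the mgr daemon name is a
placeholder, use whatever "ceph orch ps --daemon-type mgr" shows as the
active one):

less /var/log/ceph/cephadm.log          # on the node itself
ceph -W cephadm                         # or: ceph log last cephadm
cephadm logs --name mgr.<hostname>.<id>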

Quoting Brent Kennedy <bkennedy@xxxxxxxxxx>:

> Greetings everyone,
>
>
>
> We recently moved a ceph-ansible cluster running Pacific on CentOS 8
> to CentOS 8 Stream, converted it to cephadm, and then upgraded to
> Quincy.  Everything with the transition worked, but recently we
> decided to add another node to the cluster with 10 more drives.  We
> were able to go to the web interface and add the host (with the IP
> and name), which spun up the basic management containers on the new
> node.  We then went to the OSD section to add the drives, which were
> showing as available.  They were all recognized, so the drives were
> added via the web console.  Cephadm spun up the OSDs, and that's
> where things are stuck.  The OSDs show up in the cluster but are now
> out.  They came up, were then marked down, and later out.  We purged
> them, then zapped the drives, and after about 10 minutes cephadm had
> added them back automatically.  It then did the same thing: the OSDs
> came up, went down, and were marked out.  When I look at "ceph osd
> tree", it shows the drives, but they don't show up under any host
> (they are on host osdserver6).  I am trying to figure out why they
> are not being put under a host, since the host was added to cephadm
> and the server install checks with cephadm were good.  The
> maintenance containers are running on the host, no issues.  Any
> ideas would be greatly appreciated.
>
>
>
> -16          36.38199      host osdserver5
>
> 20    ssd    3.63820          osd.20              up   1.00000  1.00000
>
> 22    ssd    3.63820          osd.22              up   1.00000  1.00000
>
> 23    ssd    3.63820          osd.23              up   1.00000  1.00000
>
> 24    ssd    3.63820          osd.24              up   1.00000  1.00000
>
> 44    ssd    3.63820          osd.44              up   1.00000  1.00000
>
> 45    ssd    3.63820          osd.45              up   1.00000  1.00000
>
> 46    ssd    3.63820          osd.46              up   1.00000  1.00000
>
> 47    ssd    3.63820          osd.47              up   1.00000  1.00000
>
> 48    ssd    3.63820          osd.48              up   1.00000  1.00000
>
> 49    ssd    3.63820          osd.49              up   1.00000  1.00000
>
> 37                 0  osd.37                    down   1.00000  1.00000
>
> 50                 0  osd.50                    down   1.00000  1.00000
>
> 51                 0  osd.51                    down   1.00000  1.00000
>
> 52                 0  osd.52                    down   1.00000  1.00000
>
> 53                 0  osd.53                    down   1.00000  1.00000
>
> 54                 0  osd.54                    down   1.00000  1.00000
>
> 55                 0  osd.55                    down   1.00000  1.00000
>
> 56                 0  osd.56                    down   1.00000  1.00000
>
> 57                 0  osd.57                    down   1.00000  1.00000
>
> 58                 0  osd.58                    down   1.00000  1.00000
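>
> For reference, this is the sort of thing I plan to check next (the
> weight and the host bucket name below are just guesses based on the
> other SSD hosts in the tree, not something I've run yet):
>
> ceph config get osd osd_crush_update_on_start
> ceph osd find 37
> # if the host bucket is missing or the osds were never placed in it:
> ceph osd crush add-bucket osdserver6 host
> ceph osd crush move osdserver6 root=default
> ceph osd crush set osd.37 3.63820 root=default host=osdserver6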
>
>
>
>
>
> Regards,
>
> -Brent
>
>
>
> Existing Clusters:
>
> Test: Quincy 17.2.3 ( all virtual on nvme )
>
> US Production(HDD): Octopus 15.2.16 with 11 osd servers, 3 mons,
> 4 gateways, 2 iscsi gateways
>
> UK Production(HDD): Nautilus 14.2.22 with 18 osd servers, 3 mons, 4 
> gateways, 2 iscsi gateways
>
> US Production(SSD): Quincy 17.2.3 Cephadm with 6 osd servers, 5 mons, 
> 4 gateways, 2 iscsi gateways
>
> UK Production(SSD): Quincy 17.2.3 with 6 osd servers, 5 mons, 4 
> gateways
>
>
>
>
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


