Hello Carl,

What do you mean by powered off? Is the OS booted up and online? Was the disk work on the OS disks or on the disks the OSDs are deployed to?

If your OSes are online, all of the daemons should come back up automatically. When my OSDs do not come up on their own, and assuming the rest of the daemons such as the monitors are up, I simply run "# systemctl start ceph.target", which starts all of the containerized daemons on that storage node.

Thanks

________________________________
From: Carl J Taylor <cjtaylor@xxxxxxxxx>
Sent: Saturday, January 27, 2024 4:18:58 PM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Quite important: How do I restart a small cluster using cephadm at 18.2.1

Hi,

Due to idiotic behaviour on my part I made a mistake while replacing some disks in our data centre and our cluster ended up all powered off!

I have been using ceph for many years (since firefly) but only recently upgraded to reef and moved to the cephadm / podman setup. I am trying to figure out how to get it all started up again. I am not very familiar with docker at all. I can see the bootstrap option but no "recover" option.

It is a small cluster with 3 nodes and two 3TB/4TB disks in each node.

I have had a look at
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
but wonder if cephadm does this itself automagically?

Help please, I don't want to lose my data!

Many thanks
Carl.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
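
In case it helps, a minimal sketch of the restart and verification sequence described in the reply above, assuming the hosts are booted, the cephadm/podman deployment is intact, and the daemons use the standard cephadm systemd units; <fsid> is a placeholder for your cluster's actual fsid:

    # list the cephadm-managed daemons known to this host
    cephadm ls

    # start every containerized Ceph daemon managed by systemd on this node
    systemctl start ceph.target

    # or limit the start to this cluster's daemons only
    systemctl start ceph-<fsid>.target

    # once the monitors have quorum, check overall cluster health
    cephadm shell -- ceph -s

Repeating this on each of the three nodes and then watching "ceph -s" until the monitors report quorum and the OSDs come back up should confirm whether anything further (such as the mon recovery procedure linked above) is actually needed.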