Hi Tim,

If you can't bring the host back online so that cephadm can remove these
services itself, I guess you'll have to clean up the mess by:

- removing these services from the cluster (for example with 'ceph mon
remove {mon-id}' for the monitor)
- forcing their removal from the orchestrator with the --force option on
the commands 'ceph orch daemon rm <names>' and 'ceph orch host rm
<hostname>'

If the --force option doesn't help, then looking into, editing, or
removing config-keys like 'mgr/cephadm/inventory' and
'mgr/cephadm/host.ceph07.internal.mousetech.com' that show up in the
'ceph config-key dump' output might help.

Regards,
Frédéric

----- On Feb 25, 2025, at 16:42, Tim Holloway timh@xxxxxxxxxxxxx wrote:

> Ack. Another fine mess.
>
> I was trying to clean things up, and the process of tossing around OSDs
> kept getting me reports of slow responses and hanging PG operations.
>
> This is Ceph Pacific, by the way.
>
> I found a deprecated server that claimed to have an OSD even though it
> didn't show up in either "ceph osd tree" or the dashboard OSD list. I
> suspect that a lot of the grief came from it attempting to use
> resources that weren't always seen as resources.
>
> I shut down the server's OSD (removed the daemon using ceph orch), then
> foolishly deleted the server from the inventory without doing a drain
> first.
>
> Now cephadm hates me (key not found), and there are still an MDS and a
> MON listed as daemons by 'ceph orch ls' even after I powered the host
> off.
>
> I can't do a 'ceph orch daemon rm' because there's no longer an IP
> address available for the daemon, and I can't clear the cephadm queue:
>
> [ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed:
> 'ceph07.internal.mousetech.com'
>
> Any suggestions?
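
P.S. To make the first two steps concrete, here is roughly the sequence
I would run. This is only a sketch: I'm assuming the leftover daemons
are named mon.ceph07 and mds.<fs-name>.ceph07, so substitute the real
names from the 'ceph orch ps' output:

    # Remove the dead monitor from the monmap so the cluster stops
    # expecting it (assuming the mon id is 'ceph07'):
    ceph mon remove ceph07

    # Force-remove the leftover daemons from the orchestrator:
    ceph orch daemon rm mon.ceph07 mds.<fs-name>.ceph07 --force

    # Then force-remove the dead host from the inventory:
    ceph orch host rm ceph07.internal.mousetech.com --force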
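
If the forced removals still fail because the cephadm module itself is
broken, inspect what it has stored before touching anything. The host
key name below is the one from your 'ceph config-key dump'; back up any
value before removing it. Failing over the mgr afterwards so cephadm
rebuilds its in-memory state is my assumption here, not something I've
verified on your Pacific cluster:

    # See everything cephadm has stored about the dead host:
    ceph config-key dump | grep ceph07

    # Back up the per-host key, then remove it:
    ceph config-key get mgr/cephadm/host.ceph07.internal.mousetech.com > host-key.backup.json
    ceph config-key rm mgr/cephadm/host.ceph07.internal.mousetech.com

    # Fail over the active mgr so the cephadm module reloads its state:
    ceph mgr fail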