OK. I deleted the questionable stuff with this command:

dnf erase ceph-mgr-modules-core-16.2.15-1.el9s.noarch \
    ceph-mgr-diskprediction-local-16.2.15-1.el9s.noarch \
    ceph-mgr-16.2.15-1.el9s.x86_64 \
    ceph-mds-16.2.15-1.el9s.x86_64 \
    ceph-mon-16.2.15-1.el9s.x86_64

That left these two:

centos-release-ceph-pacific-1.0-2.el9.noarch
cephadm-16.2.14-2.el9s.noarch

At that point, I couldn't run cephadm because dnf had also deleted the /etc/ceph directory. I copied it back in from the master copy, but no containers were running. A reboot brought everything back up, so I'm optimistic that the machine is now clean (and has nearly 500 MB of newly freed space from the deleted packages!).

One of the deleted items was the "ceph" meta-package. I suspect I installed it at some point and that it's what pulled in the others.

The final machine is operational and I'm going to leave it, but it does show one quirk: the dashboard and "ceph osd tree" show its OSD as up/running, but "ceph orch ps" shows it as "stopped". My guess is that ceph orch is looking for a containerized OSD and doesn't notice the legacy OSD.

Thanks again for all the help!

Tim

On Tue, 2024-07-16 at 06:38 +0000, Eugen Block wrote:
> Do you have more ceph packages installed than just cephadm? If you
> have ceph-osd packages (or ceph-mon, ceph-mds, etc.), I would remove
> them and clean up the directories properly. To me it looks like a
> mixup of a "traditional" package-based installation and a cephadm
> deployment. Only you can tell how and why, but it's more important to
> clean that up and keep it consistent. You should keep the cephadm
> package and optionally the ceph-common package, but the rest isn't
> required to run a cephadm cluster.
> (history deleted. See earlier copies)

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx