On 29/11/2023 at 11:44:57 -0500, Adam King wrote:

Hi,

> I think I remember a bug that happened when there was a small mismatch
> between the cephadm version being used for bootstrapping and the container.
> In this case, the cephadm binary used for bootstrap knows about the
> ceph-exporter service and the container image being used does not. The
> ceph-exporter was removed from quincy between 17.2.6 and 17.2.7, so I'd
> guess the cephadm binary here is a bit older and it's pulling the 17.2.7
> image. For now, I'd say just work around this by running bootstrap with the
> `--skip-monitoring-stack` flag. If you want the other services in the
> monitoring stack after bootstrap, you can just run `ceph orch apply
> <service>` for the services alertmanager, prometheus, node-exporter, and
> grafana, and it would get you to the same spot as if you hadn't provided
> the flag and weren't hitting the issue.
>
> For an extra note, this failed bootstrap might be leaving things around
> that could cause subsequent bootstraps to fail. If you run `cephadm ls` and
> see things listed, you can grab the fsid from the output of that command
> and run `cephadm rm-cluster --force --fsid <fsid>` to clean up the env
> before bootstrapping again.

I ran into the same problem a few weeks ago.

My mistake was to use cephadm 17.2.6 while trying to install the 17.2.7
version, because I had forgotten to remove cephadm 17.2.6 from my server
(currently I am trying to fit the install of ceph "into" our puppet config).

As soon as I removed the old version of cephadm and installed the 17.2.7
version, everything worked fine again.

Regards.

--
Albert SHIH 🦫 🐸
France
Heure locale/Local time: Wed. 29 Nov. 2023 22:06:35 CET
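
For reference, the workaround described above boils down to a short command
sequence. This is only a sketch of what Adam outlines; the <fsid> and <mon-ip>
values are placeholders for your own environment, and --mon-ip is simply the
usual required bootstrap argument, not something specific to this issue:

    # If a failed bootstrap left daemons behind, find the fsid and clean up first
    cephadm ls
    cephadm rm-cluster --force --fsid <fsid>

    # Bootstrap without the monitoring stack to avoid the ceph-exporter mismatch
    cephadm bootstrap --mon-ip <mon-ip> --skip-monitoring-stack

    # Afterwards, deploy the monitoring services individually
    ceph orch apply prometheus
    ceph orch apply alertmanager
    ceph orch apply node-exporter
    ceph orch apply grafana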