Good morning Eugen,

I just found this thread and saw that I had a test image for rgw in the config. After removing the global and the rgw config value, everything was instantly fine.

Cheers and a happy week
Boris

On Tue, 16 Jan 2024 at 10:20, Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> there have been a few threads on this topic; one of them is this one
> [1]. The issue there was that different Ceph container images were in
> use. Can you check your container versions? If you don't configure a
> global image for all Ceph daemons, e.g.:
>
> quincy-1:~ # ceph config set global container_image quay.io/ceph/ceph:v17.2.7
>
> you can end up with different images for different daemons, which could
> also prevent the orchestrator from working properly. Check the local
> images with "podman|docker images" and/or your current configuration:
>
> quincy-1:~ # ceph config get mon container_image
> quincy-1:~ # ceph config get osd container_image
> quincy-1:~ # ceph config get mgr container_image
> quincy-1:~ # ceph config get mgr mgr/cephadm/container_image_base
>
> Regards,
> Eugen
>
> [1] https://www.spinics.net/lists/ceph-users/msg77573.html
>
> Quote from Boris <bb@xxxxxxxxx>:
>
> > Happy new year, everybody.
> >
> > I just found out that the orchestrator in one of our clusters is not
> > doing anything.
> >
> > What I tried so far:
> > - disabling / enabling cephadm (no impact)
> > - restarting hosts (no impact)
> > - starting an upgrade to the same version (no impact)
> > - starting a downgrade (no impact)
> > - forcefully removing hosts and adding them again (now I have no daemons
> >   anymore)
> > - applying new configurations (no impact)
> >
> > The orchestrator just does nothing.
> > The cluster itself is fine.
> >
> > I also checked SSH connectivity from all hosts to all hosts (
> > https://docs.ceph.com/en/quincy/cephadm/troubleshooting/#ssh-errors)
> >
> > The logs always show a message like "took the task", but then nothing
> > happens.
> >
> > Cheers
> > Boris
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
The self-help group "UTF-8 problems" will, as an exception, meet in the large hall this time.
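The per-daemon checks Eugen lists can be tedious to eyeball by hand. Here is a small sketch that flags daemon sections whose container_image differs from the global one; the `ceph config get` commands are taken from the thread, but the `check_images` helper, its input format ("section image" pairs on stdin, global first), and the loop in the usage note are my own additions, not part of the original mail.

```shell
# Sketch (assumption: not from the thread): read "section image" pairs,
# with the global entry first, and report any daemon section whose
# container_image differs from the global image.
check_images() {
  global="" 
  while read -r who img; do
    if [ "$who" = "global" ]; then
      global="$img"                     # remember the cluster-wide image
    elif [ -n "$img" ] && [ "$img" != "$global" ]; then
      echo "MISMATCH: $who uses $img (global: $global)"
    fi
  done
}
```

On a live cluster you could feed it with the commands from the thread, e.g.:
`for w in global mon osd mgr; do echo "$w $(ceph config get "$w" container_image)"; done | check_images`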