Hi Malte,

Did you try:

  ceph mgr module disable cephadm
  ceph mgr module enable cephadm --force

Can you see any errors in the mgr logs? To check, find the mgr systemd service on the node where your active mgr is running:

  systemctl | grep mgr

then:

  journalctl -f -u <long_name_of_your_mgr_systemd_service>

On Thu, Oct 17, 2024 at 10:37 AM Malte Stroem <malte.stroem@xxxxxxxxx> wrote:
> Hello,
>
> I am still struggling here and do not know the root cause of this issue.
>
> Searching the list, I found many people who have had the same or a similar
> problem over the last few years.
>
> However, there is no solution for our cluster.
>
> Disabling and enabling the cephadm module does not work. There are no
> error messages. When we run "ceph orch ..." we get the error message:
>
> Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
>
> But every single cephadm command works!
>
> cephadm ls, for example.
>
> Stopping and restarting the MGRs did not help. Removing the .asok files
> did not help either.
>
> I am thinking of stopping both MGRs and trying to deploy a new MGR like this:
>
> https://docs.ceph.com/en/latest/cephadm/troubleshooting/#manually-deploying-a-manager-daemon
>
> How could I find the root cause? Is cephadm somehow broken?
>
> What about the cephadm files under /var/lib/ceph/fsid? Can I replace the
> latest?
>
> Best,
> Malte
>
> On 16.10.24 14:54, Malte Stroem wrote:
> > Hi Laimis,
> >
> > that did not work. ceph orch still does not work.
> >
> > Best,
> > Malte
> >
> > On 16.10.24 14:12, Malte Stroem wrote:
> >> Thank you, Laimis.
> >>
> >> And you got the same error message? That's strange.
> >>
> >> In the meantime I will check for connected clients. No Kubernetes
> >> and no CephFS, but RGWs.
> >>
> >> Best,
> >> Malte
> >>
> >> On 16.10.24 14:01, Laimis Juzeliūnas wrote:
> >>> Hi Malte,
> >>>
> >>> We faced this recently when upgrading to Squid from the latest Reef.
> >>> As a temporary workaround we disabled the balancer with ‘ceph
> >>> balancer off’ and restarted the mgr daemons.
> >>> We suspect older clients (from Kubernetes RBD mounts as well
> >>> as CephFS mounts) on servers with incompatible client versions, but
> >>> we have yet to dig through it.
> >>>
> >>> Best,
> >>> Laimis J.
> >>>
> >>>> On 16 Oct 2024, at 14:57, Malte Stroem <malte.stroem@xxxxxxxxx> wrote:
> >>>>
> >>>> Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
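
A minimal sketch of the full diagnostic sequence discussed above, assuming cephadm is the intended orchestrator backend; the `ceph orch set backend cephadm` and `ceph orch status` steps are taken from the hint in the ENOENT message itself rather than from the reply, and the mgr unit name is a placeholder to be copied from the systemctl output:

  # Re-enable the orchestrator module on the active mgr
  ceph mgr module disable cephadm
  ceph mgr module enable cephadm --force

  # Point the orchestrator at cephadm and confirm a backend is configured
  ceph orch set backend cephadm
  ceph orch status

  # On the node running the active mgr, locate its systemd unit and follow the log
  systemctl | grep mgr
  journalctl -f -u <long_name_of_your_mgr_systemd_service>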