Thanks for the info, Robert. Glad to hear it's working now.

Regarding the Ceph Tracker website, I just checked and the sign-up page
seems to be working fine (https://tracker.ceph.com/account/register) in
case you still want to get an account there.

Kind Regards,
Ernesto

On Fri, Jun 11, 2021 at 5:15 AM Robert W. Eckert <rob@xxxxxxxxxxxxxxx> wrote:

> Hi Ernesto, I couldn't register for an account there, it was giving me a
> 503, but I think the issue is the deployed container. I managed to clean
> it up, though I am not 100% sure of the cause. I think it is the
> referenced container: all of the unit.run files reference
> docker.io/ceph/ceph@sha256:54e95ae1e11404157d7b329d0bef866ebbb214b195a009e87aae4eba9d282949
> but I don't see that sha digest against the ceph/ceph:v16.2.4 tag on
> Docker Hub.
>
> To clean it up I did the following (assume the servers are named a, b, c).
>
> On each server, I ran:
>
> podman pull docker.io/ceph/ceph:v16.2.4
>
> On server a, which was running the active manager, I did:
>
> ceph orch apply mon --placement='a.domain'
> ceph orch apply mgr --placement='a.domain'
>
> I didn't expect any immediate miracles, I just wanted to isolate the
> issue. However, when I did this the dashboard started working again, so
> I then applied the mon and mgr to all three servers:
>
> ceph orch apply mon --placement='a.domain,b.domain,c.domain'
> ceph orch apply mgr --placement='a.domain,b.domain,c.domain'
>
> Things still worked, so I removed server a from the placement (to reset
> it):
>
> ceph orch apply mon --placement='b.domain,c.domain'
> ceph orch apply mgr --placement='b.domain,c.domain'
>
> Finally, to get all three back up:
>
> ceph orch apply mon --placement='a.domain,b.domain,c.domain'
> ceph orch apply mgr --placement='a.domain,b.domain,c.domain'
>
> And I am up and running.
>
> I am thinking the pull of docker.io/ceph/ceph:v16.2.4 is what did it,
> because the pull actually downloaded something on each server. So I am
> not 100% sure the sha digest matches the 16.2.4 tag, but it is working
> again.
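>
> For what it's worth, comparing the two digests should settle that one way
> or the other; something like this ought to do it (assuming skopeo is
> installed, otherwise podman alone covers the locally pulled copy):
>
> # digest that Docker Hub currently serves for the tag
> skopeo inspect docker://docker.io/ceph/ceph:v16.2.4 | grep -i digest
> # digest of the copy pulled locally
> podman image inspect --format '{{.Digest}}' docker.io/ceph/ceph:v16.2.4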
>
> Thanks,
> Rob
>
> p.s. I do have an extracted image of the container from before I did all
> of this if that would help.
>
> From: Ernesto Puerta <epuertat@xxxxxxxxxx>
> Sent: Thursday, June 10, 2021 2:44 PM
> To: Robert W. Eckert <rob@xxxxxxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxx>
> Subject: Re: Error on Ceph Dashboard
>
> Hi Robert,
>
> I just launched a 16.2.4 cluster and I can't reproduce that error. Could
> you please file a tracker issue at
> https://tracker.ceph.com/projects/dashboard/issues/new and attach the mgr
> logs and cluster details (e.g. the number of mgrs)?
>
> Thanks!
>
> Kind Regards,
> Ernesto
>
> On Thu, Jun 10, 2021 at 4:05 AM Robert W. Eckert <rob@xxxxxxxxxxxxxxx>
> wrote:
>
> Hi - this just started happening in the past few days on Ceph Pacific
> 16.2.4 deployed via cephadm (Podman containers).
>
> The dashboard is returning:
>
> No active ceph-mgr instance is currently running the dashboard. A
> failover may be in progress. Retrying in 5 seconds...
>
> And ceph status returns:
>
>   cluster:
>     id:     fe3a7cb0-69ca-11eb-8d45-c86000d08867
>     health: HEALTH_WARN
>             Module 'dashboard' has failed dependency: cannot import name 'AuthManager'
>             clock skew detected on mon.cube
>
>   services:
>     mon: 3 daemons, quorum story,cube,rhel1 (age 46h)
>     mgr: cube.tvlgnp(active, since 47h), standbys: rhel1.zpzsjc, story.gffann
>     mds: 2/2 daemons up, 1 standby
>     osd: 13 osds: 13 up (since 46h), 13 in (since 46h)
>     rgw: 3 daemons active (3 hosts, 1 zones)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   11 pools, 497 pgs
>     objects: 1.50M objects, 2.1 TiB
>     usage:   6.2 TiB used, 32 TiB / 38 TiB avail
>     pgs:     497 active+clean
>
>   io:
>     client: 255 B/s rd, 2.7 KiB/s wr, 0 op/s rd, 0 op/s wr
>
> The only thing that happened on the cluster was that one of the servers
> was rebooted. No configuration changes were performed.
>
> Any suggestions?
>
> Thanks,
> rob
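>
> p.s. If the mgr logs would help, I think I can grab them from the active
> mgr with something along these lines (the daemon name and fsid are the
> ones from the ceph status output above; cephadm should just be wrapping
> journalctl here):
>
> cephadm logs --name mgr.cube.tvlgnp
>
> or directly via systemd:
>
> journalctl -u ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867@mgr.cube.tvlgnp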