On 4/1/21 8:19 PM, Anthony D'Atri wrote:
> I think what it’s saying is that it wants for more than one mgr daemon
> to be provisioned, so that it can failover when the primary is
> restarted. I suspect you would then run into the same thing with the
> mon. All sorts of things tend to crop up on a cluster this minimal.
Unfortunately it is not allowed, as the port usage is clashing ... I
found out the name of the daemon by grepping the ps output (it would be
nice to have a "ceph orch daemon ls") and I stopped it .. but then the
message was:

   cluster:
     id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
     health: HEALTH_WARN
             no active mgr
             Upgrade: Need standby mgr daemon

So it seems that there is a specific requirement of a state named
"standby" for the mgr daemon. Then I tried to start it again with:

   ceph orch daemon start <the same name used for stop>

but the command is stuck ... I tried to get the ceph:v16.2 image and ran:

   ceph orch daemon redeploy mgr ceph:v16.2.0

but it is also stuck. So, what can I do? Is there anything besides
deleting everything and starting from scratch?

Thank you!
Adrian
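For reference, cephadm does already provide a listing close to the
wished-for "ceph orch daemon ls". A minimal sketch of the inspection and
start/stop commands discussed above; the mgr daemon name here is derived
from the ceph -s output quoted below and is only illustrative:

   # list every daemon the orchestrator manages (name, host, status, image)
   ceph orch ps

   # daemon names are "<type>.<id>", e.g. for the mgr shown in ceph -s:
   ceph orch daemon stop mgr.sev.spacescience.ro.wpozds
   ceph orch daemon start mgr.sev.spacescience.ro.wpozds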
> On Apr 1, 2021, at 10:15 AM, Adrian Sevcenco <Adrian.Sevcenco@xxxxxxx> wrote:
>
> Hi! I have a single machine ceph installation and after trying to
> update to pacific the upgrade is stuck with:
>
>   cluster:
>     id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
>     health: HEALTH_WARN
>             Upgrade: Need standby mgr daemon
>
>   services:
>     mon: 1 daemons, quorum sev.spacescience.ro (age 3w)
>     mgr: sev.spacescience.ro.wpozds(active, since 2w)
>     mds: sev-ceph:1 {0=sev-ceph.sev.vmvwrm=up:active}
>     osd: 2 osds: 2 up (since 2w), 2 in (since 2w)
>
>   data:
>     pools:   4 pools, 194 pgs
>     objects: 32 objects, 8.4 KiB
>     usage:   2.0 GiB used, 930 GiB / 932 GiB avail
>     pgs:     194 active+clean
>
>   progress:
>     Upgrade to docker.io/ceph/ceph:v16.2.0 (0s)
>       [............................]
>
> How can I put the mgr on standby? So far I did not find anything
> relevant. Thanks a lot!
> Adrian
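Anthony's suggestion of provisioning a second mgr would, under cephadm,
be expressed roughly as below; a sketch assuming a host (or free ports)
is available for a second mgr, which on a single-node cluster may run
into exactly the port clash described above:

   # ask the orchestrator to keep two mgr daemons running; whichever one
   # is not active comes up in the "standby" state the upgrade waits for
   ceph orch apply mgr 2

   # inspect the stuck upgrade, and stop it so it can be re-run later
   ceph orch upgrade status
   ceph orch upgrade stop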