Re: cephadm/podman :: upgrade to pacific stuck

You would normally tell cephadm to deploy another mgr with 'ceph orch
apply mgr 2'.  In this case, the default placement policy for mgrs is
already either 2 or 3, though--the problem is that you only have 1
host in your cluster, and cephadm currently doesn't handle placing
multiple mgrs on a single host (the ports will conflict).  And upgrade
needs a standby.  So.. a single-host cephadm cluster won't upgrade
itself.
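For context, the normal multi-mgr deployment looks like this (a CLI fragment; it needs a live cephadm cluster with more than one host, and the host names below are placeholders):

```shell
# Ask cephadm to run two mgr daemons; the default placement already
# requests 2-3, but this makes the count explicit:
ceph orch apply mgr 2

# Or pin the daemons to specific hosts -- not possible here with only
# one host, since two mgrs on one host would clash on the same ports:
ceph orch apply mgr --placement="host1 host2"

# Verify what is actually running:
ceph orch ps --daemon-type mgr
```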

You can get around this by manually tweaking the mgr container.. vi
/var/lib/ceph/$fsid/mgr.$whatever/unit.run and change the container
image path on the docker or podman run line to be ceph/ceph:v16.2.0,
and then systemctl restart ceph-$fsid@mgr.$whatever
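The edit amounts to a one-line substitution in unit.run. Here is a sketch of it applied to a sample file, so it can be tried without touching a real cluster; the fsid and mgr name are the ones from this thread, and the old image tag (v15.2.10) and the simplified run line are assumptions:

```shell
# Sketch of the manual image swap described above, run against a sample
# unit.run. The fsid and daemon name come from this thread; the old
# image tag (v15.2.10) is an assumption.
fsid="d9f4c810-8270-11eb-97a7-faa3b09dcf67"
mgr_name="sev.spacescience.ro.wpozds"

workdir=$(mktemp -d)
unit="$workdir/unit.run"

# Simplified stand-in for the podman run line found in
# /var/lib/ceph/$fsid/mgr.$mgr_name/unit.run
cat > "$unit" <<EOF
/usr/bin/podman run --rm --net=host --name ceph-$fsid-mgr.$mgr_name docker.io/ceph/ceph:v15.2.10
EOF

# Point the run line at the target release.
sed -i 's#docker.io/ceph/ceph:v[0-9.]*#docker.io/ceph/ceph:v16.2.0#' "$unit"

result=$(cat "$unit")
echo "$result"
# On the real host, follow this with:
#   systemctl restart ceph-$fsid@mgr.$mgr_name

rm -rf "$workdir"
```

On a real system the run line is much longer (bind mounts, entrypoint, and so on); only the image reference needs to change.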

Supporting automated single-node upgrades is high on the list.. we
hope to have it fixed soon.

s

On Thu, Apr 1, 2021 at 1:24 PM Adrian Sevcenco <Adrian.Sevcenco@xxxxxxx> wrote:
>
> On 4/1/21 8:19 PM, Anthony D'Atri wrote:
> > I think what it’s saying is that it wants more than one mgr daemon to be provisioned, so that it can failover
> unfortunately that is not allowed, as the port usage is clashing ...
> i found out the name of the daemon by grepping the ps output (it would be nice to have a 'ceph orch daemon ls')
> and i stopped it .. but then the message was:
>   cluster:
>     id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
>     health: HEALTH_WARN
>             no active mgr
>             Upgrade: Need standby mgr daemon
>
> so, it seems that there is a specific requirement of a state named "standby" for the mgr daemon
>
> then i tried to start it again with:
> ceph orch daemon start <the same name used for stop>
>
> but the command is stuck ...
>
> i tried to get the ceph:v16.2 image and
> ceph orch daemon redeploy mgr ceph:v16.2.0
>
> but it is also stuck ...
>
> so, what can i do? is there anything beside delete everything and start from scratch?
>
> Thank you!
> Adrian
>
>
> > when the primary is restarted.  I suspect you would then run into the same thing with the mon.  All sorts of things
> > tend to crop up on a cluster this minimal.
> >
> >
> >> On Apr 1, 2021, at 10:15 AM, Adrian Sevcenco <Adrian.Sevcenco@xxxxxxx> wrote:
> >>
> >> Hi! I have a single-machine ceph installation and after trying to update to pacific the upgrade is stuck with:
> >>
> >> ceph -s
> >>   cluster:
> >>     id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
> >>     health: HEALTH_WARN
> >>             Upgrade: Need standby mgr daemon
> >>
> >>   services:
> >>     mon: 1 daemons, quorum sev.spacescience.ro (age 3w)
> >>     mgr: sev.spacescience.ro.wpozds(active, since 2w)
> >>     mds: sev-ceph:1 {0=sev-ceph.sev.vmvwrm=up:active}
> >>     osd: 2 osds: 2 up (since 2w), 2 in (since 2w)
> >>
> >>   data:
> >>     pools:   4 pools, 194 pgs
> >>     objects: 32 objects, 8.4 KiB
> >>     usage:   2.0 GiB used, 930 GiB / 932 GiB avail
> >>     pgs:     194 active+clean
> >>
> >>   progress:
> >>     Upgrade to docker.io/ceph/ceph:v16.2.0 (0s)
> >>       [............................]
> >>
> >> How can i put the mgr on standby? so far i did not find anything relevant..
> >>
> >> Thanks a lot! Adrian
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx