Re: cephadm/podman :: upgrade to pacific stuck


Hi! (and thanks for taking the time to answer my email :) )

On 4/8/21 1:18 AM, Sage Weil wrote:
> You would normally tell cephadm to deploy another mgr with 'ceph orch
> apply mgr 2'.  In this case, the default placement policy for mgrs is
> already either 2 or 3, though--the problem is that you only have 1
> host in your cluster, and cephadm currently doesn't handle placing
> multiple mgrs on a single host (the ports will conflict).  And upgrade
> needs a standby.  So.. a single-host cephadm cluster won't upgrade
> itself.

I had the idea of temporarily starting a VM and deploying a temporary mgr
there, but unfortunately it seems that after the latest BIOS update the BIOS
settings were reset, with the default for virtualization set to off :(
and I do not know when I can get to my office again.

I also searched for a way to define custom ports for the dashboard, so I
could start a second mgr with non-clashing ports, but I did not find a way
to do it.
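For reference, the placement count Sage mentions can also be expressed as a
cephadm service spec file instead of the inline 'ceph orch apply mgr 2' form; a
sketch (the YAML keys below follow my understanding of the cephadm service
spec format, so verify them against the docs for your release):

```shell
#!/bin/sh
# Write a service spec equivalent to 'ceph orch apply mgr 2'.
cat > mgr.yaml <<'EOF'
service_type: mgr
placement:
  count: 2
EOF
# On a real cluster you would then apply it with:
#   ceph orch apply -i mgr.yaml
```

On a single-host cluster this still hits the port-clash limitation described
above, of course; the spec only states the desired count.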

> You can get around this by manually tweaking the mgr container.. vi
> /var/lib/ceph/$fsid/mgr.$whatever/unit.run and change the container
> image path on the docker or podman run line to be ceph/ceph:v16.2.0,
> and then systemctl restart ceph-$fsid@mgr.$whatever

I wanted to say that I had already tried this, but while grepping the file to
report back I noticed that unit.run contains _two_ specifications of the image:
there is an
-e CONTAINER_IMAGE=docker.io/ceph/ceph:v16.2.0  (this is what I tried to change)
and then, towards the end of the line, after the -v specification for
/etc/ceph/ceph.conf, a standalone docker.io/ceph/ceph:v15.

After changing this second part as well, podman ps shows the mgr container
being started with docker.io/ceph/ceph:v16.2.0.
So, SUCCESS :)

But why is there a need for two specifications of the same thing?
(-e CONTAINER_IMAGE and then the bare image again)
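The manual tweak above can be scripted so that both image references are
rewritten in one pass; a hedged sketch (the unit.run contents below are a mock
reconstructed from this thread, not a real cephadm-generated file, and the
daemon name is the one from this cluster):

```shell
#!/bin/sh
# Sketch: rewrite BOTH image references in unit.run, then restart the unit.
# A mock copy of unit.run is created here; on a real host the file lives at
# /var/lib/ceph/$fsid/mgr.$name/unit.run.
unit=./unit.run
cat > "$unit" <<'EOF'
/usr/bin/podman run --rm --net=host \
  -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 \
  -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:z \
  docker.io/ceph/ceph:v15 -n mgr.sev.spacescience.ro.wpozds
EOF

# Replace every occurrence of the old image, catching both the
# -e CONTAINER_IMAGE env var and the bare positional image at the end.
sed -i 's|docker.io/ceph/ceph:v15|docker.io/ceph/ceph:v16.2.0|g' "$unit"

grep -c 'ceph:v16.2.0' "$unit"   # prints 2: both references updated

# On the real host, follow up with:
#   systemctl restart ceph-$fsid@mgr.$name
```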

Thanks a lot!!
Adrian


> Supporting automated single-node upgrades is high on the list.. we
> hope to have it fixed soon.
>
> s

On Thu, Apr 1, 2021 at 1:24 PM Adrian Sevcenco <Adrian.Sevcenco@xxxxxxx> wrote:

On 4/1/21 8:19 PM, Anthony D'Atri wrote:
> I think what it's saying is that it wants more than one mgr daemon to be
> provisioned, so that it can fail over

Unfortunately that is not allowed, as the port usage clashes...
I found out the name of the daemon by grepping the ps output (a
"ceph orch daemon ls" command would be nice) and stopped it, but then the
message was:
  cluster:
    id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
    health: HEALTH_WARN
            no active mgr
            Upgrade: Need standby mgr daemon

So it seems there is a specific requirement for an mgr daemon in a state named "standby".
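That precondition can be checked by hand before starting an upgrade; a minimal
sketch using mock data (on a real cluster you would capture the output of
"ceph mgr dump -f json" instead, and the exact field names are assumptions to
verify against your version):

```shell
#!/bin/sh
# Mock of `ceph mgr dump -f json` output for this single-host cluster;
# field names ("available", "standbys") are assumptions.
dump='{"active_name": "sev.spacescience.ro.wpozds", "available": true, "standbys": []}'

# Crude check for an empty standbys list; good enough for a sanity check
# without pulling in a JSON parser.
case "$dump" in
  *'"standbys": []'*) echo "no standby mgr: upgrade will stall" ;;
  *)                  echo "standby present" ;;
esac
```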

Then I tried to start it again with:

ceph orch daemon start <the same name used for stop>

but the command is stuck...

I then pulled the ceph:v16.2 image and tried:

ceph orch daemon redeploy mgr ceph:v16.2.0

but it is also stuck.

So, what can I do? Is there anything besides deleting everything and starting
from scratch?

Thank you!
Adrian


> when the primary is restarted.  I suspect you would then run into the same
> thing with the mon.  All sorts of things tend to crop up on a cluster this
> minimal.


On Apr 1, 2021, at 10:15 AM, Adrian Sevcenco <Adrian.Sevcenco@xxxxxxx> wrote:

Hi! I have a single-machine Ceph installation, and after trying to upgrade to Pacific the upgrade is stuck with:

ceph -s

  cluster:
    id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
    health: HEALTH_WARN
            Upgrade: Need standby mgr daemon

  services:
    mon: 1 daemons, quorum sev.spacescience.ro (age 3w)
    mgr: sev.spacescience.ro.wpozds(active, since 2w)
    mds: sev-ceph:1 {0=sev-ceph.sev.vmvwrm=up:active}
    osd: 2 osds: 2 up (since 2w), 2 in (since 2w)

  data:
    pools:   4 pools, 194 pgs
    objects: 32 objects, 8.4 KiB
    usage:   2.0 GiB used, 930 GiB / 932 GiB avail
    pgs:     194 active+clean

  progress:
    Upgrade to docker.io/ceph/ceph:v16.2.0 (0s)
      [............................]

How can I put the mgr on standby? So far I have not found anything relevant...

Thanks a lot! Adrian

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


--
----------------------------------------------
Adrian Sevcenco, Ph.D.                       |
Institute of Space Science - ISS, Romania    |
adrian.sevcenco at {cern.ch,spacescience.ro} |
----------------------------------------------


