Re: Failed adding back a node

No, you can't use the image ID for the upgrade command; it has to be the
image name. So, based on what you have, it should start with
registry.redhat.io/rhceph/. As for the full name, it depends which image
you want to go with. As for trying this on an OSD first, there is `ceph
orch daemon redeploy <daemon-name> --image <image-name>`, which you could
run on an OSD with a given image and see if it comes up. I would try the
upgrade before trying to remove the OSD. If it's really only failing
because it can't pull the image, the upgrade should try to redeploy it
with the image passed to the upgrade command, which could fix it as long
as it can pull that image on the host.
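
For example, something like this (a rough sketch only; osd.0 is a
placeholder for whichever OSD you pick, and the 16.2.10-160 tag is just
one of the images from your config dump, so substitute whichever image
you actually want to end up on):

  # try a single daemon first with the full image name
  ceph orch daemon redeploy osd.0 --image registry.redhat.io/rhceph/rhceph-5-rhel8:16.2.10-160

  # if that daemon comes back up, run the upgrade with the same image
  ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8:16.2.10-160

  # then watch progress
  ceph orch upgrade status
  ceph orch ps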

On Wed, Mar 27, 2024 at 10:42 PM Alex <mr.alexey@xxxxxxxxx> wrote:

> Hi Adam!
>
> In addition to my earlier question about whether there is a way of
> trying a more targeted upgrade first, so we don't risk accidentally
> breaking the entire production cluster:
>
> `ceph config dump | grep container_image` shows:
>
> global     basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:a193b0de114d19d2efd8750046b5d25da07e2c570e3c4eb4bd93e6de4b90a25a  *
> mon.mon01  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8:latest                   *
> mon.mon03  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8:16.2.10-160              *
> mgr        advanced  mgr/cephadm/container_image_alertmanager   registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6    *
> mgr        advanced  mgr/cephadm/container_image_base           registry.redhat.io/rhceph/rhceph-5-rhel8
> mgr        advanced  mgr/cephadm/container_image_grafana        registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:5              *
> mgr        advanced  mgr/cephadm/container_image_node_exporter  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6   *
> mgr        advanced  mgr/cephadm/container_image_prometheus     registry.redhat.io/openshift4/ose-prometheus:v4.6                 *
> mgr.mon01  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8:latest                   *
> mgr.mon03  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:a193b0de114d19d2efd8750046b5d25da07e2c570e3c4eb4bd93e6de4b90a25a  *
>
> and do you think I'd still need to rm that one OSD that I successfully
> created but not added, or would that get "pulled in" when I add the
> other 19 OSDs?
>
> `podman image list` shows:
>
> REPOSITORY                                TAG     IMAGE ID      CREATED      SIZE
> registry.redhat.io/rhceph/rhceph-5-rhel8  latest  1d636b23ab3e  8 weeks ago  1.02 GB
>
> so would I be running `ceph orch upgrade start --image  1d636b23ab3e` ?
>
> Thanks again.
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



