Re: Failed adding back a node

Hi Adam!

In addition to my earlier question (is there a way of trying a more targeted
upgrade first, so we don't risk accidentally breaking the entire production
cluster?), here is some more information.
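
For context, what I had in mind with the targeted upgrade was something along
these lines. This is just a sketch on my part; I'm not even sure the
staggered-upgrade flags such as --daemon-types exist in our 16.2.10 cephadm,
and <target-image> is only a placeholder, so please correct me if this is the
wrong approach:

# try the upgrade on the mgr daemons only first, as a smoke test
# (<target-image> is a placeholder; --daemon-types may need a newer cephadm)
ceph orch upgrade start --image <target-image> --daemon-types mgr

# watch progress before touching anything else
ceph orch upgrade status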

`ceph config dump | grep container_image` shows:

global     basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:a193b0de114d19d2efd8750046b5d25da07e2c570e3c4eb4bd93e6de4b90a25a  *
mon.mon01  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8:latest  *
mon.mon03  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8:16.2.10-160  *
mgr        advanced  mgr/cephadm/container_image_alertmanager   registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6  *
mgr        advanced  mgr/cephadm/container_image_base           registry.redhat.io/rhceph/rhceph-5-rhel8
mgr        advanced  mgr/cephadm/container_image_grafana        registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:5  *
mgr        advanced  mgr/cephadm/container_image_node_exporter  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6  *
mgr        advanced  mgr/cephadm/container_image_prometheus     registry.redhat.io/openshift4/ose-prometheus:v4.6  *
mgr.mon01  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8:latest  *
mgr.mon03  basic     container_image                            registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:a193b0de114d19d2efd8750046b5d25da07e2c570e3c4eb4bd93e6de4b90a25a  *
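
(Side note: I can see the daemons reference a mix of :latest, :16.2.10-160 and
the sha256 digest. If it would help to make those consistent before upgrading,
I'm guessing that would be something like the line below, with <desired-image>
as a placeholder, but I haven't changed anything yet.)

ceph config set global container_image <desired-image>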

Also, do you think I'd still need to rm the one OSD that I successfully
created but never added, or would it get "pulled in" when I add the other
19 OSDs?
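
If that OSD does need to come out first, I assume it would be along these
lines (again just my guess at the commands; <osd_id> is a placeholder for the
one I created):

# ask the orchestrator to remove the stray OSD
ceph orch osd rm <osd_id>

# check how the removal is going
ceph orch osd rm status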

`podman image list` shows:

REPOSITORY                                TAG     IMAGE ID      CREATED      SIZE
registry.redhat.io/rhceph/rhceph-5-rhel8  latest  1d636b23ab3e  8 weeks ago  1.02 GB

So would I be running `ceph orch upgrade start --image 1d636b23ab3e`?
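
Or does the upgrade want the full registry reference instead of the local
image ID? I.e. something more like this (just guessing, reusing the digest
from the config dump above):

ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:a193b0de114d19d2efd8750046b5d25da07e2c570e3c4eb4bd93e6de4b90a25a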

Thanks again.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


