======================================

Once we upgrade all daemons to the same image, the mon and mgr on the bootstrap node are redeployed, and the auth_allow_insecure_global_id_reclaim warning is then observed uniformly across all mons.

ceph orch upgrade start quay.io/ceph/ceph:v16.2.7

[root@hcictrl01 stack_orchestrator]# ceph orch upgrade status
{
    "target_image": null,
    "in_progress": false,
    "services_complete": [],
    "progress": null,
    "message": ""
}

[root@hcictrl01 stack_orchestrator]# ceph orch ps
NAME                         HOST       PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
crash.hcictrl01              hcictrl01          running (40m)  4m ago     40m  6975k    -        16.2.7   231fd40524c4  f6f866f4be92
crash.hcictrl02              hcictrl02          running (40m)  4m ago     40m  6975k    -        16.2.7   231fd40524c4  1cb62e191c07
crash.hcictrl03              hcictrl03          running (39m)  4m ago     39m  6979k    -        16.2.7   231fd40524c4  3e03f99065c0
mds.cephfs.hcictrl01.vuamjy  hcictrl01          running (35m)  4m ago     35m  15.5M    -        16.2.7   231fd40524c4  9b3aeab68115
mds.cephfs.hcictrl02.myohpi  hcictrl02          running (35m)  4m ago     35m  16.7M    -        16.2.7   231fd40524c4  5cded1208028
mds.cephfs.hcictrl03.jziler  hcictrl03          running (35m)  4m ago     35m  15.4M    -        16.2.7   231fd40524c4  94dccd01a123
mgr.hcictrl01.ljtznv         hcictrl01  *:8443  running (15m)  4m ago     42m  387M     -        16.2.7   231fd40524c4  448c6fab2b98
mgr.hcictrl02.izfnvh         hcictrl02  *:8443  running (40m)  4m ago     40m  448M     -        16.2.7   231fd40524c4  acd435a8b6b1
mgr.hcictrl03.eekrgo         hcictrl03  *:8443  running (39m)  4m ago     39m  384M     -        16.2.7   231fd40524c4  3c241b35a3fe
mon.hcictrl01                hcictrl01          running (15m)  4m ago     42m  123M     2048M    16.2.7   231fd40524c4  b2a8bdbd6983
mon.hcictrl02                hcictrl02          running (40m)  4m ago     40m  135M     2048M    16.2.7   231fd40524c4  4941ee015e75
mon.hcictrl03                hcictrl03          running (39m)  4m ago     39m  128M     2048M    16.2.7   231fd40524c4  150cb8e9d25d
osd.0                        hcictrl02          running (39m)  4m ago     39m  66.4M    1536M    16.2.7   231fd40524c4  9dc98cc6ba3d
osd.1                        hcictrl02          running (39m)  4m ago     39m  69.2M    1536M    16.2.7   231fd40524c4  62caab356b02
osd.2                        hcictrl02          running (38m)  4m ago     38m  88.7M    1536M    16.2.7   231fd40524c4  38ea17598fa5
osd.3                        hcictrl01          running (38m)  4m ago     38m  72.1M    1536M    16.2.7   231fd40524c4  03284b111258
osd.4                        hcictrl01          running (37m)  4m ago     37m  67.3M    1536M    16.2.7   231fd40524c4  3e3315fbb46a
osd.5                        hcictrl01          running (37m)  4m ago     37m  87.5M    1536M    16.2.7   231fd40524c4  1f90169412a5
osd.6                        hcictrl03          running (36m)  4m ago     36m  74.4M    1536M    16.2.7   231fd40524c4  d26811c0c9fc
osd.7                        hcictrl03          running (36m)  4m ago     36m  98.8M    1536M    16.2.7   231fd40524c4  1ce10dcfa6ca
rgw.rgw.hcictrl01.ceruvj     hcictrl01  *:7480  running (35m)  4m ago     35m  52.1M    -        16.2.7   231fd40524c4  c2c45767cda5
rgw.rgw.hcictrl02.pzgwht     hcictrl02  *:7480  running (35m)  4m ago     35m  52.1M    -        16.2.7   231fd40524c4  461c0869e559
rgw.rgw.hcictrl03.ryrptr     hcictrl03  *:7480  running (35m)  4m ago     35m  54.4M    -        16.2.7   231fd40524c4  8fac107de350

[root@hcictrl01 stack_orchestrator]# ceph config dump | grep -i image
global  basic  container_image  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  *

[root@hcictrl01 stack_orchestrator]# ceph health detail
HEALTH_WARN mons are allowing insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim
    mon.hcictrl01 has auth_allow_insecure_global_id_reclaim set to true
    mon.hcictrl02 has auth_allow_insecure_global_id_reclaim set to true
    mon.hcictrl03 has auth_allow_insecure_global_id_reclaim set to true
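(A side note on the health warning above, just so it does not distract from the image question: our understanding, which may well be wrong, is that AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED is expected on a fresh deployment and can be muted once all clients run patched versions, with

    ceph config set mon auth_allow_insecure_global_id_reclaim false

Please correct us if that is not the right way to handle it.)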
Containers on the bootstrap node (the image name is now uniform across all daemons):

[root@hcictrl01 stack_orchestrator]# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
f6f866f4be92  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n client.crash.h...  43 minutes ago  Up 43 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-crash-hcictrl01
03284b111258  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n osd.3 -f --set...  41 minutes ago  Up 41 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-3
3e3315fbb46a  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n osd.4 -f --set...  40 minutes ago  Up 40 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-4
1f90169412a5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n osd.5 -f --set...  40 minutes ago  Up 40 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-5
9b3aeab68115  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n mds.cephfs.hci...  38 minutes ago  Up 38 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-mds-cephfs-hcictrl01-vuamjy
c2c45767cda5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n client.rgw.rgw...  38 minutes ago  Up 38 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-rgw-rgw-hcictrl01-ceruvj
448c6fab2b98  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n mgr.hcictrl01....  18 minutes ago  Up 18 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-mgr-hcictrl01-ljtznv
b2a8bdbd6983  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n mon.hcictrl01 ...  18 minutes ago  Up 18 minutes ago  ceph-c5aa753a-8422-11ec-b231-0015171590ba-mon-hcictrl01

So, we are confused about how container images are supposed to be used in an ideal Ceph cluster: should there be two images or should all daemons run the same image, should we ideally run 'ceph orch upgrade' right after creating the cluster, and so on.
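To make it clearer what we are aiming for: we simply want every daemon on every host to come up from the same stable image, without latest-pacific-devel ever being pulled. The commands below are only our guess at how this is supposed to be done (in particular, we have not verified whether --image must be given as a global cephadm option before the sub-command), so please correct us if this is wrong:

    # pass the image as a global cephadm option at bootstrap time
    cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap --mon-ip *.*.*.* ...

    # and/or pin the image the orchestrator uses for any new daemons
    ceph config set global container_image quay.io/ceph/ceph:v16.2.7
    ceph config get global container_image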
Apologies for the length of this mail. Thanks in advance.

Thanks and Regards,
Arun Vinod

On Tue, 1 Feb 2022 at 14:48, Arun Vinod <arunvinod.tech@xxxxxxxxx> wrote:

> Hi Adam,
>
> Thanks for replying.
>
> I have tried the "ceph orch upgrade start <image-name>" as a workaround
> and it works as expected. All the daemons are recreated with the stable
> version of the image. However, it still requires an initial fetch of the
> latest-pacific-devel image and creation of daemons with that.
>
> Also, cephadm bootstrap was not allowing the --image option, so in order to
> specify the image name I updated the value of DEFAULT_IMAGE in the
> cephadm script (/usr/bin/cephadm) as follows:
>
> DEFAULT_IMAGE = 'quay.io/ceph/ceph:v16.2.7'
>
> I have attached two logs along with this email; the first one is the cephadm
> log from the bootstrap node and the second one is from the second host
> after adding it to the cluster.
>
> bootstrap command used:
> *sudo cephadm bootstrap --skip-monitoring-stack --mon-ip *.*.*.*
> --cluster-network *.*.*.*/24 --ssh-user ceph_user --ssh-private-key
> /home/ceph_user/.ssh/id_rsa --ssh-public-key /home/ceph_user/.ssh/id_rsa.pub
> --config /home/ceph_user/ceph_bootstrap/ceph.conf --no-minimize-config*
>
> containers in bootstrap host:
> [root@hcictrl01 ~]# podman ps -a --format "{{.Image}} {{.Command}} {{.Names}}"
> quay.io/ceph/ceph:v16.2.7 -n mon.hcictrl01 ... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mon-hcictrl01
> quay.io/ceph/ceph:v16.2.7 -n mgr.hcictrl01.... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mgr-hcictrl01-vmdxbg
> docker.io/ceph/daemon-base:latest-pacific-devel -n client.crash.h... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-crash-hcictrl01
>
> host add command used:
> *ceph orch host add hcictrl02 *.*.*.* --labels _admin*
>
> containers in second host:
> [root@hcictrl02 ~]# podman ps -a --format "{{.Image}} {{.Command}} {{.Names}}"
> docker.io/ceph/daemon-base:latest-pacific-devel -n client.crash.h... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-crash-hcictrl02
> docker.io/ceph/daemon-base:latest-pacific-devel -n mgr.hcictrl02.... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mgr-hcictrl02-xfbcwn
> docker.io/ceph/daemon-base:latest-pacific-devel -n mon.hcictrl02 ... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mon-hcictrl02
>
> However, does the following line have anything to do with this behaviour
> of cephadm?
>
> https://github.com/ceph/ceph/blob/v16.2.7/src/common/options.cc#L459
>
> [root@hcictrl01 ~]# ceph-conf -D | grep -i container_image
> container_image = docker.io/ceph/daemon-base:latest-pacific-devel
>
> Thanks and Regards,
> Arun Vinod
>
>
> On Mon, 31 Jan 2022 at 22:25, Adam King <adking@xxxxxxxxxx> wrote:
>
>> Hi Arun,
>>
>> Not sure exactly how things got this way. When you provide "--image
>> <image-name>" when bootstrapping, that should set the image to be used for
>> all ceph containers. I've never seen just the bootstrap mgr/mon get a
>> totally different image. It would be interesting to see the full
>> bootstrap output here, as this issue is new to me.
>>
>> As for resolving the issue, you should be able to use the upgrade
>> procedure to get all the containers on the right image. Just run "ceph orch
>> upgrade start <image-name>", then keep checking "ceph orch upgrade
>> status" until it no longer says it's in progress. That should get all the
>> ceph daemons onto whatever image you specify in the upgrade start
>> command and cause future ceph daemons to be deployed with that image as
>> well.
>>
>> - Adam King
>>
>> On Mon, Jan 31, 2022 at 10:08 AM Arun Vinod <arunvinod.tech@xxxxxxxxx>
>> wrote:
>>
>>> Hi All,
>>>
>>> How can we change the default behaviour of cephadm to use stable container
>>> images instead of the default latest/devel images?
>>>
>>> By default, when we bootstrap a cluster and add two additional hosts
>>> after the bootstrap finishes, daemons are created from two container
>>> images: *quay.io/ceph/ceph:v16* and
>>> *docker.io/ceph/daemon-base:latest-pacific-devel*.
>>>
>>> The ceph:v16 image from quay.io is stable, but the second image from
>>> docker.io is not, due to the latest/devel tags, which are basically
>>> untested images according to ceph.
>>>
>>> How can we tell cephadm to use a stable version of the image for
>>> *daemon-base*?
>>>
>>> The following is the final list of containers created on the cluster
>>> after bootstrap finished and another host was added. Here the mon and mgr
>>> daemons on the two hosts are running on different container images. Most
>>> importantly, most of the containers are running on latest-pacific-devel
>>> images, which is not apt for a production cluster.
>>>
>>> In bootstrap node:
>>> CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
>>> afeb6f92deb2  quay.io/ceph/ceph:v16.2.7  -n mon.hcictrl01 ...  4 hours ago  Up 4 hours ago  ceph-e8200504-8287-11ec-a14f-0015171590ba-mon-hcictrl01
>>> c43d48766a08  quay.io/ceph/ceph:v16.2.7  -n mgr.hcictrl01....  4 hours ago  Up 4 hours ago  ceph-e8200504-8287-11ec-a14f-0015171590ba-mgr-hcictrl01-rmosyh
>>> d70cea0fd561  docker.io/ceph/daemon-base:latest-pacific-devel  -n client.crash.h...  4 hours ago  Up 4 hours ago  ceph-e8200504-8287-11ec-a14f-0015171590ba-crash-hcictrl01
>>>
>>> In rest of the nodes:
>>> CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
>>> d816ee470753  docker.io/ceph/daemon-base:latest-pacific-devel  -n client.crash.h...  6 minutes ago  Up 6 minutes ago  ceph-e8200504-8287-11ec-a14f-0015171590ba-crash-hcictrl02
>>> dde4646f4819  docker.io/ceph/daemon-base:latest-pacific-devel  -n mgr.hcictrl02....  6 minutes ago  Up 6 minutes ago  ceph-e8200504-8287-11ec-a14f-0015171590ba-mgr-hcictrl02-hfhapx
>>> 12191a039525  docker.io/ceph/daemon-base:latest-pacific-devel  -n mon.hcictrl02 ...  6 minutes ago  Up 6 minutes ago  ceph-e8200504-8287-11ec-a14f-0015171590ba-mon-hcictrl02
>>>
>>> Can someone help explain how cephadm chooses this default image, or any
>>> workaround to choose a specific image instead of the devel images?
>>>
>>> Thanks in advance.
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx