Hi All,

We are trying to deploy a Ceph (16.2.7) cluster in production using cephadm. Unfortunately, we encountered the following situation.

Description

The cephadm (v16.2.7) bootstrap by default chooses the container images quay.io/ceph/ceph:v16 and docker.io/ceph/daemon-base:latest-pacific-devel. Since we want to avoid using devel and latest container images in production, we pulled the required images (with static tags) prior to running bootstrap. We also passed the image name and the --skip-pull parameter to the bootstrap command.

Still, cephadm uses the image docker.io/ceph/daemon-base:latest-pacific-devel for some of the daemons, and it pulls that image even though --skip-pull was given. As a result, daemons on different hosts are running on different versions of the container images. There appears to be no provision to use a specific image instead of docker.io/ceph/daemon-base:latest-pacific-devel during bootstrap, for consistency across all daemons in the cluster. The same behaviour exists when creating daemons with ceph orch.

Command used to bootstrap the cluster (stable container images were already pulled beforehand):

sudo cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap --skip-monitoring-stack --mon-ip ... --cluster-network ... --ssh-user ceph_user --config /home/ceph_user/ceph_bootstrap/ceph.conf --initial-dashboard-password Q5446UBS3KK9 --dashboard-password-noupdate --no-minimize-config --skip-pull

Below are some entries from cephadm.log, which clearly show it trying to pull the image even though --skip-pull is provided:

2022-01-27 17:11:13,900 7f01b6621b80 INFO Deploying mon service with default placement...
2022-01-27 17:11:14,212 7f211cc85b80 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'docker.io/ceph/daemon-base:latest-pacific-devel', 'ls']
2022-01-27 17:11:14,296 7f211cc85b80 DEBUG /bin/podman: 3.3.1
2022-01-27 17:11:14,660 7f211cc85b80 DEBUG /bin/podman: 4da6ea847240,24.26MB / 134.9GB
2022-01-27 17:11:14,660 7f211cc85b80 DEBUG /bin/podman: 52b12ff050d8,390.7MB / 134.9GB
2022-01-27 17:11:14,660 7f211cc85b80 DEBUG /bin/podman: 5c979c84d182,4.342MB / 134.9GB
2022-01-27 17:11:14,766 7f211cc85b80 DEBUG systemctl: enabled
2022-01-27 17:11:14,778 7f211cc85b80 DEBUG systemctl: active
2022-01-27 17:11:14,912 7f211cc85b80 DEBUG /bin/podman: 52b12ff050d88841131aa6508f7576a1dca8e0004db08384dd13dca6c2d3b725,quay.io/ceph/ceph:v16.2.7,cc266d6139f4d044d28ace2308f7befcdfead3c3e88bc3faed905298cae299ef,2022-01-27 17:10:33.135056074 +0530 IST,
2022-01-27 17:11:15,059 7f211cc85b80 DEBUG /bin/podman: [quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb0297f2a79324739404cc1765728 quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54]
2022-01-27 17:11:15,456 7f01b6621b80 DEBUG /usr/bin/ceph: Scheduled mon update...
2022-01-27 17:11:15,641 7f211cc85b80 DEBUG /bin/podman: ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
2022-01-27 17:11:15,972 7f01b6621b80 INFO Deploying mgr service with default placement...
2022-01-27 17:11:16,127 7f211cc85b80 DEBUG systemctl: enabled
2022-01-27 17:11:16,140 7f211cc85b80 DEBUG systemctl: active
2022-01-27 17:11:16,296 7f211cc85b80 DEBUG /bin/podman: 4da6ea847240bab786f596ddc87160e11056c74aa7004dc38ee12be331a5ea4e,quay.io/ceph/ceph:v16.2.7,cc266d6139f4d044d28ace2308f7befcdfead3c3e88bc3faed905298cae299ef,2022-01-27 17:10:25.830630277 +0530 IST,
2022-01-27 17:11:17,023 7f0b0c505b80 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'docker.io/ceph/daemon-base:latest-pacific-devel', 'ceph-volume', '--fsid', 'e3c9bff6-7f65-11ec-bdff-0015171590ba', '--', 'inventory', '--format=json-pretty', '--filter-for-batch']
2022-01-27 17:11:17,102 7f0b0c505b80 DEBUG /bin/podman: 3.3.1
2022-01-27 17:11:17,275 7f0b0c505b80 DEBUG /bin/podman: 4da6ea847240,24.71MB / 134.9GB
2022-01-27 17:11:17,275 7f0b0c505b80 DEBUG /bin/podman: 52b12ff050d8,390.8MB / 134.9GB
2022-01-27 17:11:17,275 7f0b0c505b80 DEBUG /bin/podman: d242f1fa7a66,28.28MB / 134.9GB
2022-01-27 17:11:17,417 7f0b0c505b80 INFO Inferring config /var/lib/ceph/e3c9bff6-7f65-11ec-bdff-0015171590ba/mon.hcictrl01/config
2022-01-27 17:11:17,417 7f0b0c505b80 DEBUG Using specified fsid: e3c9bff6-7f65-11ec-bdff-0015171590ba
2022-01-27 17:11:17,620 7f01b6621b80 DEBUG /usr/bin/ceph: Scheduled mgr update...
2022-01-27 17:11:17,727 7f0b0c505b80 DEBUG stat: Trying to pull docker.io/ceph/daemon-base:latest-pacific-devel...
2022-01-27 17:11:18,489 7f01b6621b80 INFO Deploying crash service with default placement...
2022-01-27 17:11:18,763 7f3ed21eeb80 DEBUG sestatus: SELinux status: disabled
2022-01-27 17:11:18,768 7f3ed21eeb80 DEBUG sestatus: SELinux status: disabled
2022-01-27 17:11:18,774 7f3ed21eeb80 DEBUG sestatus: SELinux status: disabled
2022-01-27 17:11:18,779 7f3ed21eeb80 DEBUG sestatus: SELinux status: disabled
2022-01-27 17:11:18,784 7f3ed21eeb80 DEBUG sestatus: SELinux status: disabled
2022-01-27 17:11:18,789 7f3ed21eeb80 DEBUG sestatus: SELinux status: disabled
2022-01-27 17:11:19,434 7f75157f1b80 DEBUG --------------------------------------------------------------------------------
cephadm ['--image', 'docker.io/ceph/daemon-base:latest-pacific-devel', 'deploy', '--fsid', 'e3c9bff6-7f65-11ec-bdff-0015171590ba', '--name', 'mgr.hcictrl01.njkjzk', '--meta-json', '{"service_name": "mgr", "ports": [9283], "ip": null, "deployed_by": ["quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb0297f2a79324739404cc1765728", "quay.io/ceph/ceph@sha256:bb6a71f7f481985f6d3b358e3b9ef64c6755b3db5aa53198e0aac38be5c8ae54"], "rank": null, "rank_generation": null}', '--config-json', '-', '--tcp-ports', '9283', '--reconfig']
2022-01-27 17:11:19,542 7f75157f1b80 DEBUG /bin/podman: 3.3.1

OS : CentOS Stream release 8
Kernel : Linux hcictrl01 4.18.0-348.2.1.el8_5.x86_64 #1 SMP
Podman : 3.3.1
Ceph version : ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

Any insight into the matter will be highly appreciated.

Thanks in advance,
Arun Vinod
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
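P.S. For anyone hitting the same issue: the workaround we are currently evaluating is to pin the image in the cluster configuration right after bootstrap and then redeploy the affected daemons. This is an untested sketch, assuming the global `container_image` option and the `ceph orch` commands behave as described in the cephadm documentation; the daemon name is the one from our logs and will differ on other clusters.

```shell
# Sketch (untested here): force all daemons onto a pinned image instead of
# docker.io/ceph/daemon-base:latest-pacific-devel.

# Pin the image used when cephadm (re)deploys any daemon:
sudo cephadm shell -- ceph config set global container_image quay.io/ceph/ceph:v16.2.7

# Redeploy a daemon that was created with the devel image
# (daemon name taken from our logs; adjust for your cluster):
sudo cephadm shell -- ceph orch daemon redeploy mgr.hcictrl01.njkjzk

# Verify which image each daemon is actually running:
sudo cephadm shell -- ceph orch ps --format yaml | grep container_image_name
```

This only addresses consistency after bootstrap; it does not explain why --skip-pull is ignored during bootstrap itself.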