Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

Hi Adam,

Many thanks for the responses and for clarifying the global usage of the
--image parameter. Even though I gave --image during bootstrap, only the
mgr and mon daemons on the bootstrap host are created with that image; the
rest of the daemons are created from the daemon-base image, as I mentioned
earlier.

So, there are two images coming into action here. The first one can be
controlled with the --image parameter of bootstrap (which worked when
supplied before the bootstrap keyword).
The second container image is controlled by the config option
'container_image', which defaults to
'docker.io/ceph/daemon-base:latest-pacific-devel'.
Even though it can be modified at runtime after bootstrap, the existing
daemons will not be changed; that case can be handled with the 'ceph orch
upgrade' command, as you mentioned at first.
However, I observed that if this option is set in the bootstrap config
file, all daemons are created with the specified image from bootstrap
itself.
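To make that concrete, this is roughly the config fragment I mean (a sketch, assuming 'container_image' is accepted under [global] in the file passed via --config, which matches what I observed):

```ini
# Sketch of a bootstrap config file passed via --config.
# Setting container_image here made all daemons use this image
# from bootstrap itself, instead of the daemon-base default.
[global]
container_image = quay.io/ceph/ceph:v16.2.7
```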

So, the takeaway is: if we set the first image using the '--image'
argument of the bootstrap command and the second image using the
'container_image' option in the bootstrap config file, all daemons will be
created with the same image.

So the question is: does cephadm really require two images?

Also, one more observation: even though I gave the same image in both of
the above places, I can see a difference among the same type of daemons
created on different hosts (even though all daemons use a single image in
effect).

Following is the result of a cluster created on 3 hosts. The bootstrap
command is below (the rest of the services were deployed using ceph orch):

'sudo cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap
--skip-monitoring-stack --mon-ip 10.175.41.11 --cluster-network
10.175.42.0/24 --ssh-user ceph_deploy --ssh-private-key
/home/ceph_deploy/.ssh/id_rsa --ssh-public-key
/home/ceph_deploy/.ssh/id_rsa.pub --config
/home/ceph_deploy/ceph_bootstrap/ceph.conf --initial-dashboard-password
Qwe4Rt6D33 --dashboard-password-noupdate --no-minimize-config'

[root@hcictrl01 stack_orchestrator]# ceph orch ls
NAME        PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
crash                   3/3  9m ago     15m  *
mds.cephfs              3/3  9m ago     9m   hcictrl02;hcictrl03;hcictrl01;count:3
mgr                     3/3  9m ago     13m  hcictrl02;hcictrl03;hcictrl01;count:3
mon                     3/5  9m ago     15m  count:5
osd                       8  9m ago     -    <unmanaged>
rgw.rgw     ?:7480      3/3  9m ago     9m   hcictrl02;hcictrl03;hcictrl01;count:3


[root@hcictrl01 stack_orchestrator]# ceph orch ps
NAME                         HOST       PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
crash.hcictrl01              hcictrl01          running (15m)  9m ago     15m    6983k        -  16.2.7   231fd40524c4  f6f866f4be92
crash.hcictrl02              hcictrl02          running (14m)  9m ago     14m    6987k        -  16.2.7   231fd40524c4  1cb62e191c07
crash.hcictrl03              hcictrl03          running (14m)  9m ago     14m    6995k        -  16.2.7   231fd40524c4  3e03f99065c0
mds.cephfs.hcictrl01.vuamjy  hcictrl01          running (10m)  9m ago     10m    13.0M        -  16.2.7   231fd40524c4  9b3aeab68115
mds.cephfs.hcictrl02.myohpi  hcictrl02          running (10m)  9m ago     10m    15.6M        -  16.2.7   231fd40524c4  5cded1208028
mds.cephfs.hcictrl03.jziler  hcictrl03          running (10m)  9m ago     10m    12.6M        -  16.2.7   231fd40524c4  94dccd01a123
mgr.hcictrl01.ljtznv         hcictrl01  *:9283  running (16m)  9m ago     16m     428M        -  16.2.7   231fd40524c4  5bc89cc72b37
mgr.hcictrl02.izfnvh         hcictrl02  *:8443  running (14m)  9m ago     14m     382M        -  16.2.7   231fd40524c4  acd435a8b6b1
mgr.hcictrl03.eekrgo         hcictrl03  *:8443  running (14m)  9m ago     14m     382M        -  16.2.7   231fd40524c4  3c241b35a3fe
mon.hcictrl01                hcictrl01          running (16m)  9m ago     16m    79.9M    2048M  16.2.7   231fd40524c4  19a2db98043a
mon.hcictrl02                hcictrl02          running (14m)  9m ago     14m    88.2M    2048M  16.2.7   231fd40524c4  4941ee015e75
mon.hcictrl03                hcictrl03          running (14m)  9m ago     14m    86.2M    2048M  16.2.7   231fd40524c4  150cb8e9d25d
osd.0                        hcictrl02          running (13m)  9m ago     13m    31.8M    1536M  16.2.7   231fd40524c4  9dc98cc6ba3d
osd.1                        hcictrl02          running (13m)  9m ago     13m    42.4M    1536M  16.2.7   231fd40524c4  62caab356b02
osd.2                        hcictrl02          running (12m)  9m ago     12m    58.9M    1536M  16.2.7   231fd40524c4  38ea17598fa5
osd.3                        hcictrl01          running (12m)  9m ago     12m    35.1M    1536M  16.2.7   231fd40524c4  03284b111258
osd.4                        hcictrl01          running (11m)  9m ago     11m    44.1M    1536M  16.2.7   231fd40524c4  3e3315fbb46a
osd.5                        hcictrl01          running (11m)  9m ago     11m    54.6M    1536M  16.2.7   231fd40524c4  1f90169412a5
osd.6                        hcictrl03          running (11m)  9m ago     11m    35.2M    1536M  16.2.7   231fd40524c4  d26811c0c9fc
osd.7                        hcictrl03          running (10m)  9m ago     10m    59.1M    1536M  16.2.7   231fd40524c4  1ce10dcfa6ca
rgw.rgw.hcictrl01.ceruvj     hcictrl01  *:7480  running (9m)   9m ago     9m     49.6M        -  16.2.7   231fd40524c4  c2c45767cda5
rgw.rgw.hcictrl02.pzgwht     hcictrl02  *:7480  running (9m)   9m ago     9m     18.6M        -  16.2.7   231fd40524c4  461c0869e559
rgw.rgw.hcictrl03.ryrptr     hcictrl03  *:7480  running (9m)   9m ago     9m     20.9M        -  16.2.7   231fd40524c4  8fac107de350

[root@hcictrl01 stack_orchestrator]# ceph config dump | grep -i image
global   basic   container_image   quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  *

Container list on the bootstrap host (the image name is not uniform across
all daemons):

[root@hcictrl01 stack_orchestrator]# podman ps
CONTAINER ID  IMAGE                                                                                      COMMAND               CREATED         STATUS             PORTS  NAMES
19a2db98043a  quay.io/ceph/ceph:v16.2.7                                                                  -n mon.hcictrl01 ...  17 minutes ago  Up 17 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mon-hcictrl01
5bc89cc72b37  quay.io/ceph/ceph:v16.2.7                                                                  -n mgr.hcictrl01....  17 minutes ago  Up 17 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mgr-hcictrl01-ljtznv
f6f866f4be92  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n client.crash.h...  15 minutes ago  Up 15 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-crash-hcictrl01
03284b111258  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n osd.3 -f --set...  12 minutes ago  Up 12 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-3
3e3315fbb46a  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n osd.4 -f --set...  12 minutes ago  Up 12 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-4
1f90169412a5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n osd.5 -f --set...  11 minutes ago  Up 11 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-5
9b3aeab68115  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n mds.cephfs.hci...  10 minutes ago  Up 10 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mds-cephfs-hcictrl01-vuamjy
c2c45767cda5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  -n client.rgw.rgw...  10 minutes ago  Up 10 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-rgw-rgw-hcictrl01-ceruvj

In the above output, the mon and mgr on the bootstrap node refer to the
container image by tag (v16.2.7), unlike the rest of the daemons, which
refer to it by image digest.
Even though the digest belongs to the same image, non-uniform behaviour is
observed in the warning generated by the mons (see below):

[root@hcictrl01 stack_orchestrator]# ceph health detail
HEALTH_WARN mons are allowing insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure
global_id reclaim
    mon.hcictrl02 has auth_allow_insecure_global_id_reclaim set to true
    mon.hcictrl03 has auth_allow_insecure_global_id_reclaim set to true
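As an aside, the tag-vs-digest split in the podman output above is just two forms of reference to the same image. A quick sketch (plain Python, purely illustrative, not cephadm code) of how the two forms can be told apart:

```python
# Classify an OCI image reference as digest-pinned or tag-pinned.
# (Illustrative helper only; not part of cephadm.)
def image_ref_kind(ref: str) -> str:
    """Return 'digest' for repo@sha256:<hex> references, else 'tag'."""
    return "digest" if "@sha256:" in ref else "tag"

# The mon/mgr on the bootstrap host use the tag form:
print(image_ref_kind("quay.io/ceph/ceph:v16.2.7"))  # tag

# The remaining daemons use the digest form of the same image:
digest_ref = ("quay.io/ceph/ceph@sha256:"
              "ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e")
print(image_ref_kind(digest_ref))  # digest
```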

