Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

Hi Arun,

As you pointed out in your message, the containers whose image name uses a
container digest are running the same image as the two that use the tag (you
can tell for sure because the image IDs in "ceph orch ps" don't differ between
those daemons). The reason for the difference is that the first mgr and mon
are deployed directly by bootstrap, while all the other daemons are deployed
later by the cephadm mgr module. The cephadm mgr module handles converting
image tags to digests, so those first two daemons aren't able to be deployed
using the digest name, but ultimately this should be irrelevant since, as
mentioned before, they're actually the same image in this case. So, to answer
your question, there is only one image being used in the cluster (the one
specified with --image in bootstrap), and the only difference between that
first mgr and mon and all the other daemons is purely cosmetic. There is no
need to run an upgrade on the cluster right after deploying when using the
--image flag. Unfortunately, I can't really speak to this specific mon health
warning. Maybe it's something related to the config file passed? It's possible
that simply redeploying the mon that didn't have the health warning is what
brought it in line with the others. Not really sure on that front.
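
If you want to double-check that the tag and the digest really resolve to the
same image on a given host, comparing the locally pulled images is one quick
way to do it (just a sketch, assuming podman is the container engine in use):

    # List the locally pulled ceph images together with their digests.
    # The v16.2.7 tag and the sha256 digest used by the other daemons
    # should show the same IMAGE ID.
    podman images --digests quay.io/ceph/ceph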

- Adam King

On Thu, Feb 3, 2022 at 2:24 AM Arun Vinod <arunvinod.tech@xxxxxxxxx> wrote:

> Hi Adam,
>
> Many thanks for the responses and for clarifying the global usage of the
> --image parameter. Even though I gave --image during bootstrap, only the mgr
> & mon daemons on the bootstrap host are created with that image; the rest of
> the daemons are created from the daemon-base image, as I mentioned earlier.
>
> So, there are two images coming into play here. The first one can be
> controlled with the --image parameter of bootstrap (which worked when
> supplied in front of the bootstrap keyword).
> The second container image is controlled by the variable 'container_image',
> which is set to 'docker.io/ceph/daemon-base:latest-pacific-devel' by
> default.
> Even though it can be modified at runtime after bootstrap, the existing
> daemons will not be modified. But that case can be handled with the 'ceph
> orch upgrade' command, as you mentioned at first.
> However, we observed that if we set this variable in the bootstrap config
> file, all the daemons are created with the specified image from bootstrap
> itself (see the sketch below).
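>
> As a rough sketch of what we did (the image tag and the shortened command
> are just our values, not anything required):
>
>     # in the config file passed to bootstrap via --config
>     [global]
>     container_image = quay.io/ceph/ceph:v16.2.7
>
>     # plus the same image given to bootstrap itself
>     sudo cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap --config ceph.conf ...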
>
> So, the takeaway is: if we specify the first image using the '--image'
> argument of the bootstrap command and the second image using the variable
> 'container_image' in the bootstrap config file, all daemons will be created
> with the same image.
>
> So the question is: does cephadm really require two images?
>
> Also, one more observation: even though I gave the same image in both of the
> above settings, I can see a difference among daemons of the same type created
> on different hosts (even though all daemons effectively use a single image).
>
> The following is from a cluster created on 3 hosts. The bootstrap command is
> below (the rest of the services are deployed using ceph orch):
>
> 'sudo cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap
> --skip-monitoring-stack --mon-ip 10.175.41.11 --cluster-network
> 10.175.42.0/24 --ssh-user ceph_deploy --ssh-private-key
> /home/ceph_deploy/.ssh/id_rsa --ssh-public-key
> /home/ceph_deploy/.ssh/id_rsa.pub --config
> /home/ceph_deploy/ceph_bootstrap/ceph.conf --initial-dashboard-password
> Qwe4Rt6D33 --dashboard-password-noupdate --no-minimize-config'
>
> [root@hcictrl01 stack_orchestrator]# ceph orch ls
> NAME        PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
> crash                   3/3  9m ago     15m  *
> mds.cephfs              3/3  9m ago     9m   hcictrl02;hcictrl03;hcictrl01;count:3
> mgr                     3/3  9m ago     13m  hcictrl02;hcictrl03;hcictrl01;count:3
> mon                     3/5  9m ago     15m  count:5
> osd                       8  9m ago     -    <unmanaged>
> rgw.rgw     ?:7480      3/3  9m ago     9m   hcictrl02;hcictrl03;hcictrl01;count:3
>
>
> [root@hcictrl01 stack_orchestrator]# ceph orch ps
> NAME                         HOST       PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
> crash.hcictrl01              hcictrl01          running (15m)  9m ago     15m  6983k    -        16.2.7   231fd40524c4  f6f866f4be92
> crash.hcictrl02              hcictrl02          running (14m)  9m ago     14m  6987k    -        16.2.7   231fd40524c4  1cb62e191c07
> crash.hcictrl03              hcictrl03          running (14m)  9m ago     14m  6995k    -        16.2.7   231fd40524c4  3e03f99065c0
> mds.cephfs.hcictrl01.vuamjy  hcictrl01          running (10m)  9m ago     10m  13.0M    -        16.2.7   231fd40524c4  9b3aeab68115
> mds.cephfs.hcictrl02.myohpi  hcictrl02          running (10m)  9m ago     10m  15.6M    -        16.2.7   231fd40524c4  5cded1208028
> mds.cephfs.hcictrl03.jziler  hcictrl03          running (10m)  9m ago     10m  12.6M    -        16.2.7   231fd40524c4  94dccd01a123
> mgr.hcictrl01.ljtznv         hcictrl01  *:9283  running (16m)  9m ago     16m  428M     -        16.2.7   231fd40524c4  5bc89cc72b37
> mgr.hcictrl02.izfnvh         hcictrl02  *:8443  running (14m)  9m ago     14m  382M     -        16.2.7   231fd40524c4  acd435a8b6b1
> mgr.hcictrl03.eekrgo         hcictrl03  *:8443  running (14m)  9m ago     14m  382M     -        16.2.7   231fd40524c4  3c241b35a3fe
> mon.hcictrl01                hcictrl01          running (16m)  9m ago     16m  79.9M    2048M    16.2.7   231fd40524c4  19a2db98043a
> mon.hcictrl02                hcictrl02          running (14m)  9m ago     14m  88.2M    2048M    16.2.7   231fd40524c4  4941ee015e75
> mon.hcictrl03                hcictrl03          running (14m)  9m ago     14m  86.2M    2048M    16.2.7   231fd40524c4  150cb8e9d25d
> osd.0                        hcictrl02          running (13m)  9m ago     13m  31.8M    1536M    16.2.7   231fd40524c4  9dc98cc6ba3d
> osd.1                        hcictrl02          running (13m)  9m ago     13m  42.4M    1536M    16.2.7   231fd40524c4  62caab356b02
> osd.2                        hcictrl02          running (12m)  9m ago     12m  58.9M    1536M    16.2.7   231fd40524c4  38ea17598fa5
> osd.3                        hcictrl01          running (12m)  9m ago     12m  35.1M    1536M    16.2.7   231fd40524c4  03284b111258
> osd.4                        hcictrl01          running (11m)  9m ago     11m  44.1M    1536M    16.2.7   231fd40524c4  3e3315fbb46a
> osd.5                        hcictrl01          running (11m)  9m ago     11m  54.6M    1536M    16.2.7   231fd40524c4  1f90169412a5
> osd.6                        hcictrl03          running (11m)  9m ago     11m  35.2M    1536M    16.2.7   231fd40524c4  d26811c0c9fc
> osd.7                        hcictrl03          running (10m)  9m ago     10m  59.1M    1536M    16.2.7   231fd40524c4  1ce10dcfa6ca
> rgw.rgw.hcictrl01.ceruvj     hcictrl01  *:7480  running (9m)   9m ago     9m   49.6M    -        16.2.7   231fd40524c4  c2c45767cda5
> rgw.rgw.hcictrl02.pzgwht     hcictrl02  *:7480  running (9m)   9m ago     9m   18.6M    -        16.2.7   231fd40524c4  461c0869e559
> rgw.rgw.hcictrl03.ryrptr     hcictrl03  *:7480  running (9m)   9m ago     9m   20.9M    -        16.2.7   231fd40524c4  8fac107de350
>
> [root@hcictrl01 stack_orchestrator]# ceph config dump | grep -i image
> global    basic    container_image    quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  *
>
> Container list on the bootstrap host (the image name is not uniform across
> all daemons):
>
> [root@hcictrl01 stack_orchestrator]# podman ps
> CONTAINER ID  IMAGE
>               COMMAND               CREATED         STATUS             PORTS  NAMES
> 19a2db98043a  quay.io/ceph/ceph:v16.2.7
>               -n mon.hcictrl01 ...  17 minutes ago  Up 17 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mon-hcictrl01
> 5bc89cc72b37  quay.io/ceph/ceph:v16.2.7
>               -n mgr.hcictrl01....  17 minutes ago  Up 17 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mgr-hcictrl01-ljtznv
> f6f866f4be92  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n client.crash.h...  15 minutes ago  Up 15 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-crash-hcictrl01
> 03284b111258  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n osd.3 -f --set...  12 minutes ago  Up 12 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-3
> 3e3315fbb46a  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n osd.4 -f --set...  12 minutes ago  Up 12 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-4
> 1f90169412a5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n osd.5 -f --set...  11 minutes ago  Up 11 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-5
> 9b3aeab68115  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n mds.cephfs.hci...  10 minutes ago  Up 10 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mds-cephfs-hcictrl01-vuamjy
> c2c45767cda5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n client.rgw.rgw...  10 minutes ago  Up 10 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-rgw-rgw-hcictrl01-ceruvj
>
> In the above output, the mon and mgr on the bootstrap node refer to the
> container image by tag (v16.2.7), unlike the rest of the daemons, which refer
> to it by image digest.
> Even though the digest belongs to the same image, non-uniform behaviour is
> observed in the warning generated by the mons (see below).
>
> [root@hcictrl01 stack_orchestrator]# ceph health detail
> HEALTH_WARN mons are allowing insecure global_id reclaim
> [WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure
> global_id reclaim
>     mon.hcictrl02 has auth_allow_insecure_global_id_reclaim set to true
>     mon.hcictrl03 has auth_allow_insecure_global_id_reclaim set to true
>
> From the above outputs, it can be observed that even though the mons on all
> hosts are created with the same image, only the mons on host2 and host3 have
> the warning about auth_allow_insecure_global_id_reclaim (which makes the mon
> running on the bootstrap host look suspicious).
>
> ======================================
>
> Once we upgrade all daemons to the same image, the mon and mgr on the
> bootstrap node are redeployed and the auth_allow_insecure_global_id_reclaim
> warning is observed uniformly across all mons.
>
> ceph orch upgrade start quay.io/ceph/ceph:v16.2.7
>
> [root@hcictrl01 stack_orchestrator]# ceph orch upgrade status
> {
>     "target_image": null,
>     "in_progress": false,
>     "services_complete": [],
>     "progress": null,
>     "message": ""
> }
>
> [root@hcictrl01 stack_orchestrator]# ceph orch ps
> NAME                         HOST       PORTS   STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
> crash.hcictrl01              hcictrl01          running (40m)  4m ago     40m  6975k    -        16.2.7   231fd40524c4  f6f866f4be92
> crash.hcictrl02              hcictrl02          running (40m)  4m ago     40m  6975k    -        16.2.7   231fd40524c4  1cb62e191c07
> crash.hcictrl03              hcictrl03          running (39m)  4m ago     39m  6979k    -        16.2.7   231fd40524c4  3e03f99065c0
> mds.cephfs.hcictrl01.vuamjy  hcictrl01          running (35m)  4m ago     35m  15.5M    -        16.2.7   231fd40524c4  9b3aeab68115
> mds.cephfs.hcictrl02.myohpi  hcictrl02          running (35m)  4m ago     35m  16.7M    -        16.2.7   231fd40524c4  5cded1208028
> mds.cephfs.hcictrl03.jziler  hcictrl03          running (35m)  4m ago     35m  15.4M    -        16.2.7   231fd40524c4  94dccd01a123
> mgr.hcictrl01.ljtznv         hcictrl01  *:8443  running (15m)  4m ago     42m  387M     -        16.2.7   231fd40524c4  448c6fab2b98
> mgr.hcictrl02.izfnvh         hcictrl02  *:8443  running (40m)  4m ago     40m  448M     -        16.2.7   231fd40524c4  acd435a8b6b1
> mgr.hcictrl03.eekrgo         hcictrl03  *:8443  running (39m)  4m ago     39m  384M     -        16.2.7   231fd40524c4  3c241b35a3fe
> mon.hcictrl01                hcictrl01          running (15m)  4m ago     42m  123M     2048M    16.2.7   231fd40524c4  b2a8bdbd6983
> mon.hcictrl02                hcictrl02          running (40m)  4m ago     40m  135M     2048M    16.2.7   231fd40524c4  4941ee015e75
> mon.hcictrl03                hcictrl03          running (39m)  4m ago     39m  128M     2048M    16.2.7   231fd40524c4  150cb8e9d25d
> osd.0                        hcictrl02          running (39m)  4m ago     39m  66.4M    1536M    16.2.7   231fd40524c4  9dc98cc6ba3d
> osd.1                        hcictrl02          running (39m)  4m ago     39m  69.2M    1536M    16.2.7   231fd40524c4  62caab356b02
> osd.2                        hcictrl02          running (38m)  4m ago     38m  88.7M    1536M    16.2.7   231fd40524c4  38ea17598fa5
> osd.3                        hcictrl01          running (38m)  4m ago     38m  72.1M    1536M    16.2.7   231fd40524c4  03284b111258
> osd.4                        hcictrl01          running (37m)  4m ago     37m  67.3M    1536M    16.2.7   231fd40524c4  3e3315fbb46a
> osd.5                        hcictrl01          running (37m)  4m ago     37m  87.5M    1536M    16.2.7   231fd40524c4  1f90169412a5
> osd.6                        hcictrl03          running (36m)  4m ago     36m  74.4M    1536M    16.2.7   231fd40524c4  d26811c0c9fc
> osd.7                        hcictrl03          running (36m)  4m ago     36m  98.8M    1536M    16.2.7   231fd40524c4  1ce10dcfa6ca
> rgw.rgw.hcictrl01.ceruvj     hcictrl01  *:7480  running (35m)  4m ago     35m  52.1M    -        16.2.7   231fd40524c4  c2c45767cda5
> rgw.rgw.hcictrl02.pzgwht     hcictrl02  *:7480  running (35m)  4m ago     35m  52.1M    -        16.2.7   231fd40524c4  461c0869e559
> rgw.rgw.hcictrl03.ryrptr     hcictrl03  *:7480  running (35m)  4m ago     35m  54.4M    -        16.2.7   231fd40524c4  8fac107de350
>
>
> [root@hcictrl01 stack_orchestrator]# ceph config dump | grep -i image
> global    basic    container_image    quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e  *
>
> [root@hcictrl01 stack_orchestrator]# ceph health detail
> HEALTH_WARN mons are allowing insecure global_id reclaim
> [WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure
> global_id reclaim
>     mon.hcictrl01 has auth_allow_insecure_global_id_reclaim set to true
>     mon.hcictrl02 has auth_allow_insecure_global_id_reclaim set to true
>     mon.hcictrl03 has auth_allow_insecure_global_id_reclaim set to true
>
> Containers on the bootstrap node (the image name is uniform across all daemons):
>
> [root@hcictrl01 stack_orchestrator]# podman ps
> CONTAINER ID  IMAGE
>               COMMAND               CREATED         STATUS             PORTS  NAMES
> f6f866f4be92  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n client.crash.h...  43 minutes ago  Up 43 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-crash-hcictrl01
> 03284b111258  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n osd.3 -f --set...  41 minutes ago  Up 41 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-3
> 3e3315fbb46a  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n osd.4 -f --set...  40 minutes ago  Up 40 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-4
> 1f90169412a5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n osd.5 -f --set...  40 minutes ago  Up 40 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-osd-5
> 9b3aeab68115  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n mds.cephfs.hci...  38 minutes ago  Up 38 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mds-cephfs-hcictrl01-vuamjy
> c2c45767cda5  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n client.rgw.rgw...  38 minutes ago  Up 38 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-rgw-rgw-hcictrl01-ceruvj
> 448c6fab2b98  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n mgr.hcictrl01....  18 minutes ago  Up 18 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mgr-hcictrl01-ljtznv
> b2a8bdbd6983  quay.io/ceph/ceph@sha256:ac9f32e9a27ded104c2ed64ddbe112d67a461edafb6a7a53525e72fc69df759e
>               -n mon.hcictrl01 ...  18 minutes ago  Up 18 minutes ago         ceph-c5aa753a-8422-11ec-b231-0015171590ba-mon-hcictrl01
>
> So, we are confused about the container image usage in an ideal Ceph cluster:
> whether there should be two images, whether both images should be the same,
> whether we should ideally run 'upgrade' right after cluster creation, etc.
>
> Apologies for the length of this message.
>
> Thanks in advance.
>
> Thanks and Regards,
> Arun Vinod
>
>
> On Tue, 1 Feb 2022 at 14:48, Arun Vinod <arunvinod.tech@xxxxxxxxx> wrote:
>
>> Hi Adam,
>>
>> Thanks for replying.
>>
>> I have tried "ceph orch upgrade start <image-name>" as a workaround and it
>> works as expected: all the daemons are recreated with the stable version of
>> the image. However, it still requires an initial fetch of the
>> latest-pacific-devel image and the creation of daemons with it first.
>>
>> Also, cephadm bootstrap was not accepting the --image option, so in order to
>> specify the image name I updated the value of DEFAULT_IMAGE in the cephadm
>> script (/usr/bin/cephadm) as follows:
>>
>> DEFAULT_IMAGE = 'quay.io/ceph/ceph:v16.2.7'
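>>
>> (For reference, a non-interactive one-liner along these lines should make the
>> same edit; just a sketch, adjust the image as needed:
>> sudo sed -i "s|^DEFAULT_IMAGE = .*|DEFAULT_IMAGE = 'quay.io/ceph/ceph:v16.2.7'|" /usr/bin/cephadm )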
>>
>> I have attached two logs to this email: the first is the cephadm log from
>> the bootstrap node and the second is from the second host after adding it to
>> the cluster.
>>
>> bootstrap command used:
>> sudo cephadm bootstrap --skip-monitoring-stack --mon-ip *.*.*.*
>> --cluster-network *.*.*.*/24 --ssh-user ceph_user --ssh-private-key
>> /home/ceph_user/.ssh/id_rsa --ssh-public-key
>> /home/ceph_user/.ssh/id_rsa.pub --config
>> /home/ceph_user/ceph_bootstrap/ceph.conf --no-minimize-config
>>
>> containers on the bootstrap host:
>> [root@hcictrl01 ~]# podman ps -a --format "{{.Image}} {{.Command}} {{.Names}}"
>> quay.io/ceph/ceph:v16.2.7 -n mon.hcictrl01 ... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mon-hcictrl01
>> quay.io/ceph/ceph:v16.2.7 -n mgr.hcictrl01.... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mgr-hcictrl01-vmdxbg
>> docker.io/ceph/daemon-base:latest-pacific-devel -n client.crash.h... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-crash-hcictrl01
>>
>> host add command used:
>> ceph orch host add hcictrl02 *.*.*.* --labels _admin
>>
>> containers on the second host:
>> [root@hcictrl02 ~]# podman ps -a --format "{{.Image}} {{.Command}} {{.Names}}"
>> docker.io/ceph/daemon-base:latest-pacific-devel -n client.crash.h... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-crash-hcictrl02
>> docker.io/ceph/daemon-base:latest-pacific-devel -n mgr.hcictrl02.... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mgr-hcictrl02-xfbcwn
>> docker.io/ceph/daemon-base:latest-pacific-devel -n mon.hcictrl02 ... ceph-6eaf15d8-8332-11ec-b820-0015171590ba-mon-hcictrl02
>>
>>
>> However, does the following line have anything to do with this behaviour
>> of cephadm?
>>
>> https://github.com/ceph/ceph/blob/v16.2.7/src/common/options.cc#L459
>>
>> [root@hcictrl01 ~]# ceph-conf -D | grep -i container_image
>>
>> container_image = docker.io/ceph/daemon-base:latest-pacific-devel
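>>
>> If that default is what's kicking in, it can presumably also be overridden at
>> runtime with something like:
>>
>>     ceph config set global container_image quay.io/ceph/ceph:v16.2.7
>>
>> although, as far as I can tell, daemons that are already running keep their
>> current image until they are redeployed or upgraded.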
>>
>>
>> Thanks and Regards,
>> Arun Vinod
>>
>>
>>
>> On Mon, 31 Jan 2022 at 22:25, Adam King <adking@xxxxxxxxxx> wrote:
>>
>>> Hi Arun,
>>>
>>> Not sure exactly how things got this way. When you provide "--image
>>> <image-name>" when bootstrapping, that should set the image to be used for
>>> all ceph containers. I've never seen just the bootstrap mgr/mon get a
>>> totally different image. It would be interesting to see the full bootstrap
>>> output here, as this issue is new to me.
>>>
>>> As for resolving the issue, you should be able to use the upgrade
>>> procedure to get all the containers onto the right image. Just run "ceph
>>> orch upgrade start <image-name>", then keep checking "ceph orch upgrade
>>> status" until it no longer says it's in progress. That should get all the
>>> ceph daemons onto whatever image you specify in the upgrade start command
>>> and cause future ceph daemons to be deployed with that image as well.
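>>>
>>> For example (a quick sketch; substitute whichever image you actually want
>>> the cluster on):
>>>
>>>     ceph orch upgrade start quay.io/ceph/ceph:v16.2.7
>>>     ceph orch upgrade status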
>>>
>>> - Adam King
>>>
>>> On Mon, Jan 31, 2022 at 10:08 AM Arun Vinod <arunvinod.tech@xxxxxxxxx>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> How can we change the default behaviour of cephadm to use stable container
>>>> images instead of the default latest/devel images?
>>>>
>>>> By default, when we bootstrap a cluster and add two additional hosts after
>>>> bootstrap has finished, daemons are created from two container images:
>>>> quay.io/ceph/ceph:v16 and docker.io/ceph/daemon-base:latest-pacific-devel.
>>>>
>>>> The ceph:v16 image from quay.io is stable, but the second image from
>>>> docker.io is not, due to the latest/devel tags, which are basically
>>>> untested images according to Ceph.
>>>>
>>>> How can we tell cephadm to use a stable version of the image for
>>>> daemon-base?
>>>>
>>>> The following is the final list of containers created on the cluster after
>>>> bootstrap is finished and another host is added. Here the mon and mgr
>>>> daemons on the two hosts are running on different container images. Most
>>>> importantly, most of the containers are running the latest-pacific-devel
>>>> image, which is not suitable for a production cluster.
>>>>
>>>> In the bootstrap node:
>>>> CONTAINER ID  IMAGE                                            COMMAND               CREATED      STATUS          PORTS  NAMES
>>>> afeb6f92deb2  quay.io/ceph/ceph:v16.2.7                        -n mon.hcictrl01 ...  4 hours ago  Up 4 hours ago         ceph-e8200504-8287-11ec-a14f-0015171590ba-mon-hcictrl01
>>>> c43d48766a08  quay.io/ceph/ceph:v16.2.7                        -n mgr.hcictrl01....  4 hours ago  Up 4 hours ago         ceph-e8200504-8287-11ec-a14f-0015171590ba-mgr-hcictrl01-rmosyh
>>>> d70cea0fd561  docker.io/ceph/daemon-base:latest-pacific-devel  -n client.crash.h...  4 hours ago  Up 4 hours ago         ceph-e8200504-8287-11ec-a14f-0015171590ba-crash-hcictrl01
>>>>
>>>> In the rest of the nodes:
>>>> CONTAINER ID  IMAGE                                            COMMAND               CREATED        STATUS            PORTS  NAMES
>>>> d816ee470753  docker.io/ceph/daemon-base:latest-pacific-devel  -n client.crash.h...  6 minutes ago  Up 6 minutes ago         ceph-e8200504-8287-11ec-a14f-0015171590ba-crash-hcictrl02
>>>> dde4646f4819  docker.io/ceph/daemon-base:latest-pacific-devel  -n mgr.hcictrl02....  6 minutes ago  Up 6 minutes ago         ceph-e8200504-8287-11ec-a14f-0015171590ba-mgr-hcictrl02-hfhapx
>>>> 12191a039525  docker.io/ceph/daemon-base:latest-pacific-devel  -n mon.hcictrl02 ...  6 minutes ago  Up 6 minutes ago         ceph-e8200504-8287-11ec-a14f-0015171590ba-mon-hcictrl02
>>>>
>>>> Can someone help explain how cephadm chooses this default image, or
>>>> suggest a workaround for choosing a specific image instead of the devel
>>>> images?
>>>>
>>>> Thanks in advance.
>>>> _______________________________________________
>>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>>>
>>>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


