Re: Old MDS container version when: Ceph orch apply mds

Thanks for noting this. I just imported our last cluster and couldn't get
ceph-exporter to start.  I noticed that the images it was using for
node-exporter and ceph-exporter were not the same as on the other clusters!
I wish this were in the adoption documentation.  I have a running list of all
the things I have to add/do when adopting a cluster... just another one for
the list!
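
In case it helps anyone else after an adoption, one quick way to see which
container images cephadm is configured to use (just a sketch, output will of
course differ per cluster):

ceph config dump | grep container_image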

Thanks again!

-Brent

-----Original Message-----
From: Eugen Block <eblock@xxxxxx> 
Sent: Friday, August 2, 2024 3:02 AM
To: ceph-users@xxxxxxx
Subject: Re: Old MDS container version when: Ceph orch apply mds

Hi,

it sounds like the mds container_image is not configured properly. You can
set it via:

ceph config set mds container_image quay.io/ceph/ceph:v18.2.2

or just set it globally for all ceph daemons:

ceph config set global container_image quay.io/ceph/ceph:v18.2.2

If you bootstrap a fresh cluster, the image is set globally for you, but
that doesn't happen during an upgrade from a non-cephadm cluster, which means
the MDS daemons have to be redeployed with the correct image.
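
Roughly something like this should do it (untested here, adjust the service
name "mds.datafs" and daemon names to your own output below):

ceph config set mds container_image quay.io/ceph/ceph:v18.2.2
# verify what is configured now
ceph config get mds container_image
# redeploy the whole mds service (or per daemon: ceph orch daemon redeploy <name>)
ceph orch redeploy mds.datafs
# confirm the daemons come back on 18.2.2
ceph orch ps --daemon-type mds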

Regards,
Eugen


Quoting opositorvlc@xxxxxxxx:

> Hi All,
> I migrated my Ceph 18.2.2 cluster from a non-cephadm configuration.
> Everything went fine except that the MDS service was deployed with an old
> version: 17.0.0. I'm trying to deploy the MDS daemons using ceph orch, but
> Ceph always downloads an old MDS image from Docker.
>
> How can I deploy the MDS service with the same 18.2.2 version as the
> rest of the services?
>
> [root@master1 ~]# ceph orch apply mds datafs --placement="2 master1 master2"
>
> [root@master1 ~]# ceph orch ps
> NAME                       HOST     PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION                IMAGE ID      CONTAINER ID
> mds.datafs.master1.gcpovr  master1         running (36m)  6m ago     36m  37.2M    -        17.0.0-7183-g54142666  75e3d7089cea  96682779c7ad
> mds.datafs.master2.oqaxuy  master2         running (36m)  6m ago     36m  33.1M    -        17.0.0-7183-g54142666  75e3d7089cea  a9a647f87c83
> mgr.master                 master1         running (16h)  6m ago     17h  448M     -        18.2.2                 3c937764e6f5  70f06fa05b70
> mgr.master2                master2         running (16h)  6m ago     17h  524M     -        18.2.2                 3c937764e6f5  2d0d5376d8b3
> mon.master                 master1         running (16h)  6m ago     17h  384M     2048M    18.2.2                 3c937764e6f5  66a65017ce29
> mon.master2                master2         running (16h)  6m ago     17h  380M     2048M    18.2.2                 3c937764e6f5  51d783a9e36c
> osd.0                      osd00           running (16h)  3m ago     17h  432M     4096M    18.2.2                 3c937764e6f5  fedff66f5ed2
> osd.1                      osd00           running (16h)  3m ago     17h  475M     4096M    18.2.2                 3c937764e6f5  24e24a1a22e6
> osd.2                      osd00           running (16h)  3m ago     17h  516M     4096M    18.2.2                 3c937764e6f5  ccd05451b739
> osd.3                      osd00           running (16h)  3m ago     17h  454M     4096M    18.2.2                 3c937764e6f5  f6d8f13c8aaf
> osd.4                      master1         running (16h)  6m ago     17h  525M     4096M    18.2.2                 3c937764e6f5  a2dcf9f1a9b7
> osd.5                      master2         running (16h)  6m ago     17h  331M     4096M    18.2.2                 3c937764e6f5  b0011e8561a4
>
> [root@master1 ~]# ceph orch ls
> NAME        PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
> mds.datafs             2/2  6m ago     46s  master1;master2;count:2
> mgr                    2/0  6m ago     -    <unmanaged>
> mon                    2/0  6m ago     -    <unmanaged>
> osd                      6  6m ago     -    <unmanaged>
>
> [root@master1 ~]# ceph versions
> {
>     "mon": {
>         "ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)": 2
>     },
>     "mgr": {
>         "ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)": 2
>     },
>     "osd": {
>         "ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)": 6
>     },
>     "mds": {
>         "ceph version 17.0.0-7183-g54142666 (54142666e5705ced88e3e2d91ddc0ff29867a362) quincy (dev)": 2
>     },
>     "overall": {
>         "ceph version 17.0.0-7183-g54142666 (54142666e5705ced88e3e2d91ddc0ff29867a362) quincy (dev)": 2,
>         "ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)": 10
>     }
> }
>
> [root@master1 ~]# podman images
> REPOSITORY                        TAG                  IMAGE ID      CREATED        SIZE
> quay.io/ceph/ceph                 v18.2.2              3c937764e6f5  7 weeks ago    1.28 GB
> quay.io/ceph/ceph                 v18                  3c937764e6f5  7 weeks ago    1.28 GB
> registry.access.redhat.com/ubi8   latest               c70d72aaebb4  3 months ago   212 MB
> quay.io/ceph/ceph                 v16                  0d668911f040  23 months ago  1.27 GB
> quay.io/ceph/ceph-grafana         8.3.5                dad864ee21e9  2 years ago    571 MB
> quay.io/prometheus/prometheus     v2.33.4              514e6a882f6e  2 years ago    205 MB
> quay.io/prometheus/node-exporter  v1.3.1               1dbe0e931976  2 years ago    22.3 MB
> quay.io/prometheus/alertmanager   v0.23.0              ba2b418f427c  2 years ago    58.9 MB
> docker.io/ceph/daemon-base        latest-master-devel  75e3d7089cea  2 years ago    1.29 GB


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


