Cephadm migration

Hello everyone!

We operate a small cluster consisting of 1 combined monitor/manager, 3 OSD hosts and 1 RGW. The cluster was initially installed with ceph-deploy on Nautilus (14.2.19), then upgraded to Octopus (15.2.16) and finally to Pacific (16.2.9). Since ceph-deploy no longer works, we need to migrate the cluster to cephadm. We did so following the official Ceph documentation.
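
For context, the adoption steps we ran for the daemons come almost verbatim from that doc (only our daemon names substituted), roughly:

  # on svtcephmonv1
  cephadm adopt --style legacy --name mon.svtcephmonv1
  cephadm adopt --style legacy --name mgr.svtcephmonv1
  # on each OSD host, for every OSD id hosted there
  cephadm adopt --style legacy --name osd.0    # ... and so on through osd.5

For the RGW, the doc has us deploy a new daemon through the orchestrator instead of adopting the old one, and that is where things go wrong.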

Everything went fine as far as the mon, mgr and OSDs are concerned, but we ran into
serious trouble when migrating the RGW:

  - the RGW podman image is a very exotic version (see below),
  - the old service comes back up after a while, even though it was stopped and removed as explained in the doc (see the systemctl sketch right after this list),
  - we never managed to configure the new gateway with a YAML file (see the spec sketch further below).
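
About the second point, the "stop and remove" step from the doc boils down to something like this on the RGW host (the exact names of the legacy systemd units are an assumption based on a standard ceph-deploy install, so they may differ):

  # stop the legacy radosgw and keep it from being restarted
  systemctl stop ceph-radosgw.target
  systemctl disable ceph-radosgw.target
  systemctl disable ceph-radosgw@rgw.svtcephrgwv1   # instance name assumed
  # remove the old daemon data, as the doc describes
  rm -rf /var/lib/ceph/radosgw/ceph-*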

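About the third point, the kind of spec we have been trying to apply looks like this (service id, host and port match the orchestrator output below; a sketch rather than our exact file):

  # rgw.yaml
  service_type: rgw
  service_id: testrgw
  placement:
    hosts:
      - svtcephrgwv1
  spec:
    rgw_frontend_port: 80

applied with:

  ceph orch apply -i rgw.yaml
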
Versions of the nodes after migration:

##############
monitor / manager : cephadm inspect-image
##############

{
    "ceph_version": "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",     "image_id": "32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",
    "repo_digests": [
"quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd",
"quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa"
    ]
}


##############
OSDs : cephadm inspect-image
##############

{
    "ceph_version": "ceph version 16.2.10 (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)",     "image_id": "32214388de9de06e6f5a0a6aa9591ac10c72cbe1bdd751b792946d968cd502d6",
    "repo_digests": [
"quay.io/ceph/ceph@sha256:2b68483bcd050472a18e73389c0e1f3f70d34bb7abf733f692e88c935ea0a6bd",
"quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa"
    ]
}

##############
RGW : cephadm inspect-image
##############

{
    "ceph_version": "ceph version 16.2.5-387-g7282d81d (7282d81d2c500b5b0e929c07971b72444c6ac424) pacific (stable)",     "image_id": "41387741ad94630f1c58b94fdba261df8d8e3dc2d4f70ad6201739764f43eb2c",
    "repo_digests": [
"docker.io/ceph/daemon-base@sha256:a038c6dc35064edff40bb7e824783f1bbd325c888e722ec5e814671406216ad5"
    ]
}
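
Our guess is that this daemon was pulled from an old default image. Would forcing the image be the right fix? Something along these lines (daemon name taken from the orch ps output below, image tag assumed to match the 16.2.10 that the other daemons report):

  # make the orchestrator's default container image explicit
  ceph config set global container_image quay.io/ceph/ceph:v16.2.10
  # redeploy only the RGW daemon with that image
  ceph orch daemon redeploy rgw.testrgw.svtcephrgwv1.invwmo quay.io/ceph/ceph:v16.2.10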

=============
Orchestrator
=============
ceph orch ps :
-------------
NAME                             HOST           PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID      CONTAINER ID
mgr.svtcephmonv1                 svtcephmonv1          running (5h)  4m ago     2d   341M     -        16.2.10    32214388de9d  0464e2e0c71b
mon.svtcephmonv1                 svtcephmonv1          running (5h)  4m ago     2d   262M     2048M    16.2.10    32214388de9d  28aa77685767
osd.0                            svtcephosdv01         running (5h)  4m ago     2d   184M     4096M    16.2.10    32214388de9d  c5a4a1091cba
osd.1                            svtcephosdv02         running (5h)  4m ago     2d   182M     4096M    16.2.10    32214388de9d  080c8f2b3eca
osd.2                            svtcephosdv03         running (5h)  4m ago     2d   189M     4096M    16.2.10    32214388de9d  b58b549a932d
osd.3                            svtcephosdv01         running (5h)  4m ago     2d   245M     4096M    16.2.10    32214388de9d  9d2f781ae290
osd.4                            svtcephosdv02         running (5h)  4m ago     2d   233M     4096M    16.2.10    32214388de9d  6296db28f1d4
osd.5                            svtcephosdv03         running (5h)  4m ago     2d   213M     4096M    16.2.10    32214388de9d  deb58248e520
rgw.testrgw.svtcephrgwv1.invwmo  svtcephrgwv1   *:80   error         67s ago    23h  -        -        <unknown>  <unknown>     <unknown>
-------------
ceph orch host ls  :
-------------
HOST           ADDR           LABELS  STATUS
svtcephmonv1   192.168.90.51
svtcephosdv01  192.168.90.54
svtcephosdv02  192.168.90.55
svtcephosdv03  192.168.90.56
svtcephrgwv1   192.168.90.57  RGW
5 hosts in cluster
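
We can also provide the logs of the failing RGW daemon; we are collecting them along these lines (daemon name from the orch ps output above, <fsid> to be replaced by our cluster fsid):

  # on the monitor: recent cephadm/orchestrator events
  ceph log last cephadm
  # on svtcephrgwv1: logs of the failing container
  cephadm logs --name rgw.testrgw.svtcephrgwv1.invwmo
  # or equivalently via systemd
  journalctl -u ceph-<fsid>@rgw.testrgw.svtcephrgwv1.invwmo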

Any help would be welcome, and we can send any further information that would help solve the problem.