Re: ceph orch upgrade stuck between 16.2.7 and 16.2.13




Literally minutes before your email popped up in my inbox, I had announced that I would upgrade our cluster from 16.2.10 to 16.2.13 tomorrow. Now I'm hesitating. ;-) I would start by looking at the nodes where the OSD upgrade failed and checking the cephadm.log as well as the syslog there. Did you see progress messages in the mgr log for the successfully upgraded OSDs (or MONs/MGRs)?
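For reference, this is roughly where I would look; the fsid and OSD id below are placeholders, and the paths assume a standard containerized cephadm deployment:

```shell
# On a host whose OSD failed to upgrade:
less /var/log/ceph/cephadm.log               # cephadm's own per-host log
journalctl -u ceph-<fsid>@osd.<id> -n 200    # recent messages from the daemon's systemd unit

# From any node with an admin keyring: recent cephadm entries in the cluster log
ceph log last 100 debug cephadm
```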

Quoting Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>:


A healthy 16.2.7 cluster is supposed to be upgraded to 16.2.13.

ceph orch upgrade start --ceph-version 16.2.13

upgraded the MONs, MGRs, and 25% of the OSDs, and is now stuck.

We tried several rounds of "ceph orch upgrade stop" followed by starting it again.
We also failed over the active MGR, but there was no progress.
We enabled debug logging with "ceph config set mgr mgr/cephadm/log_to_cluster_level debug", but it only shows that the upgrade starts:

2023-08-15T09:05:58.548896+0200 mgr.cephmon01 [INF] Upgrade: Started with target
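With the log level raised like that, the cephadm messages can also be followed live, which is a common way to see at which step the upgrade loop stalls (standard commands, nothing cluster-specific):

```shell
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug    # stream cephadm cluster-log messages, including debug level
```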

How can we check what is happening (or not happening) here?
How do we get cephadm to complete the task?
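A few standard orchestrator steps that are often suggested in this situation, sketched below; whether they unstick this particular upgrade is of course not guaranteed:

```shell
ceph orch upgrade status    # check whether a target image is actually set
ceph orch ps --refresh      # force the mgr to refresh its daemon inventory
ceph mgr fail               # fail over to a standby mgr, restarting the cephadm module
ceph orch upgrade start --ceph-version 16.2.13    # re-issue the upgrade target
```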

Current status is:

# ceph orch upgrade status
{
    "target_image": "",
    "in_progress": true,
    "which": "Upgrading all daemon types on all hosts",
    "services_complete": [],
    "progress": "",
    "message": "",
    "is_paused": false
}

# ceph -s
  cluster:
    id:     3098199a-c7f5-4baf-901c-f178131be6f4
    health: HEALTH_WARN
            There are daemons running an older version of ceph

  services:
    mon: 5 daemons, quorum cephmon02,cephmon01,cephmon03,cephmon04,cephmon05 (age 4d)
    mgr: cephmon03(active, since 8d), standbys: cephmon01, cephmon02
    mds: 2/2 daemons up, 1 standby, 2 hot standby
    osd: 202 osds: 202 up (since 11d), 202 in (since 13d)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    volumes: 2/2 healthy
    pools:   11 pools, 4961 pgs
    objects: 98.84M objects, 347 TiB
    usage:   988 TiB used, 1.3 PiB / 2.3 PiB avail
    pgs:     4942 active+clean
             19   active+clean+scrubbing+deep

  io:
    client:   89 MiB/s rd, 598 MiB/s wr, 25 op/s rd, 157 op/s wr

  progress:
    Upgrade to (0s)

# ceph versions
{
    "mon": {
        "ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)": 5
    },
    "mgr": {
        "ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)": 3
    },
    "osd": {
        "ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)": 48,
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 154
    },
    "mds": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 5
    },
    "rgw": {
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)": 56,
        "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)": 161
    }
}

Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

