Re: Single Node Cephadm Upgrade to Pacific

Sebastian,

Even though the mgr is reporting 16.2.0, I'm unable to use mgr_standby_modules for some reason.

root@prod1:~# ceph config set mgr mgr/cephadm/mgr_standby_modules false
Error EINVAL: unrecognized config option 'mgr/cephadm/mgr_standby_modules'
root@prod1:~# ceph mgr module enable mgr_standby_modules
Error ENOENT: all mgr daemons do not support module 'mgr_standby_modules', pass --force to force enablement
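
For reference, one way to see which standby-related option names the running mgr actually recognizes (a sketch only; the grep pattern is just a guess at how the option is spelled):

ceph config ls | grep -i standby
ceph config help mgr_standby_modules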

I have placed this setting in /etc/ceph/ceph.conf and restarted the cluster, but now I have no active mgr.

  cluster:
    id:     06f2d076-91d2-11eb-98b0-91a523621af4
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum prod1 (age 84s)
    mgr: no daemons active (since 39s)
    mds: cephfs:1 {0=cephfs.prod1.awlcoq=up:active} 1 up:standby
    osd: 10 osds: 10 up (since 53s), 10 in (since 2M)

  data:
    pools:   4 pools, 97 pgs
    objects: 9.19M objects, 35 TiB
    usage:   44 TiB used, 65 TiB / 109 TiB avail
    pgs:     97 active+clean

I'm not really sure where to go from here besides just staying on v15. I greatly appreciate your help!
Nathan
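
A possible way to get an active mgr back from this state (a sketch only, assuming the standard cephadm unit naming ceph-<fsid>@<daemon-type>.<daemon-id>.service, with the fsid taken from the status output above and the mgr daemon name from the ceph orch ps listing quoted below) is to revert the unrecognized option in /etc/ceph/ceph.conf and then restart the mgr's systemd unit and check its log:

# after removing the unrecognized option from /etc/ceph/ceph.conf on the host:
systemctl reset-failed ceph-06f2d076-91d2-11eb-98b0-91a523621af4@mgr.prod1.bxenuc.service
systemctl restart ceph-06f2d076-91d2-11eb-98b0-91a523621af4@mgr.prod1.bxenuc.service
journalctl -u ceph-06f2d076-91d2-11eb-98b0-91a523621af4@mgr.prod1.bxenuc.service -n 50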

________________________________
From: Sebastian Wagner
Sent: Monday, January 10, 2022 5:01 AM
To: Nathan McGuire; ceph-users@xxxxxxx
Subject: Re: Single Node Cephadm Upgrade to Pacific

Hi Nathan,

This should work, as long as you have two MGRs deployed. Please have a look at:

ceph config set mgr mgr/mgr_standby_modules false

Best,
Sebastian
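
To see how many MGRs cephadm has deployed and to request a second one, something along these lines should do (a sketch; on a single-host cluster the default scheduler may not actually be able to place a second mgr, which is the catch here):

ceph orch ls mgr
ceph orch apply mgr 2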

On 08.01.22 at 17:44, Nathan McGuire wrote:
> Hello!
>
> I'm running into an issue upgrading a cephadm-deployed cluster from v15 to v16 on a single host. I found a recent discussion at https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/WGALKHM5ZVS32IX7AVHU2TN76JTRVCRY/ and have manually updated the unit.run file to pull the v16.2.0 image for the mgr, but the other services are still running on v15.
>
> NAME                     HOST   STATUS         REFRESHED  AGE  PORTS  VERSION  IMAGE ID      CONTAINER ID
> alertmanager.prod1       prod1  running (68m)  2m ago     9M   -      0.20.0   0881eb8f169f  1d076486c019
> crash.prod1              prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  ffa06d65577a
> mds.cephfs.prod1.awlcoq  prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  21e0cbb21ee4
> mgr.prod1.bxenuc         prod1  running (59m)  2m ago     9M   -      16.2.0   24ecd6d5f14c  cf0a7d5af51d
> mon.prod1                prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  1d1a0cba5414
> node-exporter.prod1      prod1  running (68m)  2m ago     9M   -      0.18.1   e5a616e4b9cf  41ec9f0fcfb1
> osd.0                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  353d308ecc6e
> osd.1                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  2ccc28d5aa3e
> osd.2                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  a98009d4726e
> osd.3                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  aa8f84c6edb5
> osd.4                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  ccbc89a0a41c
> osd.5                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  c6cd024f2f73
> osd.6                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  e38ff4a66c7c
> osd.7                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  55ce0bcfa0e3
> osd.8                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  ac6c0c8eaac8
> osd.9                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  f5978d39b51d
> prometheus.prod1         prod1  running (68m)  2m ago     9M   -      2.18.1   de242295e225  d974a83515fd
>
> Any ideas on how to get the rest of the cluster to v16 besides just the mgr?
> Thanks!
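For completeness, the orchestrator-driven path for moving the remaining daemons to Pacific would look roughly like this (a sketch; it relies on cephadm having a standby mgr to fail over to, which is the limitation discussed above, and the exact container image path is an assumption):

ceph orch upgrade start --ceph-version 16.2.0
# or pin a specific container image instead:
ceph orch upgrade start --image docker.io/ceph/ceph:v16.2.0
# then follow progress with:
ceph orch upgrade status
ceph versions
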
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


