Hi Nathan,

Should work, as long as you have two MGRs deployed. Please have a look at

  ceph config set mgr mgr_standby_modules false

so that the standby mgr's modules don't conflict with the active mgr on the same host. A rough sketch of the full command sequence follows below the quoted message.

Best,
Sebastian

On 08.01.22 at 17:44, Nathan McGuire wrote:
> Hello!
>
> I'm running into an issue with upgrading Cephadm v15 to v16 on a single host. I've found a recent discussion at https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/WGALKHM5ZVS32IX7AVHU2TN76JTRVCRY/ and have manually updated the unit.run to pull the v16.2.0 image for the mgr, but the other services are still running on v15.
>
> NAME                     HOST   STATUS         REFRESHED  AGE  PORTS  VERSION  IMAGE ID      CONTAINER ID
> alertmanager.prod1       prod1  running (68m)  2m ago     9M   -      0.20.0   0881eb8f169f  1d076486c019
> crash.prod1              prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  ffa06d65577a
> mds.cephfs.prod1.awlcoq  prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  21e0cbb21ee4
> mgr.prod1.bxenuc         prod1  running (59m)  2m ago     9M   -      16.2.0   24ecd6d5f14c  cf0a7d5af51d
> mon.prod1                prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  1d1a0cba5414
> node-exporter.prod1      prod1  running (68m)  2m ago     9M   -      0.18.1   e5a616e4b9cf  41ec9f0fcfb1
> osd.0                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  353d308ecc6e
> osd.1                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  2ccc28d5aa3e
> osd.2                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  a98009d4726e
> osd.3                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  aa8f84c6edb5
> osd.4                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  ccbc89a0a41c
> osd.5                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  c6cd024f2f73
> osd.6                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  e38ff4a66c7c
> osd.7                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  55ce0bcfa0e3
> osd.8                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  ac6c0c8eaac8
> osd.9                    prod1  running (68m)  2m ago     9M   -      15.2.13  2cf504fded39  f5978d39b51d
> prometheus.prod1         prod1  running (68m)  2m ago     9M   -      2.18.1   de242295e225  d974a83515fd
>
> Any ideas on how to get the rest of the cluster to v16 besides just mgr?
> Thanks!
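P.S. For completeness, here is a rough, untested sketch of the command sequence I have in mind (assumes the cephadm orchestrator is active and 16.2.0 is the target; adjust the version, or pass --image if you need a specific container image):

  # Ask cephadm for a second mgr so the upgrade can fail over; on a single
  # host you may need to adjust the placement so both daemons land on prod1.
  ceph orch apply mgr 2

  # Keep the standby mgr's modules from binding the same ports as the
  # active mgr on the same host.
  ceph config set mgr mgr_standby_modules false

  # Kick off the orchestrated upgrade to Pacific.
  ceph orch upgrade start --ceph-version 16.2.0

  # Watch progress.
  ceph orch upgrade status
  ceph orch ps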