Possibly. Given where it stopped, it matches; however, the output of 'ceph log last cephadm' is rather empty after I stop and restart the upgrade. I think I might have attempted to troubleshoot too much... let me try a few more ideas.

Peter

On Wed, 7 Apr 2021, 14:02 Sage Weil, <sage@xxxxxxxxxxxx> wrote:
> Can you share the output of 'ceph log last cephadm'? I'm wondering if
> you are hitting https://tracker.ceph.com/issues/50114
>
> Thanks!
> s
>
> On Mon, Apr 5, 2021 at 4:00 AM Peter Childs <pchilds@xxxxxxx> wrote:
> >
> > I am attempting to upgrade a Ceph cluster that was deployed with
> > Octopus 15.2.8 and upgraded to 15.2.10 successfully. I'm now attempting
> > to upgrade to 16.2.0 Pacific, and it is not going very well.
> >
> > I am using cephadm. It looks to have upgraded the managers and then
> > stopped, without moving on to the monitors or anything else. I've tried
> > stopping the upgrade and restarting it with debug on, and I'm not
> > seeing anything to say why it is not progressing any further.
> >
> > I've also tried rebooting machines and failing the managers over, with
> > no success. I'm currently thinking it's stuck attempting to upgrade a
> > manager that does not exist.
> >
> > It's a test cluster of 16 nodes, a bit of a proof of concept, so if
> > I've got something terribly wrong I'm happy to look at redeploying.
> > (It's running on top of CentOS 7, though I'm fast heading towards
> > using something else; apart from anything else, it's not really a
> > production-ready system yet.)
> >
> > I'm just not sure where the cephadm upgrade has crashed in 16.2.0.
> >
> > Thanks in advance
> >
> > Peter
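For anyone following the thread, the stop/restart-with-debug cycle described above maps onto roughly the following commands. This is a sketch based on the cephadm documentation of that era, not a transcript of what was actually run; in particular, the mgr/cephadm/log_to_cluster_level config key is an assumption about how "debug on" was enabled here:

    # halt the in-flight upgrade
    ceph orch upgrade stop

    # assumed debug knob: make cephadm log at debug level to the cluster
    # log, so the retry records more detail
    ceph config set mgr mgr/cephadm/log_to_cluster_level debug

    # kick the upgrade off again, targeting Pacific
    ceph orch upgrade start --ceph-version 16.2.0

    # check progress and read back the cephadm cluster log
    ceph orch upgrade status
    ceph log last cephadm

    # if the orchestrator looks wedged, fail over to a standby mgr
    ceph mgr fail

With the debug level raised, 'ceph log last cephadm' after a fresh restart of the upgrade should no longer be empty, which is what makes an empty result here point towards something like the tracker issue Sage mentions.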