Thanks, Matthew, for the update.

The upgrade failed for some random, weird reasons. Checking further, Ceph's status mostly shows "HEALTH_OK", and at times it gives certain warnings, but I think that is okay. But what if we see a version mismatch between the daemons, i.e. a few services have been upgraded and the remaining ones could not be upgraded? In that state, we see two options:

- Retrying the upgrade activity (to Pacific) - it might work this time.
- Going back to the older version (Octopus) - is this possible, and if yes, then how?

*Other query:*
What if the complete cluster goes down, i.e. the mons crash and the other daemons crash as well - can we try to restore the data on the OSDs, maybe by reusing the OSDs in another or a new Ceph cluster, or something else to save the data?

Please suggest!

Best Regards,
Lokendra

On Fri, Sep 3, 2021 at 9:04 PM Matthew Vernon <mvernon@xxxxxxxxxxxxx> wrote:

> On 02/09/2021 09:34, Lokendra Rathour wrote:
>
> > We have deployed the Ceph Octopus release using Ceph-Ansible.
> > During the upgrade from Octopus to Pacific release we saw that the
> > upgrade failed.
>
> I'm afraid you'll need to provide some more details (e.g. ceph -s
> output) on the state of your cluster; I'd expect a cluster mid-upgrade
> to still be operational, so you should still be able to access your OSDs.
>
> Regards,
>
> Matthew
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
~ Lokendra
www.inertiaspeaks.com
www.inertiagroups.com
skype: lokendrarathour

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
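
For the version-mismatch question raised above, a minimal sketch of the stock Ceph CLI commands that show where an interrupted Octopus-to-Pacific upgrade stands (run from any node with an admin keyring; no cluster-specific names are assumed):

    # Overall cluster state -- the "ceph -s" output Matthew asked for
    ceph -s

    # Expanded detail on any health warnings
    ceph health detail

    # Version breakdown per daemon type; a half-upgraded cluster will
    # list both Octopus (15.2.x) and Pacific (16.2.x) entries here
    ceph versions

    # Per-OSD version report, to see exactly which OSDs are lagging behind
    ceph tell osd.* version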
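
On the question of reusing the OSDs to save the data: the sketch below only inspects and reactivates existing OSD volumes on a single host; it is not a recovery procedure for a cluster that has lost its monitors (that is exactly the question for the list), and the OSD id and data path shown are placeholders:

    # Show the LVM-backed OSDs ceph-volume knows about on this host,
    # with their OSD id, fsid, and underlying block device
    ceph-volume lvm list

    # Re-activate previously prepared OSDs on this host (mounts the OSD
    # directories and starts the systemd units) without recreating them
    ceph-volume lvm activate --all

    # Offline inspection of a stopped OSD's placement groups
    # (example OSD id 0; the OSD daemon must not be running)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-pgs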