Re: Ceph upgrade advice - Luminous to Pacific with OS upgrade

If it can help, I recently updated my Ceph cluster (composed of 3
mon-mgr nodes and n OSD nodes) from Nautilus on CentOS 7 to Pacific on
CentOS 8 Stream.

First I reinstalled the mon-mgr nodes with CentOS 8 Stream (removing them
from the cluster and then re-adding them with the new operating system).
This was needed because the mgr in Octopus only runs on RHEL 8 and its
derivatives.

Then I upgraded the cluster to Octopus (so the mon-mgr nodes were running
CentOS 8 Stream and the OSD nodes were still on CentOS 7).

Then I reinstalled each OSD node with CentOS 8 Stream, without draining the
node [*].

Then I upgraded the cluster from Octopus to Pacific.
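
The release jumps themselves are the usual rolling restart sequence from the
upgrade notes. A sketch for the Pacific step (assuming a package-based,
non-cephadm install; the Nautilus -> Octopus step is the same with the
release name changed):

ceph osd set noout
# upgrade the packages on every host, then restart daemons in order:
# mons first, then mgrs, then OSDs, then MDS/RGW, one host at a time,
# letting the cluster settle back to health in between
systemctl restart ceph-mon.target      # on each mon host, one at a time
systemctl restart ceph-mgr.target      # on each mgr host
systemctl restart ceph-osd.target      # on each OSD host, one at a time
ceph osd require-osd-release pacific   # once all daemons are running Pacific
ceph osd unset noout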

[*]
ceph osd set noout
Reinstall the node with CentOS 8 Stream (leaving the OSD data devices untouched)
Install the Ceph packages
ceph-volume lvm activate --all
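
Spelling out the "install the Ceph packages" step a bit: the packages can come
from the CentOS Storage SIG (the release package name below is an assumption
on my part; use whatever repo the cluster was originally installed from), and
don't forget to clear the flag afterwards:

dnf install -y centos-release-ceph-octopus   # Storage SIG repo package (assumption; match your repo setup)
dnf install -y ceph
ceph-volume lvm activate --all               # the LVM tags on the devices are enough to bring the OSDs back
ceph osd unset noout                         # once the reactivated OSDs are back up and in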


Cheers, Massimo


On Tue, Dec 6, 2022 at 3:58 PM David C <dcsysengineer@xxxxxxxxx> wrote:

> Hi All
>
> I'm planning to upgrade a Luminous 12.2.10 cluster to Pacific 16.2.10,
> cluster is primarily used for CephFS, mix of Filestore and Bluestore
> OSDs, mons/osds collocated, running on CentOS 7 nodes
>
> My proposed upgrade path is: Upgrade to Nautilus 14.2.22 -> Upgrade to
> EL8 on the nodes (probably Rocky) -> Upgrade to Pacific
>
> I assume the cleanest way to update the node OS would be to drain the
> node and remove from the cluster, install Rocky 8, add back to cluster
> as effectively a new node
>
> I have a relatively short maintenance window and was hoping to speed
> up OS upgrade with the following approach on each node:
>
> - back up ceph config/systemd files etc.
> - set noout etc.
> - deploy Rocky 8, being careful not to touch OSD block devices
> - install Nautilus binaries (ensuring I use same version as pre OS upgrade)
> - copy ceph config back over
>
> In theory I could then start up the daemons and they wouldn't care
> that we're now running on a different OS
>
> Does anyone see any issues with that approach? I plan to test on a dev
> cluster anyway but would be grateful for any thoughts
>
> Thanks,
> David
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


