On 12/6/22 15:58, David C wrote:
Hi All
I'm planning to upgrade a Luminous 12.2.10 cluster to Pacific 16.2.10.
The cluster is primarily used for CephFS, with a mix of Filestore and
Bluestore OSDs, mons/OSDs collocated, running on CentOS 7 nodes.
My proposed upgrade path is: Upgrade to Nautilus 14.2.22 -> Upgrade to
EL8 on the nodes (probably Rocky) -> Upgrade to Pacific
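To confirm each hop took before moving on, I'm expecting something
along these lines after the Nautilus step (a rough sketch from the
Nautilus upgrade notes, not tested here yet):

  ceph versions                          # all daemons should report 14.2.22
  ceph osd require-osd-release nautilus  # raise the minimum OSD release
  ceph mon enable-msgr2                  # switch the mons to msgr2 before going further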
I assume the cleanest way to update the node OS would be to drain the
node and remove it from the cluster, install Rocky 8, and add it back
to the cluster as effectively a new node.
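Per OSD that would mean something like the following before wiping the
node (just a sketch, <id> is a placeholder):

  ceph osd out <id>                           # let data migrate off the OSD
  # wait for backfill to finish, then on the node:
  systemctl stop ceph-osd@<id>
  ceph osd purge <id> --yes-i-really-mean-it  # drop it from the crush map, auth and osd map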
I have a relatively short maintenance window and was hoping to speed up
the OS upgrade with the following approach on each node (rough command
sketch after the list):
- back up ceph config/systemd files etc.
- set noout etc.
- deploy Rocky 8, being careful not to touch OSD block devices
- install the Nautilus binaries (ensuring I use the same version as before the OS upgrade)
- copy ceph config back over
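In command terms I'm picturing roughly this per node - what goes into
the backup is my guess at "etc.", and the package pinning still needs
checking, so treat it as a sketch rather than a tested procedure:

  # before the reinstall
  ceph osd set noout
  ceph osd set norebalance
  tar czf /somewhere/safe/ceph-backup.tar.gz /etc/ceph \
      /var/lib/ceph/bootstrap-* /var/lib/ceph/mon \
      /etc/systemd/system/ceph*   # config, keyrings, mon store (mons are collocated), unit files

  # after Rocky 8 is up, with the Nautilus el8 repo enabled
  dnf install ceph                # pinned to the same 14.2.22 build as before
  tar xzf /somewhere/safe/ceph-backup.tar.gz -C /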
In theory I could then start up the daemons and they wouldn't care
that we're now running on a different OS
Does anyone see any issues with that approach? I plan to test on a dev
cluster anyway but would be grateful for any thoughts
That would work. Just run:
systemctl enable ceph-osd.target
ceph-volume lvm activate --all
on them and you should be good to go. I have done a re-install from
16.04 to 20.04 this way and it just worked (TM).
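Once the OSDs are back in, roughly this to wrap up on each node (from
memory, so double check):

  ceph osd unset noout     # plus whatever other flags were set
  ceph -s                  # wait for HEALTH_OK before the next node
  ceph versions            # confirm everything still reports the same 14.2.22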
Gr. Stefan