Re: [SPAM] Ceph upgrade advice - Luminous to Pacific with OS upgrade


 



We did this (over a longer timespan) and it worked OK.

A couple things I’d add:

- I'd upgrade to Nautilus on CentOS 7 before moving to EL8. We then used AlmaLinux ELevate to move from 7 to 8 without a reinstall (rough commands below); Rocky has a similar path, I think.

- You will need to move those Filestore OSDs to BlueStore before hitting Pacific; it might even be something you can fold into the Nautilus upgrade. This takes some time if I remember correctly (rough per-OSD cycle below).

- You may need to convert the monitors from LevelDB to RocksDB too.
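
For the ELevate hop from 7 to 8, it was basically the below on each node. Treat it as a sketch from memory: the repo URL and the leapp data package names may have changed, so check the current AlmaLinux ELevate docs first.

    # on each CentOS 7 node, once the cluster is healthy on Nautilus
    yum install -y https://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm
    yum install -y leapp-upgrade leapp-data-rocky    # or leapp-data-almalinux
    leapp preupgrade    # then review /var/log/leapp/leapp-report.txt and clear any inhibitors
    leapp upgrade
    reboot              # node comes back up on EL8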
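
And for the Filestore to BlueStore piece, the per-OSD replace cycle is roughly the below. The OSD id and device are placeholders, and if you have separate journal/DB devices there is a bit more to it, so check the BlueStore migration docs before running anything.

    ID=<osd-id>            # one OSD (or one failure domain) at a time, cluster healthy
    DEV=/dev/<device>
    ceph osd out $ID
    while ! ceph osd safe-to-destroy osd.$ID ; do sleep 60 ; done   # wait for data to move off
    systemctl stop ceph-osd@$ID
    umount /var/lib/ceph/osd/ceph-$ID
    ceph-volume lvm zap $DEV --destroy
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm create --bluestore --data $DEV --osd-id $ID     # recreate as BlueStore with the same id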

Sent from my iPhone

> On Dec 6, 2022, at 7:59 AM, David C <dcsysengineer@xxxxxxxxx> wrote:
> 
> Hi All
> 
> I'm planning to upgrade a Luminous 12.2.10 cluster to Pacific 16.2.10,
> cluster is primarily used for CephFS, mix of Filestore and Bluestore
> OSDs, mons/osds collocated, running on CentOS 7 nodes
> 
> My proposed upgrade path is: Upgrade to Nautilus 14.2.22 -> Upgrade to
> EL8 on the nodes (probably Rocky) -> Upgrade to Pacific
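
In case it helps, the Luminous -> Nautilus hop itself is the usual rolling sequence; from memory it is roughly the below (the fs name is a placeholder, and do check the Nautilus release notes for the full details):

    ceph osd set noout
    # upgrade packages and restart ceph-mon on each mon host, one at a time
    # then upgrade/restart the ceph-mgr daemons
    # then upgrade/restart the OSDs host by host, waiting for HEALTH_OK in between
    # for CephFS, drop to a single active MDS first (ceph fs set <fs_name> max_mds 1),
    #   restart the MDS daemons, then restore max_mds afterwards
    ceph osd require-osd-release nautilus   # once every OSD is on 14.2.x
    ceph mon enable-msgr2                   # once every mon is on 14.2.x
    ceph osd unset noout
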
> 
> I assume the cleanest way to update the node OS would be to drain the
> node and remove from the cluster, install Rocky 8, add back to cluster
> as effectively a new node
> 
> I have a relatively short maintenance window and was hoping to speed
> up OS upgrade with the following approach on each node:
> 
> - back up ceph config/systemd files etc.
> - set noout etc.
> - deploy Rocky 8, being careful not to touch OSD block devices
> - install Nautilus binaries (ensuring I use same version as pre OS upgrade)
> - copy ceph config back over
> 
> In theory I could then start up the daemons and they wouldn't care
> that we're now running on a different OS
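
They generally don't, as long as the package versions match. The one extra step is that the OSDs won't come back just from the config files; you have to re-activate them with ceph-volume after the reinstall. Roughly (untested on my side for the ceph-disk/Filestore case, and the partition path is a placeholder):

    # after the Rocky 8 install, Nautilus packages, and restoring /etc/ceph
    ceph-volume lvm activate --all                   # brings up the LVM-based (BlueStore) OSDs
    ceph-volume simple scan /dev/<data-partition>    # once per old ceph-disk/Filestore OSD
    ceph-volume simple activate --all                # activates whatever the scan found
    systemctl start ceph.target

Also, with the mons collocated: if /var/lib/ceph sits on the disk you reinstall, that mon's store goes with it, so either preserve /var/lib/ceph/mon in the backup or plan to re-add the mon and let it resync from the others.
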
> 
> Does anyone see any issues with that approach? I plan to test on a dev
> cluster anyway but would be grateful for any thoughts
> 
> Thanks,
> David

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



