Re: Ceph upgrade advice - Luminous to Pacific with OS upgrade


We went from ceph-deploy + CentOS 7 + Nautilus to cephadm + Rocky 8 + Pacific on a couple of clusters, using ELevate as one of the steps, and passed through Octopus along the way. ELevate wasn't perfect for us either, but it was able to get the job done. We had to test it carefully on the test clusters multiple times to get the procedure just right. Even then there were some bumps, but we were able to get things finished up.
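For anyone following the same route, the ELevate leg looks roughly like this on each node. This is a sketch based on the ELevate quickstart, not our exact runbook; the repo URL and package names may have changed, so verify them against the current ELevate documentation first:

```shell
# Install the ELevate release package and the leapp upgrade tooling
yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm
yum install -y leapp-upgrade leapp-data-rocky   # leapp-data-almalinux for an Alma target

# Dry run first: writes a report of blockers to /var/log/leapp/leapp-report.txt
leapp preupgrade

# Fix any inhibitors the report lists (third-party repos such as Ceph's
# often need to be re-added or their packages upgraded by hand), then:
leapp upgrade
reboot   # boots into the leapp upgrade initramfs, then into EL8
```

The preupgrade report is where most of the surprises show up, which is why testing the procedure repeatedly on a test cluster paid off for us.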

Thanks,
Kevin

________________________________________
From: Wolfpaw - Dale Corse <dale@xxxxxxxxxxx>
Sent: Tuesday, December 6, 2022 8:18 AM
To: 'David C'
Cc: 'ceph-users'
Subject:  Re: Ceph upgrade advice - Luminous to Pacific with OS upgrade



Hi David,

  > Good to hear you had success with the ELevate tool; I'd looked at that but it seemed a bit risky. The tool supports Rocky so I may give it a look.

ELevate wasn't perfect - we had to manually upgrade some packages from outside repos (Ceph, OpenNebula and Salt, if memory serves). That said, it was certainly manageable.

> This one is surprising since, in theory, Pacific still supports Filestore; there is at least one thread on the list where someone upgraded to Pacific and is still running some Filestore OSDs -
> on the other hand, there's also a recent thread where someone ran into problems and was forced to upgrade to BlueStore - did you experience issues yourself, or was this advice you
> picked up? I do ultimately want to get all my OSDs on BlueStore, but was hoping to do that after the Ceph version upgrade.

Sorry - I was mistaken about the LevelDB-to-RocksDB and Filestore-to-BlueStore upgrades being required for Pacific. Apologies!
I do remember doing all of ours when we upgraded from Luminous -> Nautilus, but I can't remember why, to be honest. It might have been advice at the time, or something I read while looking into the upgrade :)

Cheers,
D.

-----Original Message-----
From: David C [mailto:dcsysengineer@xxxxxxxxx]
Sent: Tuesday, December 6, 2022 8:56 AM
To: Wolfpaw - Dale Corse <dale@xxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: Ceph upgrade advice - Luminous to Pacific with OS upgrade

Hi Wolfpaw, thanks for the response

> - I'd upgrade to Nautilus on CentOS 7 before moving to EL8. We then used
> AlmaLinux ELevate to move from 7 to 8 without a reinstall. Rocky has a
> similar path I think.
>

Good to hear you had success with the ELevate tool; I'd looked at that but it seemed a bit risky. The tool supports Rocky so I may give it a look.

>
> - You will need to move those Filestore OSDs to BlueStore before
> hitting Pacific; it might even be part of the Nautilus upgrade. This
> takes some time if I remember correctly.
>

This one is surprising since, in theory, Pacific still supports Filestore; there is at least one thread on the list where someone upgraded to Pacific and is still running some Filestore OSDs - on the other hand, there's also a recent thread where someone ran into problems and was forced to upgrade to BlueStore. Did you experience issues yourself, or was this advice you picked up? I do ultimately want to get all my OSDs on BlueStore, but was hoping to do that after the Ceph version upgrade.
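For what it's worth, whenever the conversion does happen, the documented per-OSD route is roughly the following. `ID` and `DEV` are placeholders for a hypothetical OSD id and its backing device, and this assumes the cluster can afford to rebuild one OSD at a time:

```shell
ID=12          # hypothetical OSD id to convert
DEV=/dev/sdX   # hypothetical backing device for that OSD

ceph osd out $ID                            # drain PGs off the OSD
while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done
systemctl stop ceph-osd@$ID
ceph osd destroy $ID --yes-i-really-mean-it
ceph-volume lvm zap $DEV --destroy          # wipe the old Filestore data
ceph-volume lvm create --bluestore --data $DEV --osd-id $ID
```

Reusing the same OSD id with `--osd-id` keeps the CRUSH position stable, so only the rebuilt OSD backfills rather than the whole tree reshuffling.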


> - You may need to upgrade the monitors to RocksDB too.


Thanks, I wasn't aware of this - I suppose I'll do that when I'm on Nautilus.


On Tue, Dec 6, 2022 at 3:22 PM Wolfpaw - Dale Corse <dale@xxxxxxxxxxx>
wrote:

> We did this (over a longer timespan).. it worked ok.
>
> A couple things I’d add:
>
> - I'd upgrade to Nautilus on CentOS 7 before moving to EL8. We then
> used AlmaLinux ELevate to move from 7 to 8 without a reinstall. Rocky
> has a similar path I think.
>
> - You will need to move those Filestore OSDs to BlueStore before
> hitting Pacific; it might even be part of the Nautilus upgrade. This
> takes some time if I remember correctly.
>
> - You may need to upgrade the monitors to RocksDB too.
>
> Sent from my iPhone
>
> > On Dec 6, 2022, at 7:59 AM, David C <dcsysengineer@xxxxxxxxx> wrote:
> >
> > Hi All
> >
> > I'm planning to upgrade a Luminous 12.2.10 cluster to Pacific
> > 16.2.10, cluster is primarily used for CephFS, mix of Filestore and
> > Bluestore OSDs, mons/osds collocated, running on CentOS 7 nodes
> >
> > My proposed upgrade path is: Upgrade to Nautilus 14.2.22 -> Upgrade
> > to
> > EL8 on the nodes (probably Rocky) -> Upgrade to Pacific
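For the Luminous -> Nautilus leg of that path, the documented ordering is mons first, then mgrs, then OSDs, then MDS daemons. Very roughly (a sketch only - check the Nautilus upgrade notes for the full procedure and the per-step health checks):

```shell
ceph osd set noout                 # avoid rebalancing during restarts

# After upgrading packages on each host, restart daemons in order:
systemctl restart ceph-mon.target  # all mons first, waiting for quorum each time
systemctl restart ceph-mgr.target  # then managers
systemctl restart ceph-osd.target  # then OSDs, one host at a time
# ... then MDS daemons for CephFS ...

ceph osd require-osd-release nautilus   # once every OSD is on 14.x
ceph osd unset noout
```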
> >
> > I assume the cleanest way to update the node OS would be to drain
> > the node and remove from the cluster, install Rocky 8, add back to
> > cluster as effectively a new node
> >
> > I have a relatively short maintenance window and was hoping to speed
> > up OS upgrade with the following approach on each node:
> >
> > - back up ceph config/systemd files etc.
> > - set noout etc.
> > - deploy Rocky 8, being careful not to touch OSD block devices
> > - install Nautilus binaries (ensuring I use same version as pre OS
> upgrade)
> > - copy ceph config back over
> >
> > In theory I could then start up the daemons and they wouldn't care
> > that we're now running on a different OS
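The per-node procedure described above could be sketched as follows - a rough outline, assuming LVM-backed OSDs (so `ceph-volume lvm activate --all` can rediscover them) and that the backup tarball is copied off the node before the wipe:

```shell
# Before the reinstall
ceph osd set noout
ceph osd set norebalance
tar czf ceph-node-backup.tgz /etc/ceph /var/lib/ceph/bootstrap-*  # copy this off-node!

# ... reinstall Rocky 8, leaving the OSD block devices untouched ...

# After the reinstall: install the SAME Nautilus point release as before
dnf install -y ceph-osd             # pinned to the pre-upgrade 14.2.x version
tar xzf ceph-node-backup.tgz -C /
ceph-volume lvm activate --all      # rediscovers the OSD LVs and starts the daemons
ceph osd unset norebalance
ceph osd unset noout
```

Note that pre-Nautilus Filestore OSDs may have been created with ceph-disk rather than ceph-volume, in which case the activation step differs - worth confirming on the dev cluster first.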
> >
> > Does anyone see any issues with that approach? I plan to test on a
> > dev cluster anyway but would be grateful for any thoughts
> >
> > Thanks,
> > David
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
>
>



