Re: Upgrading OS [and ceph release] nondestructively for oldish Ceph cluster

Hello Sam,

I started with a Ceph Jewel on CentOS 7 (POC) cluster in mid-2017 and am now
successfully running the latest Quincy version 17.2.6 in production.  BUT we
had to recreate all OSDs (DB/WAL) twice: once for the Filestore to BlueStore
migration, and later again for the CentOS 8 host migration.  :-/

Major stepping stones: Jewel > Luminous > Nautilus > Octopus (on CentOS 8,
later Rocky 8) > Quincy (non-cephadm) > Quincy (cephadm)
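
For reference, a minimal sketch of what one such hop looked like on our
package-based (non-cephadm) hosts. The release names and repo path are just
examples for the Nautilus > Octopus step on CentOS 8; adjust for your own
hop, and wait for HEALTH_OK between hosts:

    # stop CRUSH from rebalancing while daemons restart
    ceph osd set noout

    # point the repo at the next release (example: Nautilus -> Octopus)
    sed -i 's/rpm-nautilus/rpm-octopus/' /etc/yum.repos.d/ceph.repo
    dnf -y update 'ceph*'

    # restart daemons in the documented order: mons, mgrs, osds, mds
    systemctl restart ceph-mon.target
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target
    systemctl restart ceph-mds.target

    # once 'ceph versions' shows everything on the new release:
    ceph osd require-osd-release octopus
    ceph osd unset noout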

- Changed from CentOS 7 to CentOS 8 by completely reinstalling each host
one by one, with temporary CentOS 8 VMs standing in for mon/mgr/mds
(first sketch below)
- Upgraded from CentOS 8 to Rocky 8 via the migrate2rocky upgrade script
(the ceph-volume package was removed in the process, so it had to be
reinstalled; second sketch below)
- After adopting the cluster into cephadm, we need to run "cephadm
check-host" manually (or from rc.local) after every host reboot (third
sketch below)
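
Regarding the one-by-one reinstall: the BlueStore metadata lives on the OSD
(and DB/WAL) devices themselves, not on the OS disk, so after a fresh
install the OSDs can simply be reactivated. A rough sketch, assuming a
package-based (pre-cephadm) deployment:

    # before taking the host down
    ceph osd set noout

    # ...reinstall the OS, install the matching ceph packages, and
    # restore /etc/ceph/ceph.conf plus the needed keyrings...

    # rediscover and start all BlueStore OSDs from their LVM tags
    ceph-volume lvm activate --all

    # once the OSDs are back up and in
    ceph osd unset noout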
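The CentOS 8 > Rocky 8 step used the migrate2rocky script from the
rocky-tools repo. Roughly as follows (do check the script's current location
and options upstream first; the ceph-volume package name is simply what it
was on our hosts):

    curl -LO https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh
    chmod +x migrate2rocky.sh
    ./migrate2rocky.sh -r   # -r = actually convert the system

    # the conversion removed ceph-volume here, so put it back
    dnf -y install ceph-volume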
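And the check-host workaround, as a minimal sketch wired into rc.local (the
cephadm binary path is an assumption; check where your package puts it):

    cat >> /etc/rc.d/rc.local <<'EOF'
    # re-run cephadm's host checks after every reboot
    /usr/sbin/cephadm check-host
    EOF
    chmod +x /etc/rc.d/rc.local
    systemctl enable rc-local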

I think it's quite tricky to go to Rocky 9 hosts directly because of
missing RPMs in https://download.ceph.com/rpm-pacific/el9/
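
You can verify this yourself before committing to a path, e.g.:

    # Pacific has no el9 builds, Quincy does (at the time of writing)
    curl -sI https://download.ceph.com/rpm-pacific/el9/ | head -n 1
    curl -sI https://download.ceph.com/rpm-quincy/el9/ | head -n 1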

Maybe that helps to find a proper upgrade path.

Christoph


On Thu, 7 Sep 2023 at 17:53, Sam Skipsey <aoanla@xxxxxxxxx> wrote:

> Hello all,
>
> We've had a Nautilus [latest releases] cluster for some years now, and are
> planning the upgrade process - both moving off CentOS 7 [ideally to a
> RHEL 9-compatible spin like Alma 9 or Rocky 9] and also moving to a newer
> Ceph release [ideally Pacific or higher, to avoid needing too many later
> upgrades].
>
> As far as ceph release upgrades go, I understand the process in general.
>
> What I'm less certain about (and more nervous about from a potential data
> loss perspective) is the OS upgrade.
> For Ceph BlueStore OSDs, I assume all the relevant metadata is on the OSD
> disk [or on the separate disk configured for RocksDB etc. if you have
> NVMe], and none is on the OS itself?
> For Mons and Mgrs, what do I need to retain across the OS upgrade to have
> things "just work"? [Since they're relatively stateless, I assume mostly
> the /etc/ceph/ stuff and the ceph cluster keys?]
> For the MDS, I assume it's similar to the Mgrs? The MDS, IIRC, mainly
> works as a caching layer, so I assume there's not much state that can be
> lost permanently?
>
> Has anyone gone through this process who would be happy to share their
> experience? (There's not a lot on this on the wider internet - lots on
> upgrading ceph, much less on the OS)
>
> Sam
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


