Dear list,
Thanks for the answers; it looks like we have worried about this far
too much ;-)
Cheers
/Simon
On 26/10/2022 22:21, shubjero wrote:
We've done 14.04 -> 16.04 -> 18.04 -> 20.04, all at various stages of
our ceph cluster's life.
The latest, 18.04 to 20.04, was painless; we ran:
apt update && apt dist-upgrade -y \
  -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"
do-release-upgrade --allow-third-party -f DistUpgradeViewNonInteractive
On Wed, Oct 26, 2022 at 11:17 AM Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
You should be able to `do-release-upgrade` from bionic/18 to focal/20.
Octopus/15 is shipped for both dists from ceph.
It’s been a while since I did this, but the release upgrader may
disable the ceph repo and uninstall the ceph* packages.
However, the OSDs should still be there; re-enable the ceph repo,
install ceph-osd, and then `ceph-volume lvm activate --all` should
find and start all of the OSDs.
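Roughly, from memory (so the repo file name and the exact way the
upgrader comments it out are assumptions; adjust for your setup), the
post-upgrade recovery would look something like:

  # re-enable the ceph apt repo that the release upgrader commented out
  # (assumes it lives in /etc/apt/sources.list.d/ceph.list)
  sed -i 's/^# *deb/deb/' /etc/apt/sources.list.d/ceph.list
  apt update
  # reinstall the OSD package; the OSD data on the LVM volumes is untouched
  apt install -y ceph-osd
  # rediscover and start all LVM-backed OSDs
  ceph-volume lvm activate --all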
Caveat: if you’re using cephadm, I’m sure the process is different.
Also, if you’re trying to go to jammy/22, that’s a different story,
because ceph isn’t shipping packages for jammy yet for any version of
ceph.
I assume that they are going to ship quincy for jammy at some point,
which will give a stepping stone from focal to jammy with the quincy
release, because I don’t imagine that there will be a reef release
for focal.
Reed
> On Oct 26, 2022, at 9:14 AM, Simon Oosthoek <s.oosthoek@xxxxxxxxxxxxx> wrote:
>
> Dear list,
>
> I'm looking for a guide or pointers on how people upgrade the
> underlying host OS in a ceph cluster (if this is even the right way
> to proceed, I don't know...)
>
> Our cluster is nearing 4.5 years of age and our Ubuntu 18.04 nodes
> are approaching their end-of-support date. We have a mixed cluster of
> u18 and u20 nodes, all running Octopus at the moment.
>
> We would like to upgrade the OS on the nodes without changing the
> ceph version for now (or necessarily at all).
>
> Is it as easy as installing a new OS version, installing the
> ceph-osd package and a correct ceph.conf file, and restoring the
> host key?
>
> Or is more needed regarding the specifics of the OSD
> disks/WAL/journal?
>
> Or is it necessary to drain a node of all data and re-add the
> OSDs as new units? (This would be too much work, so I doubt it ;-)
>
> The problem with searching for information about this is that it
> seems undocumented in the ceph documentation, and search results are
> flooded with ceph version upgrades.
>
> Cheers
>
> /Simon
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx