Re: how to upgrade host os under ceph

Hi Anthony

On 27/10/2022 21:44, Anthony D'Atri wrote:
Another factor is “Do I *really* need to upgrade the OS?”

that's a good question, opinions vary on this I've noticed ;-)


If you have org-wide management/configuration that requires you to upgrade, that’s one thing, but presumably your hosts are not accessible from the outside, so do you have a compelling reason?  The “immutable infrastructure” folks may be on to something.  Upgrades always take a lot of engineer time and are when things tend to go wrong.


Obviously the ceph nodes are not publicly accessible, but we do like to keep the cluster as maintainable as possible by keeping things simple. Having an older, unsupported ubuntu version around is kind of a red flag, even if it might be fine to leave it as-is. And of course there's the problem that we want to keep ceph not too far behind supported releases, and at some point (before the hardware reaches end of life) no new ceph versions will be available for the older, unsupported ubuntu.

Furthermore, waiting until that happens is a recipe for having to reinvent the wheel. I believe we should get practised and comfortable doing this, so it doesn't loom as such a big issue. It's also a useful procedure to have at our fingertips when, e.g., an OS disk fails for some reason.

So that would be my reason to still want to upgrade, even though it may not be urgent...

Cheers

/Simon

On Oct 27, 2022, at 03:16, Simon Oosthoek <s.oosthoek@xxxxxxxxxxxxx> wrote:

Dear list

thanks for the answers, it looks like we have worried about this far too much ;-)

Cheers

/Simon

On 26/10/2022 22:21, shubjero wrote:
We've done 14.04 -> 16.04 -> 18.04 -> 20.04 all at various stages of our ceph cluster life.
The latest one, 18.04 to 20.04, was painless; we ran:

apt update && apt dist-upgrade -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold"
do-release-upgrade --allow-third-party -f DistUpgradeViewNonInteractive
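
One common precaution not shown above (an addition here reflecting standard ceph practice, not part of shubjero's report): set the noout and norebalance flags before the upgrade/reboot so the cluster doesn't start backfilling while the node's OSDs are briefly down, and clear them afterwards:

# run from any admin node before taking the host down
ceph osd set noout
ceph osd set norebalance

# ... upgrade and reboot the node, wait for its OSDs to rejoin ...

ceph osd unset norebalance
ceph osd unset noout
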
On Wed, Oct 26, 2022 at 11:17 AM Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
    You should be able to `do-release-upgrade` from bionic/18 to focal/20.
    Octopus/15 is shipped for both dists from ceph.
    It's been a while since I did this, but the release upgrader might
    disable the ceph repo and uninstall the ceph* packages.
    However, the OSDs should still be there; re-enable the ceph repo,
    install ceph-osd, and then `ceph-volume lvm activate --all` should
    find and start all of the OSDs.
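    For concreteness, a minimal sketch of those steps on a non-cephadm
    focal node (the repo line below is an assumption based on the
    community packages; adjust it if you use a mirror):

        # re-enable the Octopus repo that do-release-upgrade may have disabled
        echo "deb https://download.ceph.com/debian-octopus/ focal main" \
            > /etc/apt/sources.list.d/ceph.list
        apt update

        # reinstall the OSD package the upgrader may have removed
        apt install -y ceph-osd

        # rediscover and start all LVM-backed OSDs on this host
        ceph-volume lvm activate --all
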
    Caveat: if you're using cephadm, I'm sure the process is different.
    And also, if you're trying to go to jammy/22, that's a different
    story, because ceph isn't shipping packages for jammy yet for any
    version of ceph.
    I assume that they are going to ship quincy for jammy at some point,
    which will give a stepping stone from focal to jammy with the quincy
    release, because I don’t imagine that there will be a reef release
    for focal.
    Reed
     > On Oct 26, 2022, at 9:14 AM, Simon Oosthoek <s.oosthoek@xxxxxxxxxxxxx> wrote:
     >
     > Dear list,
     >
     > I'm looking for some guide or pointers to how people upgrade the
    underlying host OS in a ceph cluster (if this is the right way to
    proceed, I don't even know...)
     >
     > Our cluster is nearing 4.5 years of age and our ubuntu 18.04 is
    nearing its end-of-support date. We have a mixed cluster of u18 and
    u20 nodes, all running octopus at the moment.
     >
     > We would like to upgrade the OS on the nodes, without changing
    the ceph version for now (or necessarily at all).
     >
     > Is it as easy as installing a new OS version, installing the
    ceph-osd package and a correct ceph.conf file and restoring the host
    key?
     >
     > Or is more needed regarding the specifics of the OSD
    disks/WAL/journal?
     >
     > Or is it necessary to drain a node of all data and re-add the
    OSDs as new units? (This would be too much work, so I doubt it ;-)
     >
     > The problem with searching for information about this is that it
    seems to be undocumented in the ceph documentation, and search
    results are flooded with ceph version upgrades.
     >
     > Cheers
     >
     > /Simon



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



