Re: [EXTERNAL] Re: Ceph Upgrade path

We had a similar challenge getting from Nautilus (ceph-deploy) / Xenial (Ubuntu 16.04) to Pacific (cephadm) / Focal (Ubuntu 20.04), as the Pacific packages were not available for Xenial and Nautilus was not available for Focal. Our method was to upgrade and cephadm-adopt the mons/mgrs/rgws, all before making any changes to the OSD nodes, as follows:


  1.  Rebuild (one at a time) all 5 mon/mgr/rgw hosts as Bionic (Ubuntu 18.04) and re-add them to the cluster as Nautilus with ceph-deploy
  2.  Upgrade the mon/mgr/rgw services from Nautilus to Pacific with the standard apt package-manager method
  3.  Follow the cephadm adoption procedure for the mon/mgr/rgw hosts (a rough command sketch follows this list) - https://docs.ceph.com/en/pacific/cephadm/adoption/
  4.  Rebuild (one at a time) all 5 mon/mgr/rgw hosts as Focal and re-add them to the cluster as Pacific with the orchestrator / cephadm
  5.  Drain and then rebuild all OSD hosts, 1-4 at a time, as Focal and re-add them to the cluster as Pacific
     *   https://docs.ceph.com/en/pacific/rados/operations/bluestore-migration/#migration-process except using the orchestrator / cephadm to build the "$NEWHOST"
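
For step 3, the core adoption commands from the linked docs look roughly like this (just a sketch; the daemon names below are derived from the local hostname, so follow the full procedure in the docs):

    # on each mon/mgr host, with cephadm and a container runtime (podman/docker) installed
    cephadm adopt --style legacy --name mon.$(hostname)
    cephadm adopt --style legacy --name mgr.$(hostname)
    # once an adopted mgr is running, hand daemon management over to the orchestrator
    ceph mgr module enable cephadm
    ceph orch set backend cephadm

For steps 4 and 5, removing a host for rebuild and re-adding it through the orchestrator is roughly the following (again a sketch; <oldhost>, <newhost> and <ip> are placeholders):

    ceph orch host drain <oldhost>            # move daemons/OSDs off before the rebuild
    ceph orch host rm <oldhost>
    ceph cephadm get-pub-key > ceph.pub       # install the cluster SSH key on the rebuilt host
    ssh-copy-id -f -i ceph.pub root@<newhost>
    ceph orch host add <newhost> <ip>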

Thank you,
Josh Beaman

From: Fox, Kevin M <Kevin.Fox@xxxxxxxx>
Date: Wednesday, February 1, 2023 at 11:11 AM
To: Iztok Gregori <iztok.gregori@xxxxxxxxxx>, ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: [EXTERNAL]  Re: Ceph Upgrade path
We successfully did ceph-deploy+octopus+centos7 -> (ceph-deploy unsupported)+octopus+centos8stream (using leap) -> (ceph-deploy unsupported)+pacific+centos8stream  -> cephadm+pacific+centos8stream

Everything was done in place. Leap was tested repeatedly until the procedure and its side effects were very well known.
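
(If "leap" here refers to the leapp / ELevate in-place upgrade tooling, which seems likely but isn't stated, the CentOS 7 -> 8 Stream jump would look roughly like the following; package names and the release RPM URL are from the ELevate docs and worth double-checking:

    # on each node, one at a time
    yum install -y https://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm
    yum install -y leapp-upgrade leapp-data-centos
    leapp preupgrade     # review /var/log/leapp/leapp-report.txt and clear inhibitors
    leapp upgrade
    reboot
)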

We also did s/centos8stream/rocky8/ successfully.

Thanks,
Kevin

________________________________________
From: Iztok Gregori <iztok.gregori@xxxxxxxxxx>
Sent: Wednesday, February 1, 2023 3:51 AM
To: ceph-users@xxxxxxx
Subject:  Ceph Upgrade path

Hi to all!

We are running a Ceph cluster (Octopus) on (99%) CentOS 7 (deployed at
the time with ceph-deploy) and we would like to upgrade it. As far as I
know, there are no Pacific (or later) packages for CentOS 7 (at least
not on download.ceph.com), so we need to upgrade (change) not only Ceph
but also the distribution.

What is the recommended path to do so?

We could upgrade (reinstall) all the nodes to Rocky 8 and then upgrade
Ceph to Quincy, but then we would be "stuck" with "not the latest"
distribution and would probably have to upgrade (reinstall) again in
the near future.

Our second idea is to leverage cephadm (which we would like to
implement) and switch from RPMs to containers, but I don't have a clear
picture of how to do it. I was thinking of the following:

1. install a new monitor/manager with Rocky 9.
2. prepare the node for cephadm.
3. start the manager/monitor containers on that node.
4. repeat for the other monitors.
5. repeat for the OSD servers.

I'm not sure how to execute points 2 and 3. The documentation describes
how to bootstrap a NEW cluster and how to ADOPT an existing one, but
our situation is a hybrid (or at least in my mind it is...).
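
My rough guess from the cephadm docs is that points 2 and 3 would be
something like the following (host names/IPs are placeholders, and I
don't know how much of this works while the rest of the cluster is
still package-based and not yet managed by cephadm):

    # on the new Rocky 9 node: install podman (or docker) and cephadm
    # then, from an admin node: push the cluster SSH key and add the host
    ceph cephadm get-pub-key > ceph.pub
    ssh-copy-id -f -i ceph.pub root@<new-node>
    ceph orch host add <new-node> <ip>
    ceph orch daemon add mon <new-node>:<ip>

but the "ceph orch" commands presumably only work once the cluster
already has a cephadm-managed mgr, which is exactly the part I'm
missing.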

I also cannot adopt my current cluster with cephadm because 30% of our
OSDs are still on Filestore. My intention was to drain them, reinstall
them and then adopt them, but I would like to avoid multiple
reinstallations if not necessary. (In my mind all the OSD servers would
be drained before being reinstalled, just to be sure to have a "fresh"
start.)
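
(For the draining itself I was thinking of the usual per-OSD sequence,
roughly:

    ceph osd out <id>
    # wait until the data has migrated off, e.g.
    while ! ceph osd safe-to-destroy osd.<id>; do sleep 60; done
    systemctl stop ceph-osd@<id>
    ceph osd purge <id> --yes-i-really-mean-it

batched per host before the host gets reinstalled; <id> is a
placeholder.)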

Do you have any ideas and/or advice for us?


Thanks a lot!
Iztok

P.S. I saw that the cephadm script doesn't support Rocky; I can modify
it to do so and it should work, but is there a plan to support it
officially?



--
Iztok Gregori
ICT Systems and Services
Elettra - Sincrotrone Trieste S.C.p.A.
Telephone: +39 040 3758948
http://www.elettra.eu/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



