Re: Upgrading From RHCS v4 to OSS Ceph

On 13-11-2023 22:35, jarulsam@xxxxxxxx wrote:
> Hi everyone,
>
> I have a storage cluster running RHCS v4 (old, I know) and am looking to
> upgrade it soon. I would also like to migrate from RHCS to the open source
> version of Ceph at some point, as our support contract with Red Hat for
> Ceph is likely not going to be renewed.
>
> I was wondering if anyone has any advice on how to upgrade our cluster
> with minimal production impact. I have the following server configuration:
We just finished doing pretty much this, albeit starting on RHEL 8. I don't think we had downtime at any point.


>    + 3x monitors
>    + 3x metadata servers
>    + 2x RadosGWs with 2x servers running HAProxy and keepalived for HA
>      RadosGWs.
>    + 19x OSDs - 110TB HDD and 1TB NVMe each. (Total ~2.1PB raw)
>
> Currently, I have RHCS v4 installed bare metal on RHEL 7. I see that newer
> versions of Ceph require containerized deployments, so I am thinking it is
> best to first migrate to a containerized installation and then upgrade
> everything else.
>
> My first inclination is to do the upgrade like this:
>
>    1. Move the existing installation to containerized, maintaining all the
>       same versions and OS installations.
>
>    2. Pull one monitor, fresh reinstall RHEL 9, reinstall RHCS v4, re-add
>       to the cluster. Repeat for all the monitors.
>
>    3. Pull one MDS, do the same as step 2 but for MDS.
>
>    4. Pull one RadosGW, do the same as step 2 but for RadosGW.
>
>    5. Pull one OSD, rebalance, fresh reinstall RHEL 9, reinstall RHCS v4,
>       re-add to the cluster, rebalance. Repeat for all OSDs.
>
>    6. Upgrade RHCS to OSS Ceph Octopus -> Pacific -> Quincy -> Reef.
>
> Does such a plan seem reasonable? Are there any major pitfalls of an
> approach like this? Ideally, I would just build an entirely new cluster on
> Ceph Reef; however, there are obvious budgetary issues with such a plan.
>
> My biggest concerns are with moving to a containerized installation and
> then migrating from RHCS to OSS Ceph.

We engaged Red Hat support for the upgrade path, and these are the top-level steps we used:

"
1. Upgrade the OS to 8.4 # Min OS version required is 8.2 EUS/8.4 [A]
   (Steps 1 and 2 can be interchanged.)
2. Switch from bare metal to containers with the
   switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook.
3. Upgrade RHCS 4 to RHCS 5 with the rolling upgrade playbook using
   ceph-ansible.
4. Once upgraded, hand control over from ceph-ansible to cephadm (the new
   tool going forward for RHCS 5) with the help of the cephadm-adopt
   playbook.
"

Then:

Reinstalled all hosts with RHEL 9 one at a time, re-balancing in between and moving non-OSD services around with service specs (we got rid of our dedicated MON/MDS servers along the way).
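As a rough sketch, one such host cycle with cephadm looked something like
the below; the hostname, address, and label are placeholders:

# move all daemons off the host and stop scheduling new ones there
$ ceph orch host drain host01
# when it is empty and the cluster is healthy again, remove the host
$ ceph orch host rm host01
# after the RHEL 9 reinstall: push the cluster ssh key and re-add it
$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@host01
$ ceph orch host add host01 10.0.0.11 --labels mon
# label-based service specs then reschedule daemons onto the host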

Finally, going upstream from RHCS 5 (17.2.6):

"
$ ceph config dump | grep registry
global   basic     container_image                    registry.redhat.io/rhceph/rhceph-5-rhel8@sha256:a193b0de114d19d2efd8750046b5d25da07e2c570e3c4eb4bd93e6de4b90a25a  *
mgr      advanced  mgr/cephadm/container_image_base   registry.redhat.io/rhceph/rhceph-5-rhel8:latest
$ ceph config rm global container_image
$ ceph config rm mgr mgr/cephadm/container_image_base
$ ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
"

Best regards,

Torkil

> Any advice or feedback is much appreciated.
>
> Best,
>
> Josh

--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



