Re: Rook on bare-metal?

Here are the answers to some of the questions. Happy to follow up with more
discussion in the Rook Slack <https://slack.rook.io/>, Discussions
<https://github.com/rook/rook/discussions>, or Issues
<https://github.com/rook/rook/issues>.

Thanks!
Travis

On Thu, Jul 6, 2023 at 4:43 AM Anthony D'Atri <aad@xxxxxxxxxxxxxx> wrote:

> I’m also using Rook on BM.  I had never used K8s before, so that was the
> learning curve, e.g. translating the example YAML files into the Helm
> charts we needed, and the label / taint / toleration dance to fit the
> square peg of pinning services into the round hole of specific nodes.  We’re
> using Kubespray; I gather there are other ways of deploying K8s?
>
> Some things that could improve:
>
> * mgrs are limited to 2, apparently Sage previously said that was all
> anyone should need.  I would like to be able to deploy one for each mon.


Is there a specific need for 3? Or is it more of a habit/expectation?
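
For reference, the mgr count comes from the CephCluster CR (or the
corresponding helm values). A minimal sketch, assuming the default
rook-ceph namespace and cluster name:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mgr:
    # Rook currently caps this at 2; only one mgr is active at a time
    count: 2
    allowMultiplePerNode: false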


> * The efficiency of `destroy`ing OSDs is not exploited, so replacing one
> involves more data shuffling than it otherwise might
>

There is a related design discussion in progress that will address the
replacement of OSDs to avoid the data reshuffling:
https://github.com/rook/rook/pull/12381


> * I’m specifying 3 RGWs but only getting 1 deployed, no idea why
> * Ingress / load balancer service for multiple RGWs seems to be manual
> * Bundled alerts are kind of noisy
>

Curious for more details on these three; if you open issues for them, we can
track them down.
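
For the RGW count specifically, the number of gateway pods comes from the
CephObjectStore CR; a minimal sketch (the store name is just illustrative):

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    # number of RGW pods the operator should create
    instances: 3

If instances is set to 3 and only one pod appears, the operator log is the
first place to look, and would be useful to attach to an issue.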


> * I’m still unsure what Rook does dynamically, and what it only does at
> deployment time (we use ArgoCD).  I.e., if I make changes, what sticks and
> what’s trampled?
>

Changes to settings in the CRDs are intended to be applied by the operator
when you update them. If you see settings that are not applied when changed,
agreed that we should track that and fix it, or at least document it.
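
For example (just a sketch), toggling a setting such as the dashboard in the
CephCluster CR is picked up and reconciled by the operator at runtime:

spec:
  dashboard:
    # changing this and re-applying the CR updates the running cluster
    enabled: true
    ssl: true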


> * How / if one can bake configuration (as in `ceph.conf` entries) into the
> YAML files vs manually running “ceph config”
>

ceph.conf settings can be applied through a configmap. See
https://rook.io/docs/rook/latest/Storage-Configuration/Advanced/ceph-configuration/#custom-csi-cephconf-settings
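
For the Ceph daemons themselves (as opposed to the CSI drivers), that page
also covers the rook-config-override ConfigMap; a minimal sketch, assuming
the default rook-ceph namespace (the settings shown are just examples):

apiVersion: v1
kind: ConfigMap
metadata:
  # Rook looks for this well-known name for ceph.conf overrides
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    osd_pool_default_size = 3
    [osd]
    osd_max_backfills = 2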


> * What the sidecars within the pods are doing, if any of them can be
> disabled
>

Sidecars are needed for some of the pods (the CSI drivers and the mgr) to
provide specific functionality, and they can't be disabled unless the
corresponding feature is disabled. For example, if two mgrs are running, the
mgr sidecar watches for mgr failover so the Kubernetes services can be
updated to point to the active mgr. Search this doc for "sidecar" for more
details on the mgr sidecar:
https://rook.io/docs/rook/latest/CRDs/Cluster/ceph-cluster-crd/#cluster-wide-resources-configuration-settings


> * Requests / limits for various pods, especially when on dedicated nodes.
> Plan to experiment with disabling limits and setting
> `autotune_memory_target_ratio` and `osd_memory_target_autotune`
>

Where you have dedicated nodes, it can certainly be simpler to remove the
Kubernetes resource requests/limits, as long as you set the Ceph memory
targets you mention. Default requests/limits are set by the helm chart, and
they can admittedly be challenging to tune since there are so many moving
parts.
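
For reference, the requests/limits live under spec.resources in the
CephCluster CR (or the equivalent helm values); a sketch with purely
illustrative numbers:

spec:
  resources:
    osd:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        memory: 8Gi
    mgr:
      requests:
        cpu: 500m
        memory: 1Gi

On dedicated nodes, omitting a daemon's entry (or its limits) leaves it
unconstrained by Kubernetes, which is where the Ceph memory targets become
the controlling knob.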


> * Documentation for how to do pod-specific configuration, i.e. setting the
> number of OSDs per node when it isn’t uniform.  A colleague helped me sort
> this out, but I’m enumerating each node - would like to be able to do so
> more concisely, perhaps with a default and overrides.
>

There are multiple ways to deal with OSD creation, depending on the
environment. Curious to follow up on what worked for you, or how this could
be improved in the docs.
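
One pattern for the non-uniform case is a cluster-wide default under
storage.config with per-node (or per-device) overrides; a sketch with
hypothetical node and device names:

spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
      # cluster-wide default
      osdsPerDevice: "1"
    nodes:
      - name: node-a
        devices:
          - name: nvme0n1
            config:
              # override for this one device
              osdsPerDevice: "2"
      - name: node-b
        # regex selecting the data devices on this node
        deviceFilter: "^sd[b-f]"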


>
> > On Jul 6, 2023, at 4:13 AM, Joachim Kraftmayer - ceph ambassador <
> joachim.kraftmayer@xxxxxxxxx> wrote:
> >
> > Hello
> >
> > we have been following rook since 2018 and have had experience with it both
> > on bare metal and at the hyperscalers.
> > Likewise, we have been following cephadm from the beginning.
> >
> > Meanwhile, we have been using both in production for years, and the choice
> > of orchestrator depends on the project, e.g. because the feature sets of the
> > two projects are not identical.
> >
> > Joachim
> >
> > ___________________________________
> > ceph ambassador DACH
> > ceph consultant since 2012
> >
> > Clyso GmbH - Premier Ceph Foundation Member
> >
> > https://www.clyso.com/
> >
> > On 06.07.23 at 07:16, Nico Schottelius wrote:
> >> Morning,
> >>
> >> we are running some ceph clusters with rook on bare metal and can very
> >> much recommend it. You should have proper k8s knowledge, knowing how to
> >> change objects such as configmaps or deployments, in case things go
> >> wrong.
> >>
> >> With regard to stability, the rook operator is written rather defensively:
> >> it does not change monitors or the cluster if quorum is not met, and it
> >> checks the OSD status when removing or adding OSDs.
> >>
> >> So TL;DR: very much usable and rather k8s native.
> >>
> >> BR,
> >>
> >> Nico
> >>
> >> zssas@xxxxxxx writes:
> >>
> >>> Hello!
> >>>
> >>> I am looking to simplify ceph management on bare metal by deploying
> >>> Rook onto kubernetes that has been deployed on bare metal (rke). I
> >>> have used rook in a cloud environment but I have not used it on
> >>> bare metal. I am wondering if anyone here runs rook on bare metal?
> >>> Would you recommend it over cephadm, or would you steer clear of it?
> >>
> >> --
> >> Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



