Re: Rook on bare-metal?

I’m also using Rook on bare metal.  I had never used K8s before, so that was the learning curve, e.g. translating the example YAML files into the Helm charts we needed, and the label / taint / toleration dance needed to pin services to specific nodes.  We’re using Kubespray; I gather there are other ways of deploying K8s?
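
For the taint / toleration part, the placement stanza ends up looking something like the sketch below (field names follow the CephCluster CRD as exposed through the rook-ceph-cluster chart values; the `storage-node` label / taint is a placeholder for whatever you actually use):

    cephClusterSpec:
      placement:
        all:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
                - matchExpressions:
                    - key: storage-node        # example label on the Ceph nodes
                      operator: In
                      values: ["true"]
          tolerations:
            - key: storage-node                # example taint on those same nodes
              operator: Exists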

Some things that could improve:

* mgrs are limited to 2; apparently Sage once said that was all anyone should need.  I would like to be able to deploy one per mon.
* The efficiency of `destroy`ing OSDs is not exploited, so replacing one involves more data shuffling than it otherwise might
* I’m specifying 3 RGWs but only getting 1 deployed; no idea why (see the CephObjectStore sketch after this list)
* Ingress / load balancer service for multiple RGWs seems to be manual
* Bundled alerts are kind of noisy
* I’m still unsure what Rook does dynamically, and what it only does at deployment time (we use ArgoCD).  I.e., if I make changes, what sticks and what’s trampled?
* Whether / how one can bake configuration (i.e. `ceph.conf` entries) into the YAML files vs. manually running `ceph config` (see the ConfigMap sketch after this list)
* What the sidecars within the pods are doing, and whether any of them can be disabled
* Requests / limits for the various pods, especially when they run on dedicated nodes.  I plan to experiment with disabling limits and setting `autotune_memory_target_ratio` and `osd_memory_target_autotune` (see the resources sketch after this list)
* Documentation for how to do pod- and node-specific configuration, e.g. setting the number of OSDs per node when it isn’t uniform.  A colleague helped me sort this out, but I’m enumerating each node; I’d like to be able to do so more concisely, perhaps with a default plus overrides (see the storage sketch after this list).
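
On the RGW count, for anyone comparing notes: the replica count lives under `gateway.instances` in the CephObjectStore spec, roughly like this (a sketch; `my-store` and the port are placeholders):

    apiVersion: ceph.rook.io/v1
    kind: CephObjectStore
    metadata:
      name: my-store                 # placeholder name
      namespace: rook-ceph
    spec:
      gateway:
        port: 80
        instances: 3                 # asking for 3 here, but only 1 RGW pod shows up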
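
On baking in `ceph.conf` entries: the mechanism I’m aware of is the `rook-config-override` ConfigMap, whose `config` key holds ceph.conf-style overrides for the daemons, but I’d like that spelled out next to the Helm values.  A minimal sketch (the `[global]` entry is just an example):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-config-override
      namespace: rook-ceph           # adjust to your cluster namespace
    data:
      config: |
        [global]
        osd_pool_default_size = 3    # example entry only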
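
On requests / limits, the experiment I have in mind is roughly this shape in the CephCluster `resources` section, i.e. keep requests but drop the limits on the dedicated nodes while memory autotuning is on (the numbers are placeholders):

    cephClusterSpec:
      resources:
        osd:
          requests:
            cpu: "2"                 # placeholder values
            memory: "8Gi"
          # no limits while experimenting with osd_memory_target_autotune
        mgr:
          requests:
            cpu: "500m"
            memory: "1Gi"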
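
And on per-node OSD configuration, this is the enumerated shape I landed on; what I’d like is for the top-level `config` default to cover most nodes so only the exceptions need their own entry (node and device names are placeholders):

    cephClusterSpec:
      storage:
        useAllNodes: false
        useAllDevices: false
        config:
          osdsPerDevice: "1"         # would-be default
        nodes:
          - name: "node-a"           # placeholder node names
            devices:
              - name: "nvme0n1"
              - name: "nvme1n1"
          - name: "node-b"
            deviceFilter: "^nvme[0-9]+n1$"
            config:
              osdsPerDevice: "2"     # per-node override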

> On Jul 6, 2023, at 4:13 AM, Joachim Kraftmayer - ceph ambassador <joachim.kraftmayer@xxxxxxxxx> wrote:
> 
> Hello
> 
> we have been following rook since 2018 and have gathered experience with it both on bare metal and at the hyperscalers.
> In the same way, we have been following cephadm from the beginning.
> 
> Meanwhile, we have been using both in production for years, and the decision of which orchestrator to use varies from project to project; e.g., the feature sets of the two projects are not identical.
> 
> Joachim
> 
> ___________________________________
> ceph ambassador DACH
> ceph consultant since 2012
> 
> Clyso GmbH - Premier Ceph Foundation Member
> 
> https://www.clyso.com/
> 
> On 06.07.23 at 07:16, Nico Schottelius wrote:
>> Morning,
>> 
>> we are running some ceph clusters with rook on bare metal and can very
>> much recommend it. You should have proper k8s knowledge, knowing how to
>> change objects such as configmaps or deployments, in case things go
>> wrong.
>> 
>> Regarding stability, the rook operator is written rather defensively:
>> it does not change monitors or the cluster if quorum is not met, and it
>> checks the OSD status when removing or adding OSDs.
>> 
>> So TL;DR: very much usable and rather k8s native.
>> 
>> BR,
>> 
>> Nico
>> 
>> zssas@xxxxxxx writes:
>> 
>>> Hello!
>>> 
>>> I am looking to simplify ceph management on bare metal by deploying
>>> Rook onto kubernetes that has been deployed on bare metal (rke). I
>>> have used rook in a cloud environment, but I have not used it on
>>> bare metal. I am wondering if anyone here runs rook on bare metal?
>>> Would you recommend it over cephadm, or would you steer clear of it?
>> 
>> --
>> Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



