Re: ceph on kubernetes

Hello Oğuz,

We have been supporting several rook/ceph clusters on the hyperscalers for years, including Azure.

A few quick notes:

* Be prepared to run into some issues with the default configuration of the OSDs.

* In Azure, network quality is an issue in some regions.

* This year Azure has also introduced a new pricing model for inter-AZ (availability zone) communication.

* The Azure VMs with their respective disk classes will surprise you a bit in terms of backfilling, recovery, etc.
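To illustrate the kind of OSD tuning meant above: Rook merges a ConfigMap named `rook-config-override` into the generated ceph.conf, which can be used to throttle backfill/recovery so client I/O on slower disk classes is not starved. A minimal sketch, with example values only (not recommendations for any specific cluster):

```yaml
# Sketch: Rook's rook-config-override ConfigMap, merged into the
# ceph.conf of the daemons it manages.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph   # adjust to your operator namespace
data:
  config: |
    [osd]
    # Example values; tune for your disk class and workload.
    osd_max_backfills = 1
    osd_recovery_max_active = 1
```

The OSD pods typically need a restart to pick up the override; on recent Ceph releases the same options can also be changed at runtime with `ceph config set osd ...` from the toolbox.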

Best regards, Joachim


___________________________________
Clyso GmbH - Ceph Foundation Member

On 05.10.22 at 13:44, Nico Schottelius wrote:
Hey Oğuz,

the typical recommendations for native Ceph still hold in k8s;
additionally, some things you need to consider:

- Hyperconverged setup or dedicated nodes - what are your workload and
   budget?
- Similar to native ceph, think about where you want to place data, this
   influences the selector inside rook of which devices / nodes to add
- Inside & outside consumption: rook is very good with in-cluster
   configurations, creating PVCs/PVs; however, you can also use rook
   to serve Ceph to clients outside the cluster
- mgr: usually we run 1+2 (standby) on native clusters, with k8s/rook it
   might be good enough to use 1 mgr, as k8s can take care of
   restarting/redeploying
- traffic separation: if that is a concern, you might want to go with
   multus in addition to your standard CNI
- Rook does not assign `resource` specs to OSD pods by default; if you
   hyperconverge, you should be aware of that
- Always have the ceph-toolbox deployed - while you rarely need it, when
   you do, you don't want to think about where to get the pod and how
   to access it
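Several of the points above map directly onto fields of Rook's `CephCluster` CR. A rough sketch with illustrative values - the Multus network attachment names, node name, and resource figures are assumptions, not defaults:

```yaml
# Sketch of a CephCluster fragment illustrating the points above;
# field names follow the Rook CephCluster CRD, values are examples.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mgr:
    count: 1              # one mgr can suffice; k8s restarts/redeploys it
  network:
    provider: multus      # optional traffic separation via Multus
    selectors:
      public: rook-ceph/public-net    # assumed NetworkAttachmentDefinitions
      cluster: rook-ceph/cluster-net
  resources:
    osd:                  # not set by default; matters when hyperconverged
      requests:
        cpu: "2"
        memory: "4Gi"
      limits:
        memory: "8Gi"
  storage:
    useAllNodes: false    # explicit node/device placement
    nodes:
      - name: worker-1    # hypothetical node name
        devices:
          - name: sdb
```

For the toolbox, Rook ships a `toolbox.yaml` deployment in its examples; once applied, `kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status` is all it takes to reach the ceph CLI.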

Otherwise, from our experience rook/ceph is probably the easiest with
regard to updates: easier than native handling and, I suppose (*), easier
than cephadm as well.

Best regards,

Nico

(*) Can only judge from the mailing list comments, we cannot use cephadm
as our hosts are natively running Alpine Linux without systemd.

Oğuz Yarımtepe <oguzyarimtepe@xxxxxxxxx> writes:

Hi,

I am using Ceph on RKE2. The Rook operator is installed on an RKE2 cluster
running on Azure VMs. I would like to learn whether there are best
practices for Ceph on Kubernetes, such as separating Ceph nodes or pools,
or using custom settings for the Kubernetes environment. It would be great
if anyone could share tips.

Regards.

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



