Re: Requesting recommendations for Ceph multi-cluster management

Hi,

This was roughly what we were planning with the Terraform Ceph
provider, but we lost momentum on that activity and it's quite
incomplete: https://github.com/cernops/terraform-provider-ceph
I don't know of another multi-cluster manager, but I would be
interested to watch this thread.

Cheers, Dan

On Wed, Nov 30, 2022 at 2:21 PM Thomas Eckert <thomas.eckert1@xxxxxxxx> wrote:
>
> Hi folks,
>
>
> I've posted pretty much the same question to ceph-users@xxxxxxx on November 23rd [2] but got no response there. Seeing how dev@xxxxxxx was (literally) mentioned in your corresponding survey [1], I figured I'd try dev@ next.
>
>
> I'm looking for guidance/recommendations on how to approach the topic below. As I'm
> fairly new to Ceph as a whole, I might be using terms incorrectly, looking for the
> wrong solutions, or simply missing some obvious puzzle pieces. Please do not assume
> advanced Ceph knowledge on my part (-:
>
> We are looking at building multiple Ceph clusters, each possibly consisting of
> several different pools / different configurations (repair priority, etc.). This is
> not about multi-site clusters but about multiple individual clusters which have no
> direct knowledge of each other whatsoever.
>
> Although we are still in an investigation/research phase, we are already realizing
> that we will need (and already do need) a system to maintain our clusters'
> high-level information, such as which nodes are (currently) associated with which
> cluster, and general meta-information for each cluster like stage (live/qa/dev),
> name/ID, Ceph version, etc.
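>
> To make this concrete, here is a minimal sketch (in Python, purely
> illustrative; the field names are our assumptions, not an existing schema)
> of the kind of record we would want to keep per cluster:
>
>     from dataclasses import dataclass, field
>
>     @dataclass
>     class ClusterRecord:
>         # Hypothetical inventory record; fields are illustrative only.
>         cluster_id: str     # short name or UUID
>         stage: str          # "live" / "qa" / "dev"
>         ceph_version: str   # e.g. "17.2.5"
>         # hostnames currently associated with this cluster
>         nodes: list[str] = field(default_factory=list)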
>
> Seeing how we will want to connect to this service from multiple other systems
> (Puppet/Ansible/etc.), we are looking for a service with a sensible API.
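>
> As a sketch of what we picture (the endpoint URL and response shape are
> purely hypothetical; no such service exists on our side yet), automation
> tooling would fetch cluster facts roughly like this:
>
>     import requests
>
>     # Hypothetical inventory API; URL and JSON fields are assumptions.
>     resp = requests.get(
>         "https://inventory.example.org/api/v1/clusters/ceph-qa-1")
>     resp.raise_for_status()
>     cluster = resp.json()
>     print(cluster["ceph_version"], cluster["nodes"])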
>
> As with any such undertaking, there are plenty of additional requirements we have
> in mind, such as: full encryption (in transit and at rest), an exchangeable storage
> layer (not hardwired to one database), versioned data storage (so we can query "the
> past" and not just the current state; see the sketch after this paragraph), and a
> possibly fine-grained access permission system. The entire list is quite lengthy,
> and it probably won't help to list each and every item here. Suffice it to say we
> are looking for a "holistic multi-cluster management" solution.
>
> One important note is that we need to be able to run it ourselves; "as a service"
> offerings are not an option for us. I suppose we are looking for an OSS project,
> though it might also be several ones pieced together.
>
> One particularly noteworthy find while searching for Ceph multi-cluster management
> was [1]. Unfortunately, I could not find any follow-up or derived work from that
> survey and, at the time of writing, it is the only article on ceph.io labeled
> "multi-cluster".
>
> Any recommendations or pointers where to look would be appreciated!
>
> Regards,
>   Thomas
>
> [1] https://ceph.io/en/news/blog/2022/multi-cluster-mgmt-survey/
> [2] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/IB25BO55LUX5ETB2BGDN3CFOKHFWJN66/
>
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


