Re: Managing Multiple Ceph Clusters


 



Some general feedback here on the list, as the form does not really
allow for free-form feedback:

It feels as though the "multi-cluster manager" is heading down the same
path as ceph-ansible, ceph-deploy and cephadm: reinventing the wheel,
but not as well as the wheel has already been built.

Instead of yet another tool, what we would really appreciate is better
integration with existing stacks:

- Provide howtos / easy-to-use tooling for Prometheus integration
- Define standard alerts that "every cluster should have"
  (notifications != alerts)
- Staying with the Prometheus example:
  - Best practices for setting labels, scrape targets and scrape intervals
  - An overview of ceph-native, cephadm and Rook deployments
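
As a concrete illustration of the kind of "standard alert" meant above,
here is a minimal sketch of a Prometheus alerting rule built on the
ceph_health_status metric exposed by the mgr prometheus module
(0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR). The `cluster` label is
an assumption: it would typically be attached via external_labels or
relabelling when scraping multiple clusters.

```yaml
# Hypothetical "every cluster should have this" rule, assuming the
# mgr prometheus module is enabled and a `cluster` label is set
# per scrape target (e.g. via external_labels).
groups:
  - name: ceph-standard
    rules:
      - alert: CephHealthError
        # ceph_health_status: 0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR
        expr: ceph_health_status == 2
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Ceph cluster {{ $labels.cluster }} is in HEALTH_ERR"
```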

Ceph is a great software suite, but it seems to be repeating the mistake
of trying to reinvent, supposedly better, what industry-standard
solutions already provide.

The only area where I see that a new "multi-cluster management tool"
could really help is in aiding mirroring/replication across clusters.
Even there, for seeing the status of the replication, I would rather
look to something like Prometheus.

Utilising multi-tier clusters (c1: HDD, c2: SSD, c3: NVMe) for smart
storage distribution would be another very interesting topic to tackle.
Likewise, standard scenarios such as "I have active sites A and B and
want the data synchronised to site C" are real-world problems that would
benefit from an MCM.

Just my 5 Rappen; looking forward to seeing what comes out of the
questionnaire.

Best regards,

Nico


Paul Cuzner <pcuzner@xxxxxxxxxx> writes:

> Hi,
>
> A few of the devs have been thinking about how we could make managing
> multiple ceph clusters easier. At this point we're trying to
> understand the requirements and problems that a multi-cluster feature
> needs to fix, and need your help!
>
> We've put together a short, 13 question survey;
> https://forms.gle/E9cAx4f51Hq2FHQXA
>
> Even if you don't currently run multiple clusters, we still value your opinion!
>
> And if you do have multiple-clusters...we'd really appreciate your insights!
>
> Cheers,
>
> Paul C


--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




