Re: Ceph monitoring

Hello,

I've just written some bash scripts (and a little Python script) to send data to a Graphite backend through collectd.
They were inspired by https://github.com/rochaporto/collectd-ceph. I rewrote the scripts because of a Python version incompatibility on my platform.
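As a rough illustration of the approach, here is a minimal Python sketch that flattens a nested perf-counter dict (as returned by a "perf dump") into lines in the Graphite plaintext protocol ("<path> <value> <timestamp>"). The sample data and the "ceph.osd0" prefix are hypothetical; the real counter names depend on your Ceph version:

```python
import time

# Hypothetical sample of what a "perf dump" might return
# (counter names are illustrative, not from a real cluster).
SAMPLE_PERF_DUMP = {
    "osd": {"op_r": 1200, "op_w": 340},
    "filestore": {"journal_bytes": 52428800},
}

def flatten(prefix, tree):
    """Flatten nested perf-dump dicts into dotted Graphite metric paths."""
    for key, value in tree.items():
        path = "%s.%s" % (prefix, key) if prefix else key
        if isinstance(value, dict):
            for item in flatten(path, value):
                yield item
        else:
            yield path, value

def graphite_lines(tree, prefix="ceph.osd0", now=None):
    """Render metrics in the Graphite plaintext protocol: '<path> <value> <ts>'."""
    ts = int(now if now is not None else time.time())
    return ["%s %s %d" % (path, value, ts)
            for path, value in flatten(prefix, tree)]

for line in graphite_lines(SAMPLE_PERF_DUMP, now=0):
    print(line)
```

In practice the resulting lines would be written to the Graphite TCP socket (port 2003 by default) or handed to collectd.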

These are not "production-ready" scripts, so feel free to mail me if you would like more information.

Regards

Chris

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Lenz Grimmer
Sent: Wednesday, 1 February 2017 11:23
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Ceph monitoring

Hi,

On 01/30/2017 12:18 PM, Matthew Vernon wrote:

> On 28/01/17 23:43, Marc Roos wrote:
> 
>> Is there a doc that describes all the parameters that are published 
>> by collectd-ceph?
> 
> The best I've found is the Redhat documentation of the performance 
> counters (which are what collectd-ceph is querying):
> 
> https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/paged/administration-guide/chapter-9-performance-counters

First off, which collectd-ceph plugin are we talking about here?

There seem to be several different implementations:

https://github.com/ceph/collectd (full collectd fork, outdated?)
https://github.com/ceph/collectd-4.10.1 (ditto, outdated?)
https://github.com/Crapworks/collectd-ceph (uses "perf dump", but might be outdated, too)
https://github.com/inkscope/collectd-ceph (a fork of a fork of https://github.com/rochaporto/collectd-ceph)

According to the documentation mentioned above, these performance metrics can only be obtained via the "ceph daemon <node> perf schema"
command on the respective node.
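One detail worth noting when consuming these counters: latency counters in a "perf dump" are exposed as {"avgcount": <ops>, "sum": <total>} pairs, and a collector typically divides sum by avgcount to get a mean. A short sketch (the counter names and values below are hypothetical, not taken from a real daemon):

```python
# Hypothetical fragment of a "perf dump": latency counters appear as
# {"avgcount": <number of ops>, "sum": <total seconds>} pairs.
SAMPLE = {
    "osd": {
        "op_r_latency": {"avgcount": 500, "sum": 2.5},
        "op_w_latency": {"avgcount": 0, "sum": 0.0},
    }
}

def mean_latency(counter):
    """Average latency in seconds; 0.0 when no ops were recorded yet."""
    count = counter.get("avgcount", 0)
    return counter["sum"] / count if count else 0.0

read_lat = mean_latency(SAMPLE["osd"]["op_r_latency"])
write_lat = mean_latency(SAMPLE["osd"]["op_w_latency"])
print("op_r_latency: %.4f s" % read_lat)   # 0.0050
print("op_w_latency: %.4f s" % write_lat)  # 0.0000
```

Guarding against avgcount == 0 matters in practice, since freshly started daemons report empty counters.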

The inkscope/collectd-ceph plugin (and its ancestors) seems to be designed to be installed on any node with librados access to the cluster, using the usual commands like "ceph osd dump" or "ceph osd pool stats"
to gather information about the cluster.
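For the librados-style approach, such a plugin essentially parses the JSON output of those commands. A minimal sketch of what that might look like for "ceph osd pool stats" is below; note that the field names and the sample JSON are an assumption on my part and vary between releases, so treat this as illustrative only:

```python
import json

# Hypothetical JSON output of "ceph osd pool stats -f json";
# the exact field names depend on the Ceph release.
SAMPLE_JSON = """
[
  {"pool_name": "rbd", "pool_id": 0,
   "client_io_rate": {"read_bytes_sec": 1048576, "write_bytes_sec": 2097152, "op_per_sec": 150}},
  {"pool_name": "cephfs_data", "pool_id": 1, "client_io_rate": {}}
]
"""

def pool_io_metrics(raw):
    """Map pool name -> client IO rates, defaulting absent counters to 0."""
    metrics = {}
    for pool in json.loads(raw):
        rate = pool.get("client_io_rate", {})
        metrics[pool["pool_name"]] = {
            "read_bytes_sec": rate.get("read_bytes_sec", 0),
            "write_bytes_sec": rate.get("write_bytes_sec", 0),
            "op_per_sec": rate.get("op_per_sec", 0),
        }
    return metrics

print(pool_io_metrics(SAMPLE_JSON))
```

Idle pools report an empty "client_io_rate" object, which is why the defaults matter.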

However, this seems to provide less detail than what could be obtained via the "perf schema" statements utilized by the Crapworks/collectd-ceph plugin and the plugin included in the ceph/collectd fork.

This is a tad messy. IMHO, it would be nice if there were one set of collectd plugins for Ceph that both supported collecting cluster stats via librados from any node, and included a plugin that could be deployed on the Ceph nodes directly to obtain the additional information that can only be queried locally.

Lenz

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



