Hi,

On 01/30/2017 12:18 PM, Matthew Vernon wrote:
> On 28/01/17 23:43, Marc Roos wrote:
>
>> Is there a doc that describes all the parameters that are published
>> by collectd-ceph?
>
> The best I've found is the Redhat documentation of the performance
> counters (which are what collectd-ceph is querying):
>
> https://access.redhat.com/documentation/en/red-hat-ceph-storage/1.3/paged/administration-guide/chapter-9-performance-counters

First off, which collectd-ceph plugin are we talking about here? There
seem to be several different implementations:

https://github.com/ceph/collectd (full collectd fork, outdated?)
https://github.com/ceph/collectd-4.10.1 (ditto, outdated?)
https://github.com/Crapworks/collectd-ceph (uses "perf dump", but might
be outdated, too)
https://github.com/inkscope/collectd-ceph (which is a fork of a fork of
https://github.com/rochaporto/collectd-ceph)

According to the documentation mentioned above, these performance
metrics can only be obtained via the "ceph daemon <node> perf schema"
command on the respective node.

The inkscope/collectd-ceph plugin (and its ancestors) seems to be
designed to be installed on any node with librados access to the
cluster and uses the usual commands like "ceph osd dump" or "ceph osd
pool stats" to gather information about the cluster. However, this
seems to provide fewer details than what can be obtained via the "perf
schema" queries used by the Crapworks/collectd-ceph plugin and the
plugin included in the ceph/collectd fork.

This is a tad messy. IMHO, it would be nice if there were one set of
collectd plugins for Ceph: one plugin that collects cluster-wide stats
via librados from any node, and another that can be deployed on the
Ceph nodes directly to obtain the additional information that can only
be queried locally.

Lenz
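
P.S.: For anyone curious what the "perf dump" based plugins see per
daemon, here is a rough Python sketch of the idea. It simply shells out
to the CLI mentioned above instead of talking to the admin socket
directly, and "osd.0" is only a placeholder for whatever daemon happens
to run on the local node:

  #!/usr/bin/env python
  # Sketch: fetch a local daemon's performance counters by calling
  # "ceph daemon <name> perf dump" and parsing the JSON it returns.
  import json
  import subprocess

  def perf_dump(daemon="osd.0"):  # placeholder daemon name
      out = subprocess.check_output(
          ["ceph", "daemon", daemon, "perf", "dump"])
      return json.loads(out.decode("utf-8"))

  if __name__ == "__main__":
      stats = perf_dump()
      # Print how many counters each subsystem exposes, to get an
      # impression of how much detail is only available locally.
      for section, counters in sorted(stats.items()):
          print("%s: %d counters" % (section, len(counters)))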