Re: Feeding pool utilization data to time series for trending

On Tue, Dec 20, 2016 at 4:19 AM, Shubhendu Tripathi <shtripat@xxxxxxxxxx> wrote:
> Hi Team,
>
> Our team is currently working on a project named "tendrl" [1][2].
> Tendrl is a management platform for software-defined storage systems like
> Ceph, Gluster, etc.
>
> As part of tendrl we are integrating with collectd to collect performance
> data and we maintain the time series data in graphite.
>
> I have a question at this juncture regarding pool utilization data.
> Our current thinking is to use the output of the command "ceph df",
> parse it to figure out pool utilization data, and push it to graphite
> using collectd.

From Kraken onwards it's simpler to write a ceph-mgr module that sends
the data straight to your time series store -- mgr plugins have access
to in-memory copies of this stuff without having to do any polling.
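
As a very rough sketch (the graphite endpoint and the exact field names
in the df dump are illustrative, and a real module would also implement
shutdown()), such a module could look something like:

    # Minimal sketch of a ceph-mgr module pushing pool utilization to Graphite.
    # The carbon host/port are placeholders; get("df") returns the mgr's
    # in-memory copy of the df stats, so there is no mon round-trip.
    import socket
    import time

    from mgr_module import MgrModule


    class Module(MgrModule):
        def serve(self):
            while True:  # a real module would also handle shutdown()
                df = self.get("df")
                now = int(time.time())
                lines = []
                for pool in df.get("pools", []):
                    lines.append("ceph.pool.%s.bytes_used %d %d\n"
                                 % (pool["name"],
                                    pool["stats"]["bytes_used"], now))
                sock = socket.create_connection(("graphite.example.com", 2003))
                sock.sendall("".join(lines).encode("utf-8"))
                sock.close()
                time.sleep(5)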

If you need to be backwards compatible with Jewel, you can do what the
existing stats collector does:
https://github.com/ceph/Diamond/blob/calamari/src/collectors/ceph/ceph.py

Note that the existing collector sends commands to the mons using
librados: no need to literally wrap the command line.
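
For example, the equivalent of "ceph df" through the Python rados
bindings is roughly the following (the conffile path and the pool stats
field names are assumptions about your setup and the JSON output):

    # Rough sketch: issue the equivalent of "ceph df" via librados instead of
    # shelling out to the CLI.  Needs python-rados plus a readable ceph.conf
    # and keyring on the node running the collector.
    import json

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "df", "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret != 0:
            raise RuntimeError("df failed: %s" % outs)
        df = json.loads(outbuf)
        for pool in df["pools"]:
            print(pool["name"], pool["stats"]["bytes_used"])
    finally:
        cluster.shutdown()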

> The question here is what the performance impact of running the "ceph df"
> command on the ceph nodes would be. I feel we should be running this
> command only on mon nodes.

The Ceph command line connects to mons over the network -- you can run
it from wherever you like.  However, you only actually need to run it
from one place: it's redundant to collect the same data from multiple
nodes.  The existing stats collector runs on all mons, but decides
whether to collect the cluster-wide data (such as free space) based on
whether its local mon is the leader or not (see
_collect_cluster_stats).
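
If you do go the per-mon collector route, the leadership check can be as
simple as comparing the quorum leader's name to the local mon's name --
a sketch, assuming the mon is named after the short hostname (it often
is, but that isn't guaranteed):

    # Sketch: only emit cluster-wide stats from the node whose mon is the
    # current quorum leader.  Assumes mons are named after the short hostname.
    import json
    import socket


    def local_mon_is_leader(cluster):
        # cluster is a connected rados.Rados handle, as in the snippet above
        cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
        ret, outbuf, _ = cluster.mon_command(cmd, b"")
        if ret != 0:
            return False
        status = json.loads(outbuf)
        local = socket.gethostname().split(".")[0]
        return status.get("quorum_leader_name") == local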

This problem goes away with ceph-mgr because it takes care of
instantiating your plugin in just one place.

> Wanted to verify with the team here whether this thought process is in the
> right direction and, if so, what the ideal frequency of running "ceph df"
> from collectd should be.

No more frequently than the data is collected internally from OSDs
(osd_mon_report_interval_min, which is 5 seconds by default).
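
In a collectd Python plugin that just means registering the read callback
with a matching interval; a sketch (the plugin/type names are
placeholders, and a real plugin would open the rados connection once in
an init callback rather than per read):

    # Sketch of a collectd Python plugin read callback polling pool stats no
    # faster than the mons refresh them.  Plugin/type names are placeholders.
    import json

    import collectd
    import rados


    def read_pool_stats():
        # A real plugin would keep this connection open via register_init.
        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
        cluster.connect()
        try:
            ret, outbuf, _ = cluster.mon_command(
                json.dumps({"prefix": "df", "format": "json"}), b"")
            if ret != 0:
                return
            for pool in json.loads(outbuf)["pools"]:
                val = collectd.Values(plugin="ceph_pools", type="bytes",
                                      type_instance=pool["name"])
                val.dispatch(values=[pool["stats"]["bytes_used"]])
        finally:
            cluster.shutdown()


    # Match osd_mon_report_interval_min: no point polling faster than 5s.
    collectd.register_read(read_pool_stats, 5)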

John

> This is just our point of view, and we are open to any other foolproof
> solution (if any).
>
> Kindly guide us.
>
> Regards,
> Shubhendu Tripathi
>
> [1] http://tendrl.org/
> [2] https://github.com/tendrl/