Re: Monitoring Overhead

Hi Ashley,

feel free to use/fork/copy my ceph_watch project
https://github.com/tomkukral/ceph_watch

It wraps the stdout of `ceph -w` and exports this information to Prometheus
node_exporter.
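
Roughly, the idea looks something like the sketch below (this is just an
illustration, not the actual ceph_watch code; the metric names, the line
classification, and the textfile path are assumptions):

    #!/usr/bin/env python3
    """Minimal sketch: tail `ceph -w`, count events per source, and write a
    metrics textfile that node_exporter's textfile collector can pick up.
    NOT the real ceph_watch implementation -- illustration only."""
    import re
    import subprocess
    import time

    # hypothetical textfile-collector directory; adjust to your setup
    PROM_FILE = "/var/lib/node_exporter/textfile/ceph_watch.prom"
    events = {"osd": 0, "mon": 0, "other": 0}

    proc = subprocess.Popen(["ceph", "-w"], stdout=subprocess.PIPE, text=True)
    last_write = 0.0

    for line in proc.stdout:
        # crude classification of where the cluster log line came from
        if re.search(r"\bosd\.", line):
            events["osd"] += 1
        elif re.search(r"\bmon\.", line):
            events["mon"] += 1
        else:
            events["other"] += 1

        # rewrite the textfile at most once per second
        now = time.time()
        if now - last_write >= 1.0:
            with open(PROM_FILE, "w") as f:
                for source, count in events.items():
                    f.write(f'ceph_watch_events_total{{source="{source}"}} {count}\n')
            last_write = now
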

Regards,
Tom


On 10-24 03:10, Ashley Merrick wrote:
Hello,

This may come across as a simple question, but I just wanted to check.

I am looking at importing live data from my cluster (via ceph -s, etc.) into a graphing interface so I can monitor performance, IOPS, and so on over time.
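
The kind of polling loop I have in mind is roughly the following (a rough sketch only; the pgmap JSON keys are from memory and may differ between Ceph versions):

    #!/usr/bin/env python3
    """Rough sketch: poll `ceph -s --format json` and print client IO stats.
    The pgmap field names are assumptions and may vary between releases."""
    import json
    import subprocess
    import time

    while True:
        out = subprocess.check_output(["ceph", "-s", "--format", "json"])
        status = json.loads(out)
        pgmap = status.get("pgmap", {})
        # throughput fields are typically only present when there is client IO
        print(
            "rd B/s:", pgmap.get("read_bytes_sec", 0),
            "wr B/s:", pgmap.get("write_bytes_sec", 0),
            "rd op/s:", pgmap.get("read_op_per_sec", 0),
            "wr op/s:", pgmap.get("write_op_per_sec", 0),
        )
        time.sleep(5)
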

I am looking to pull this data from one or more monitor nodes. When the ceph -s output is retrieved, is this information the monitor already holds locally, or is there overhead applied to the whole cluster every time the command is executed?

The reason I ask is that I want to make sure I am not applying unnecessary overhead and load to all the OSD nodes by retrieving this data at a near-live rate. I fully understand it will put a small amount of load/CPU on the local MON to process the command; I am more interested in the overall cluster.

Thanks,
Ashley


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
