Re: Opensource plugin for pulling out cluster recovery and client IO metric


 



----------------------------------------
> Date: Fri, 28 Aug 2015 12:07:39 +0100
> From: gfarnum@xxxxxxxxxx
> To: vickey.singh22693@xxxxxxxxx
> CC: ceph-users@xxxxxxxxxxxxxx; ceph-users@xxxxxxxx; ceph-devel@xxxxxxxxxxxxxxx
> Subject: Re:  Opensource plugin for pulling out cluster recovery and client IO metric
>
> On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh
> <vickey.singh22693@xxxxxxxxx> wrote:
>> Hello Ceph Geeks
>>
>> I am planning to develop a python plugin that pulls out cluster recovery IO
>> and client IO operation metrics, which can then be used with collectd.
>>
>> For example, I need to extract these values:
>>
>> recovery io 814 MB/s, 101 objects/s
>> client io 85475 kB/s rd, 1430 kB/s wr, 32 op/s
The calculation *window* for those stats is very small; IIRC it is two PG versions, which most likely maps to two seconds (an average over the last two seconds). You may be able to increase mon_stat_smooth_intervals to enlarge the window, but I haven't tried it myself.
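If you want to try that, it would be something like this in ceph.conf on the monitors (untested; I believe the default is 2, but check your release):

[mon]
mon stat smooth intervals = 6

or injected at runtime with something like: ceph tell mon.* injectargs '--mon-stat-smooth-intervals 6'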

I found that 'ceph status -f json' has better-formatted output and more information.
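For example, a rough Python sketch of pulling the rates out of that json (untested; the pgmap key names below are from the releases I've looked at, they vary between versions, and they only appear while there is matching activity):

import json
import subprocess

def cluster_io_rates():
    # 'ceph status -f json' carries the same rates that 'ceph -s'
    # prints, structured under the "pgmap" key.
    out = subprocess.check_output(['ceph', 'status', '-f', 'json'])
    pgmap = json.loads(out)['pgmap']
    # The rate keys are omitted entirely when the cluster is idle,
    # so default each of them to 0.
    return {
        'client_read_bytes_sec': pgmap.get('read_bytes_sec', 0),
        'client_write_bytes_sec': pgmap.get('write_bytes_sec', 0),
        'client_op_per_sec': pgmap.get('op_per_sec', 0),
        'recovery_bytes_sec': pgmap.get('recovering_bytes_per_sec', 0),
        'recovery_objects_sec': pgmap.get('recovering_objects_per_sec', 0),
    }

print(cluster_io_rates())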
>>
>>
>> Could you please help me understand how the ceph -s and ceph -w outputs
>> print cluster recovery IO and client IO information.
>> Where is this information coming from? Is it coming from perf dump? If yes,
>> which section of the perf dump output should I focus on? If not, how can
>> I get these values?
>>
>> I tried ceph --admin-daemon /var/run/ceph/ceph-osd.48.asok perf dump, but
>> it generates a huge amount of output and I am confused about which section
>> I should use.
Perf counters have a ton of information, and it takes time to understand the details. But if the purpose is just to dump them as they are and do better aggregation/reporting, you can check 'perf schema' first to get the type of each field, then cross-check the perf counter definition for each type to determine how to collect and aggregate that data.
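As a sketch of what I mean (the type bit values below are from src/common/perf_counters.h as I remember them: 0x1 time, 0x2 u64, 0x4 long-running average dumped as {"avgcount": N, "sum": S}, 0x8 monotonic counter; verify against your release):

import json
import subprocess

# Assumed bit values, see the caveat above.
PERFCOUNTER_LONGRUNAVG = 0x4
PERFCOUNTER_COUNTER = 0x8

def classify_counters(asok):
    # 'perf schema' returns {"section": {"field": {"type": N, ...}}}.
    cmd = ['ceph', '--admin-daemon', asok, 'perf', 'schema']
    schema = json.loads(subprocess.check_output(cmd))
    for section, fields in schema.items():
        for name, meta in fields.items():
            t = meta['type']
            if t & PERFCOUNTER_LONGRUNAVG:
                kind = 'average: divide delta of sum by delta of avgcount'
            elif t & PERFCOUNTER_COUNTER:
                kind = 'monotonic counter: submit as a rate (DERIVE)'
            else:
                kind = 'gauge: submit as-is'
            print('%s.%s -> %s' % (section, name, kind))

classify_counters('/var/run/ceph/ceph-osd.48.asok')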
>
> This information is generated only on the monitors based on pg stats
> from the OSDs, is slightly laggy, and can be most easily accessed by
> calling "ceph -s" on a regular basis. You can get it with json output
> that is easier to parse, and you can optionally set up an API server
> for more programmatic access. I'm not sure about the details of that
> last part, though.
> -Greg
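If you do end up polling 'ceph status -f json' from a collectd python plugin as Greg suggests, the skeleton is roughly this (untested sketch; the plugin name 'ceph_cluster' is a placeholder, and the collectd module is only importable when the script runs inside collectd's python plugin):

import json
import subprocess

import collectd  # provided by collectd's python plugin at runtime

def read_callback():
    out = subprocess.check_output(['ceph', 'status', '-f', 'json'])
    pgmap = json.loads(out)['pgmap']
    vals = collectd.Values(plugin='ceph_cluster', type='gauge')
    # Rate keys disappear from pgmap when idle, hence the defaults.
    for key in ('read_bytes_sec', 'write_bytes_sec', 'op_per_sec',
                'recovering_bytes_per_sec', 'recovering_objects_per_sec'):
        vals.dispatch(type_instance=key, values=[pgmap.get(key, 0)])

# collectd drives the polling interval; no loop is needed here.
collectd.register_read(read_callback)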
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



