Hello Paul,
thanks for your response and the hints.
I found the following tool in the Ceph source repository:
https://github.com/ceph/ceph/blob/master/src/tools/histogram_dump.py
It provides output based on the statistics you mentioned:
# ceph daemon osd.24 perf histogram dump | grep -P "op_.*_latency"
    "op_r_latency_out_bytes_histogram": {
    "op_w_latency_in_bytes_histogram": {
    "op_rw_latency_in_bytes_histogram": {
    "op_rw_latency_out_bytes_histogram": {
cd /tmp
wget https://raw.githubusercontent.com/ceph/ceph/master/src/tools/histogram_dump.py
chmod +x histogram_dump.py
Request size (bytes): columns from 0 up to 274G in log2 steps
Latency (usec): rows from 0 up to 53T+ in log2 steps
[2D table of counts trimmed here - it was badly wrapped by my mail client.
Almost all cells are zero, with a handful of ops falling into the 1M-3M,
12M-25M, 25M-51M and 102M-204M usec latency rows.]
This script is probably a good basis for writing my own specialized tool,
and it might be a good option for detailed analysis.
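If I do write my own tool, the rough idea would be to collapse the 2D
histogram into a plain latency distribution by summing over the
request-size axis. A minimal sketch, assuming the counter lives under an
"osd" section and exposes its counts as a 2D list in a "values" field
(the layout histogram_dump.py appears to consume; please correct me if
the key names differ on other releases):

#!/usr/bin/env python3
# Sketch: reduce the 2D latency-vs-request-size histogram of one counter
# to a 1D latency distribution by summing counts across the request-size
# axis. Assumes "values" is a 2D list with one row per latency bucket.
import json
import subprocess

def latency_distribution(osd_id, counter="op_w_latency_in_bytes_histogram"):
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "perf", "histogram", "dump"])
    hist = json.loads(out)["osd"][counter]
    return [sum(row) for row in hist["values"]]

if __name__ == "__main__":
    for bucket, count in enumerate(latency_distribution(24)):
        if count:
            print("latency bucket %2d: %d ops" % (bucket, count))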
As a first step, however, I would just like to have two simple KPIs that
describe an average/aggregated read and write latency derived from these
statistics.
Are there tools or other facilities that provide this in a simple way?
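If nothing like that exists, I would probably compute the KPIs myself from
the plain perf dump counters. A minimal sketch, assuming op_r_latency and
op_w_latency expose "avgcount" and "sum" (sum in seconds) under the "osd"
section:

#!/usr/bin/env python3
# Sketch: two simple KPIs per OSD - average read and write latency in ms
# since the OSD started, derived from "ceph daemon osd.<id> perf dump".
# Assumes op_r_latency/op_w_latency carry "avgcount" and "sum" (seconds).
import json
import subprocess

def avg_latency_ms(osd_id):
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.%d" % osd_id, "perf", "dump"])
    osd = json.loads(out)["osd"]
    kpis = {}
    for key, label in (("op_r_latency", "read"), ("op_w_latency", "write")):
        count = osd[key]["avgcount"]
        total = osd[key]["sum"]  # seconds
        kpis[label] = total / count * 1000.0 if count else 0.0
    return kpis

if __name__ == "__main__":
    print(avg_latency_ms(24))

Note that this would be a running average over the whole OSD uptime; for a
windowed value one would have to sample twice and diff the counters.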
Regards
Marc
On 11.07.2018 at 18:42, Paul Emmerich wrote:
Hi,
from experience: commit/apply_latency are not good metrics; the only good
thing about them is that they are really easy to track. We have found them
to be almost completely useless in the real world.
We track the op_*_latency metrics from perf dump and have found them to be
very helpful; they are more annoying to track due to their format. The
median OSD is a good indicator, and so is the slowest OSD.
Paul
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com