Thank you Paul. I'm not sure if these low values will be of any help:
osd  commit_latency(ms)  apply_latency(ms)
  0                   0                 0
  1                   0                 0
  5                   0                 0
  4                   0                 0
  3                   0                 0
  2                   0                 0
  6                   0                 0
  7                   3                 3
  8                   3                 3
  9                   3                 3
 10                   3                 3
 11                   0                 0
Still, some OSDs show higher values than others.
If I run a stress test on a VM, the values increase heavily, but I'm unsure whether this is just a peak caused by how the CRUSH map distributes the data, and therefore simply part of the game.
osd  commit_latency(ms)  apply_latency(ms)
  0                   8                 8
  1                  18                18
  5                   0                 0
  4                   0                 0
  3                   0                 0
  2                   7                 7
  6                   0                 0
  7                 100               100
  8                  44                44
  9                 199               199
 10                 512               512
 11                  15                15

osd  commit_latency(ms)  apply_latency(ms)
  0                  30                30
  1                   5                 5
  5                   0                 0
  4                   0                 0
  3                   0                 0
  2                 719               719
  6                   0                 0
  7                 150               150
  8                  22                22
  9                 110               110
 10                  94                94
 11                  24                24
Stefan
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
You can have a look at subop_latency in "ceph daemon osd.XX perf
dump"; it tells you how long an OSD took to reply to another OSD.
That's usually a good indicator of whether an OSD is dragging down others.
Or have a look at "ceph osd perf", which is basically disk
latency; simpler to acquire, but with less information.
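Latency counters in a perf dump are typically reported as a sum (in seconds) together with an avgcount (number of operations), so the average has to be computed from the pair. Below is a minimal sketch of that calculation; the JSON excerpt and its numbers are made up for illustration, and the exact nesting of subop_latency in the dump can vary between Ceph versions.

```python
import json

# Made-up excerpt in the general shape that "ceph daemon osd.XX perf dump"
# returns for latency counters: a sum in seconds plus an avgcount of ops.
sample = json.loads("""
{
  "osd": {
    "subop_latency": {
      "avgcount": 2000,
      "sum": 4.5
    }
  }
}
""")

def avg_latency_ms(counter):
    """Average latency in milliseconds from a sum/avgcount pair."""
    if counter["avgcount"] == 0:
        return 0.0  # no ops recorded yet, avoid division by zero
    return counter["sum"] / counter["avgcount"] * 1000.0

print(avg_latency_ms(sample["osd"]["subop_latency"]))  # 2.25 ms per subop
```

Comparing this average across all OSDs should make a straggler stand out, the same way the outliers show up in the "ceph osd perf" tables above.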
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com