On 06/01/2015 05:34 PM, Wang, Warren wrote:
Hi Mark, I don't suppose you logged latency during those tests, did you?
I'm one of the folks, as Bryan mentioned, who advocates turning these
values down. I'm okay with extending recovery time, especially when we are
talking about a default of 3x replication, with the trade-off of better
client response.
Hi Warren,
I have the per-second rados bench latency data: basically the last
latency and the running average for each sample. There are also
periodic updates with max and avg latency. I don't have the rados bench
summary information, though, since the test gets killed once the
cluster returns to a healthy state.
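If it helps, the per-second latency columns can be pulled out with
something like this (a rough sketch: it assumes the usual rados bench
per-second layout where the last two fields are last lat and avg lat,
and the file name is just a placeholder for whichever rados bench log
you're looking at):

$ # prints: sec, last lat, avg lat
$ awk '$1 ~ /^[0-9]+$/ && NF >= 8 {print $1, $(NF-1), $NF}' radosbench_output.log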
I uploaded the cbt archive data from these tests here (~314MB archive):
http://nhm.ceph.com/backfill_tests.tgz
Basically it's a big nested directory structure with the rados bench
output, collectl data, etc. BurnupiX is the server and BurnupiY is the
client.
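If anyone wants to poke at it, something like this should get the data
extracted and show where the collectl files live (the find is just one
way to locate them):

$ wget http://nhm.ceph.com/backfill_tests.tgz
$ tar xzf backfill_tests.tgz
$ find . -type d -name 'collectl.*' | head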
If you go into the collectl directories, you can use collectl to play
back CPU, disk, and other system metrics from the tests. For instance,
to see the disk data during the 4MB erasure-coded write tests with
"double" recovery settings, as detailed in the document I linked
earlier:
[nhm@burnupiY data]$ cd backfill_test-4rados-wip-pq-20140319-ec62/JBOD/xfs/double/00000000/radosbench/osd_ra-00004096/op_size-04194304/concurrent_ops-00000032/write/collectl.burnupiX/
[nhm@burnupiY collectl.burnupiX]$ collectl -sD -oT -p burnupiX-20140327-103040.raw.gz | head -n 10
# DISK STATISTICS (/sec)
#            <---------reads---------><---------writes---------><--------averages--------> Pct
#Time    Name KBytes Merged  IOs Size  KBytes Merged  IOs Size RWSize  QLen  Wait SvcTim  Util
10:30:42 sda       0      0    0    0       0      0    0    0      0     0     0      0     0
10:30:42 sdd     152      0   19    8   48145      1  151  319    284    35   180      5    85
10:30:42 sdc     680      0   85    8   24002      1   95  253    137     4     9      4    81
10:30:42 sdg     672      0   84    8   18308      0   78  235    117     1     9      5    84
10:30:42 sde     404      0   51    8   26761      0   87  308    196    27   249      5    77
10:30:42 sdi     816      0  102    8   12269      0   52  236     84     1     8      4    64
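For reference, the knobs being talked about when we say "default" vs
"double" recovery settings (and that Warren is suggesting turning down)
are things like the OSD recovery/backfill throttles. A rough
ceph.conf-style sketch, with purely illustrative values rather than the
exact settings used in these tests:

[osd]
    # fewer concurrent backfills/recovery ops = less impact on client I/O,
    # at the cost of longer recovery time (values here are only examples)
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1
    osd client op priority = 63

Lower numbers throttle recovery harder in favor of client latency;
higher numbers finish recovery sooner at the cost of client response.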