Last Monday I already ran a quick test with those two disks; it's probably
not that relevant, but I'm posting it anyway:
I created a two-disk Ceph 'cluster' on just the one local node and ran a
write benchmark against a test pool, as shown below.
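(For reference, the pool was created roughly like this; a sketch from
memory, where the pool name 'scbench' is taken from the run below and the
PG count of 100 is an assumption:)

root@ceph:~# ceph osd pool create scbench 100 100

The write run itself: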
root@ceph:~# rados bench -p scbench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_ceph_48906
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       107        91   363.961       364   0.0546517    0.155121
    2      16       206       190   379.948       396      0.1529    0.159227
    3      16       324       308   410.614       472   0.0972163    0.151421
    4      16       458       442   441.942       536   0.0484349    0.141799
    5      16       590       574   459.141       528   0.0445051    0.136922
    6      16       727       711   473.941       548    0.181066    0.134468
    7      16       856       840   479.941       516    0.187683    0.133199
    8      16       970       954   476.942       456    0.070753    0.132642
    9      16      1089      1073   476.831       476    0.193608    0.133754
   10      16      1214      1198   479.142       500   0.0999212    0.132529
Total time run: 10.097218
Total writes made: 1215
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 481.321
Stddev Bandwidth: 60.481
Max bandwidth (MB/sec): 548
Min bandwidth (MB/sec): 364
Average IOPS: 120
Stddev IOPS: 15
Max IOPS: 137
Min IOPS: 91
Average Latency(s): 0.132889
Stddev Latency(s): 0.0645579
Max latency(s): 0.336118
Min latency(s): 0.0117049
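Since the write pass used --no-cleanup, the benchmark objects are still in
the pool; if useful, the follow-up read passes and the cleanup would look
roughly like this (not run here, just a sketch):

root@ceph:~# rados bench -p scbench 10 seq
root@ceph:~# rados bench -p scbench 10 rand
root@ceph:~# rados -p scbench cleanup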
Do let me know what else you'd want me to do.
MJ