You haven't stated what replication size you are running. Keep in mind that with a replication factor of 3, you will be writing 6x the amount of data to disk compared with what the benchmark reports (3x replication, x2 for data + journal writes). You might actually be near the hardware maximums. What does iostat look like whilst you are running rados bench; are the disks getting maxed out?

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Marek Dohojda

7 servers in total, with a 20 Gig pipe between them for both reads and writes. The network itself has plenty of headroom; it is averaging 40 Mbit/s.

Rados Bench SAS 30 writes

Total time run:         30.591927
Total writes made:      386
Write size:             4194304
Bandwidth (MB/sec):     50.471
Stddev Bandwidth:       48.1052
Max bandwidth (MB/sec): 160
Min bandwidth (MB/sec): 0
Average Latency:        1.25908
Stddev Latency:         2.62018
Max latency:            21.2809
Min latency:            0.029227

Rados Bench SSD writes

Total time run:         20.425192
Total writes made:      1405
Write size:             4194304
Bandwidth (MB/sec):     275.150
Stddev Bandwidth:       122.565
Max bandwidth (MB/sec): 576
Min bandwidth (MB/sec): 0
Average Latency:        0.231803
Stddev Latency:         0.190978
Max latency:            0.981022
Min latency:            0.0265421

As you can see, the SSDs are better, but not by as much as I would expect them to be.

On Tue, Nov 24, 2015 at 9:10 AM, Alan Johnson <alanj@xxxxxxxxxxxxxx> wrote:
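
For reference, a minimal sketch of how to answer the iostat question above, run from a client and an OSD node side by side (the pool name is just a placeholder for whichever pool you are testing):

    # client: 30-second write test with the default 16 concurrent 4MB objects
    rados bench -p <pool> 30 write --no-cleanup

    # each OSD node, at the same time: extended per-device stats every 2 seconds
    iostat -xm 2

If %util on the SAS drives sits near 100% (or await climbs well beyond the drives' normal service time) while the bench is running, the disks rather than the network are the bottleneck.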
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com