Or separate the journals, as this will bring the workload on the spinners down to 3X rather than 6X.
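Roughly along these lines (an untested sketch; the device names are placeholders, with one journal partition on the SSD per spinning OSD):

    # data on the spinner, journal on a pre-created SSD partition
    ceph-disk prepare /dev/sdb /dev/sdf1

or point "osd journal" in ceph.conf at an SSD partition before the OSD is created.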
From: Marek Dohojda [mailto:mdohojda@xxxxxxxxxxxxxxxxxxx]

Crad, I think you are 100% correct:

    rrqm/s  wrqm/s    r/s      w/s   rkB/s      wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm   %util
      0.00  369.00  33.00  1405.00  132.00  135656.00    188.86      5.61   4.02    21.94     3.60   0.70  100.00

I was kinda wondering if this may be the case, which is why I was wondering whether I should be doing much in the way of troubleshooting. So basically what you are saying is that I need to wait for the new version?

Thank you very much everybody!

On Tue, Nov 24, 2015 at 9:35 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:

You haven't stated what replication size you are running. Keep in mind that with a replication factor of 3, you will be writing 6x the amount of data down to the disks compared with what the benchmark reports (3x replication x 2 for the data + journal write).
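If it is 3x, a rough back-of-the-envelope using the SSD pool figure quoted below (co-located journals assumed):

    275 MB/s client writes (rados bench)
    x 3 replicas           ->  ~825 MB/s written across the cluster
    x 2 (journal + data)   ->  ~1650 MB/s of raw writes hitting the disks
    / 7 hosts              ->  ~235 MB/s of raw writes per host

so each disk sees far more traffic than the client-side number suggests.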
You might actually be near the hardware maximums. What does iostat look like whilst you are running rados bench? Are the disks getting maxed out?
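Something along these lines (a rough sketch; the pool name is just an example):

    # on each OSD node:
    iostat -x 1                               # watch %util, await and wkB/s per device
    # from a client, at the same time:
    rados bench -p rbd 30 write --no-cleanup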
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Marek Dohojda
7 total servers, 20 GIG pipe between servers, both reads and writes. The network itself has plenty of pipe left; it is averaging 40 Mbit/s.

Rados Bench SAS 30 writes:

    Total time run:         30.591927
    Total writes made:      386
    Write size:             4194304
    Bandwidth (MB/sec):     50.471
    Stddev Bandwidth:       48.1052
    Max bandwidth (MB/sec): 160
    Min bandwidth (MB/sec): 0
    Average Latency:        1.25908
    Stddev Latency:         2.62018
    Max latency:            21.2809
    Min latency:            0.029227

Rados Bench SSD writes:

    Total time run:         20.425192
    Total writes made:      1405
    Write size:             4194304
    Bandwidth (MB/sec):     275.150
    Stddev Bandwidth:       122.565
    Max bandwidth (MB/sec): 576
    Min bandwidth (MB/sec): 0
    Average Latency:        0.231803
    Stddev Latency:         0.190978
    Max latency:            0.981022
    Min latency:            0.0265421

As you can see SSD is better, but not as much as I would expect SSD to be.
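For reference, output of this shape comes from write benches along these lines (the pool names here are just placeholders):

    rados bench -p sas-pool 30 write --no-cleanup
    rados bench -p ssd-pool 30 write --no-cleanup
    # and "rados bench -p <pool> 30 seq" afterwards for the read side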
On Tue, Nov 24, 2015 at 9:10 AM, Alan Johnson <alanj@xxxxxxxxxxxxxx> wrote: