Re: Deadly slow Ceph cluster revisited

Disclaimer: I'm relatively new to ceph, and haven't moved into
production with it.

Did you run your bench for 30 seconds?
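If not, the numbers won't be directly comparable. The command form I'm assuming here is just the stock bench subcommand ("rbd" is only a placeholder for whichever pool you test against):

  rados bench -p rbd 30 write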

For reference, my 30-second bench from a VM bridged to a 10Gig card,
against a cluster of 90x 4TB drives, is:

Total time run:         30.766596
Total writes made:      1979
Write size:             4194304
Bandwidth (MB/sec):     257.292

Stddev Bandwidth:       106.78
Max bandwidth (MB/sec): 420
Min bandwidth (MB/sec): 0
Average Latency:        0.248238
Stddev Latency:         0.723444
Max latency:            10.5275
Min latency:            0.0346015

Latency seems to be a huge factor if your 30-second test took 52 seconds to complete.
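If you want to see where that latency lives, the stock admin commands are a reasonable place to start; "ceph osd perf" lists per-OSD commit/apply latency, and "ceph health detail" will call out any OSDs with blocked/slow requests:

  ceph osd perf
  ceph health detail

A single OSD with latency far above its peers would point at a sick disk, controller, or NIC even if iostat looks fine at the moment you check.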

What kind of 10Gig NICs are you using? I have Mellanox ConnectX-3 cards,
and one node was running an older driver version. I started to see the
OSDs flapping (in..out..in..) and the "incorrectly marked out from..."
messages mentioned by Quentin, along with poor performance. Installing
the newest version of the Mellanox driver got everything running well again.
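If you want to rule that out, comparing driver and firmware versions across nodes is quick; eth2 below is just a stand-in for whatever your 10Gig interface is called, and mlx4_en is the in-tree driver for ConnectX-3:

  ethtool -i eth2
  modinfo mlx4_en | grep -i '^version'

In my case the one node on the older driver was enough to cause the flapping.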

On Fri, Jul 17, 2015 at 7:55 AM, J David <j.david.lists@xxxxxxxxx> wrote:
> On Fri, Jul 17, 2015 at 10:21 AM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
>> rados -p <pool> bench 30 write
>>
>> just to see how it handles 4MB object writes.
>
> Here's that, from the VM host:
>
>  Total time run:         52.062639
> Total writes made:      66
> Write size:             4194304
> Bandwidth (MB/sec):     5.071
>
> Stddev Bandwidth:       11.6312
> Max bandwidth (MB/sec): 80
> Min bandwidth (MB/sec): 0
> Average Latency:        12.436
> Stddev Latency:         13.6272
> Max latency:            51.6924
> Min latency:            0.073353
>
> Unfortunately I don't know much about how to parse this (other than
> that 5 MB/sec writes match up with our best-case performance in the VM
> guest).
>
>> If rados bench is
>> also terribly slow, then you might want to start looking for evidence of IO
>> getting hung up on a specific disk or node.
>
> Thus far, no evidence of that has presented itself.  iostat looks good
> on every drive and the nodes are all equally loaded.
>
> Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


