Re: Deadly slow Ceph cluster revisited

On Fri, Jul 17, 2015 at 10:21 AM, Mark Nelson <mnelson@xxxxxxxxxx> wrote:
> rados bench -p <pool> 30 write
>
> just to see how it handles 4MB object writes.

Here's that, from the VM host:

Total time run:         52.062639
Total writes made:      66
Write size:             4194304
Bandwidth (MB/sec):     5.071

Stddev Bandwidth:       11.6312
Max bandwidth (MB/sec): 80
Min bandwidth (MB/sec): 0
Average Latency:        12.436
Stddev Latency:         13.6272
Max latency:            51.6924
Min latency:            0.073353

Unfortunately I don't know much about how to parse this, other than
that ~5 MB/s writes match up with our best-case performance in the
VM guest.
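For what it's worth, the summary line is internally consistent: 66
writes of 4 MiB each over ~52 s works out to the reported bandwidth.
A quick sanity check (just arithmetic on the numbers above):

```shell
# 66 writes x 4 MiB each, spread over 52.062639 s
awk 'BEGIN { printf "%.3f MB/s\n", 66 * 4 / 52.062639 }'
# prints 5.071 MB/s -- matches the "Bandwidth (MB/sec)" line
```

The min bandwidth of 0 and the ~13 s latency stddev are the more
telling numbers here: some 4 MB writes are completing quickly while
others stall for tens of seconds.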

> If rados bench is
> also terribly slow, then you might want to start looking for evidence of IO
> getting hung up on a specific disk or node.

Thus far, no evidence of that has presented itself: iostat looks good
on every drive, and the nodes are all equally loaded.
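One more place worth checking, beyond per-drive iostat, is Ceph's own
per-OSD latency view. A sketch of what that might look like (commands
run from a monitor or admin node; exact column names vary by release):

```shell
# Per-OSD commit/apply latency as seen by Ceph itself; a single
# outlier OSD here often points at a failing or overloaded disk
# that per-node iostat averages can hide.
ceph osd perf

# Extended per-device stats on each OSD node, sampled every 5 s;
# watch await and %util for one drive diverging from its peers.
iostat -x 5
```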

Thanks!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
