rados benchmark question

Hi all,


I run a 3-node cluster connected by a 10 Gbit network, with each node running 2 OSDs.


The disks are 4 TB Seagate Constellation ST4000NM0033-9ZM drives (XFS, with the journal on the same disk).


# ceph tell osd.0 bench
{ "bytes_written": 1073741824,
  "blocksize": 4194304,
  "bytes_per_sec": "56494242.000000"}


So a single OSD can write at up to OSD_BW ≈ 55 MB/s.
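For reference, the bytes_per_sec figure above converts as follows (a quick sketch; ~55 MB/s is simply a round number between the decimal and binary readings):

```python
# Convert bytes_per_sec from `ceph tell osd.0 bench` into human units.
bytes_per_sec = 56494242.0           # value reported above

mb_per_sec = bytes_per_sec / 10**6   # decimal megabytes
mib_per_sec = bytes_per_sec / 2**20  # binary mebibytes

print(f"{mb_per_sec:.1f} MB/s, {mib_per_sec:.1f} MiB/s")
# → 56.5 MB/s, 53.9 MiB/s
```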


I have 6 OSDs, so I would expect an overall write bandwidth of:


6 × OSD_BW = 6 × 55 MB/s = 330 MB/s
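Spelled out, this back-of-the-envelope estimate assumes all 6 OSDs stream writes in parallel with no replication, journal, or network overhead:

```python
# Naive aggregate estimate: assumes perfect scaling across OSDs and no
# replication overhead (the test pool below uses size=1).
osd_bw = 55         # MB/s per OSD, from `ceph tell osd.0 bench`
num_osds = 6

expected_bw = num_osds * osd_bw
print(expected_bw)  # → 330
```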


To verify that, I created a pool with size=1 and ran:


# rados -p testpool bench 300 write --no-cleanup
Total time run:         300.905982
Total writes made:      13487
Write size:             4194304
Bandwidth (MB/sec):     179.285
Stddev Bandwidth:       106.954
Max bandwidth (MB/sec): 400
Min bandwidth (MB/sec): 0
Average Latency:        0.356925
Stddev Latency:         0.696282
Max latency:            8.5891
Min latency:            0.025648
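As a sanity check, the Bandwidth line is internally consistent with the other figures in the output: total writes × write size / total time, where "MB" works out to be 2^20 bytes.

```python
# Reconstruct the Bandwidth line from the other numbers in the output:
# (writes_made * write_size) / total_time, with "MB" meaning 2**20 bytes.
total_time = 300.905982   # Total time run (s)
writes_made = 13487       # Total writes made
write_size = 4194304      # Write size (bytes, i.e. 4 MiB objects)

bandwidth = writes_made * write_size / 2**20 / total_time
print(f"{bandwidth:.3f}")  # → 179.285
```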


This is much lower than the expected bandwidth (179 MB/s < 330 MB/s).

 

Is this normal? If so, what is the reason for that?


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
