Re: RBD vs RADOS benchmark performance

On 05/10/2013 07:20 PM, Greg wrote:
On 05/11/2013 12:56 AM, Mark Nelson wrote:
On 05/10/2013 12:16 PM, Greg wrote:
Hello folks,

I'm in the process of testing CEPH and RBD. I have set up a small
cluster of hosts, each running a MON and an OSD with both journal and
data on the same SSD (OK, this is stupid, but it keeps it simple to verify
that the disks are not the bottleneck for a single client). All nodes are
connected on a 1Gb network (no dedicated network for the OSDs, shame on me :).

Summary: RBD performance is poor compared to the rados bench results.

A 5-second seq read benchmark shows something like this:
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16        39        23   91.9586        92  0.966117  0.431249
     2      16        64        48   95.9602       100  0.513435   0.53849
     3      16        90        74   98.6317       104   0.25631   0.55494
     4      11        95        84   83.9735        40   1.80038   0.58712
 Total time run:        4.165747
Total reads made:     95
Read size:            4194304
Bandwidth (MB/sec):    91.220

Average Latency:       0.678901
Max latency:           1.80038
Min latency:           0.104719

91 MB/s read performance, quite good!
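(The exact bench invocation isn't shown above; something along these lines would produce output of that shape. The pool name "rbd" and the 5-second duration are assumptions, and a seq read test needs objects left behind by a prior write run with --no-cleanup:)

    rados bench -p rbd 5 write --no-cleanup    # write 4MB objects and keep them in the pool
    rados bench -p rbd 5 seq                   # sequential read test, output as above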

Now the RBD performance:
root@client:~# dd if=/dev/rbd1 of=/dev/null bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 13.0568 s, 32.1 MB/s

There is a 3x performance gap (same for writes: ~60 MB/s from the
benchmark vs ~20 MB/s with dd on the block device).

The network is OK, and the CPU is also OK on all OSDs.
CEPH is Bobtail 0.56.4; Linux is 3.8.1 ARM (vanilla release plus some
patches for the SoC being used).
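(The mail doesn't say how the network and CPU were checked; a typical quick sanity check on a 1GbE link, run while the dd is in progress, would be something like the following, with <osd-host> as a placeholder:)

    iperf -s                        # on an OSD node
    iperf -c <osd-host> -t 10       # from the client; expect roughly 940 Mbit/s on 1GbE
    iostat -x 1                     # on each OSD node, watch device %util and CPU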

Can you give me a starting point for digging into this?

Hi Greg,

First things first, are you doing kernel rbd or qemu/kvm?  If
you are doing qemu/kvm, make sure you are using virtio disks.  This
can have a pretty big performance impact. Next, are you using RBD
cache? With 0.56.4 there are some performance issues with large
sequential writes if cache is on, but it does provide benefit for
small sequential writes.  In general RBD cache behaviour has improved
with Cuttlefish.
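(For the qemu/kvm case, both of those are config-level changes; a minimal sketch, with the pool and image names as placeholders and assuming librbd via qemu — the kernel client ignores these settings:)

    # ceph.conf on the client (librbd only)
    [client]
        rbd cache = true

    # qemu disk line: virtio bus, RBD-backed image
    -drive format=rbd,file=rbd:rbd/myimage,cache=writeback,if=virtio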

Beyond that, are the pools being targeted by RBD and rados bench set up
the same way?  Same number of PGs?  Same replication?
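(Both are easy to check from any node with an admin key; the pool name "rbd" below is a placeholder, and the exact wording of the osd dump output varies by release:)

    ceph osd pool get rbd pg_num     # placement group count for the pool
    ceph osd dump | grep pool        # pool list, including replication size
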
Mark, thanks for your prompt reply.

I'm doing kernel RBD, so I have not enabled the cache (default
setting?).
Sorry, I forgot to mention that the pool used for the bench and for RBD is the same.

Interesting. Does your rados bench performance change if you run a longer test? So far I've been seeing about a 20-30% performance overhead for kernel RBD, but 3x is excessive! It might be worth watching the underlying IO sizes going to the OSDs in each case with something like "collectl -sD -oT" to see if there are any significant differences.
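(Concretely, something like the following; the 60-second duration and the pool name are arbitrary:)

    rados bench -p rbd 60 seq        # longer sequential read run
    collectl -sD -oT                 # per-disk I/O detail with timestamps, on each OSD node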


Regards,

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




