RBD bench read performance vs rados bench

rbd bench --io-type read 2tb/test --io-size 4M
bench  type read io_size 4194304 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1        23     36.13  151560621.45
    2        43     28.61  119988170.65
    3        54     23.02  96555723.10
    4        76     22.35  93748581.57
    5        86     20.31  85202745.83
    6       102     15.73  65987931.41
    7       113     13.72  57564529.85
    8       115     12.13  50895409.80
    9       138     12.62  52950797.01
   10       144     11.37  47688526.04
   11       154      9.59  40232628.73
   12       161      9.51  39882023.45
   13       167     10.30  43195718.39
   14       172      6.57  27570654.19
   15       181      7.21  30224357.89
   16       186      7.08  29692318.46
   17       192      6.31  26457629.12
   18       197      6.03  25286212.14
   19       202      6.22  26097739.41
   20       210      5.82  24406336.22
   21       217      6.05  25354976.24
   22       224      6.15  25785754.73
   23       231      6.84  28684892.86
   24       237      6.86  28760546.77
elapsed:    26  ops:      256  ops/sec:     9.58  bytes/sec: 40195235.45
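As a side note, the rbd bench numbers are internally consistent: the BYTES/SEC column is just OPS/SEC multiplied by the 4 MiB io size, so the summary line really does correspond to roughly 40 MB/s. A quick sanity check, using values copied from the output above:

```python
# Sanity-check the rbd bench summary line: bytes/sec should equal
# ops/sec multiplied by the io size (4 MiB = 4194304 bytes).
IO_SIZE = 4 * 1024 * 1024  # --io-size 4M

ops_per_sec = 9.58                      # from the "elapsed" summary line
reported_bytes_per_sec = 40195235.45    # same line

derived = ops_per_sec * IO_SIZE
# The two agree to well under one op's worth of bytes; rounding in the
# printed ops/sec accounts for the small difference.
assert abs(derived - reported_bytes_per_sec) < IO_SIZE
print(f"derived: {derived/1e6:.1f} MB/s, reported: {reported_bytes_per_sec/1e6:.1f} MB/s")
```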


rados -p 2tb bench 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        58        42   167.965       168    0.164905     0.25338
    2      16        97        81   161.969       156   0.0317369    0.315989
    3      16       135       119    158.64       152    0.133847    0.349598
    4      16       180       164   163.975       180   0.0511805    0.354751
    5      16       229       213   170.375       196    0.245727    0.342972
    6      16       276       260   173.268       188    0.032029    0.344167
    7      16       326       310   177.082       200    0.489663    0.336684
    8      16       376       360   179.944       200   0.0458536    0.330955
    9      16       422       406   180.391       184    0.247551    0.336771
   10      16       472       456   182.349       200     1.28901    0.334343
Total time run:       10.522668
Total reads made:     473
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   179.802
Average IOPS:         44
Stddev IOPS:          4
Max IOPS:             50
Min IOPS:             38
Average Latency(s):   0.350895
Max latency(s):       1.61122
Min latency(s):       0.0317369

rados bench -p 2tb 10 rand
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      15       127       112   447.903       448    0.109742    0.104891
    2      16       240       224   447.897       448    0.334096    0.131679
    3      15       327       312   415.918       352    0.109333    0.146387
    4      15       479       464   463.913       608    0.179371    0.133533
    5      15       640       625   499.912       644   0.0589528    0.124145
    6      15       808       793   528.576       672    0.148173    0.117483
    7      16       975       959   547.909       664   0.0119322    0.112975
    8      15      1129      1114    556.91       620     0.13646    0.111279
    9      15      1294      1279   568.353       660   0.0820129    0.109182
   10      15      1456      1441   576.307       648     0.11007    0.107887
Total time run:       10.106389
Total reads made:     1457
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   576.665
Average IOPS:         144
Stddev IOPS:          28
Max IOPS:             168
Min IOPS:             88
Average Latency(s):   0.108051
Max latency(s):       0.998451
Min latency(s):       0.00858933


Total time run:       3.478728
Total reads made:     582
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   669.21
Average IOPS:         167
Stddev IOPS:          6
Max IOPS:             176
Min IOPS:             163
Average Latency(s):   0.0919296
Max latency(s):       0.297326
Min latency(s):       0.0090395

For context: I have a 3-node cluster with replica 3 and min_size 2.
There are only 3 OSDs in the pool, one on each node (set up for this benchmark).
All nodes are connected through 4x10 Gbps links (2 for the public network and 2 for
the private network).
There are no other clients running.
The configuration is the default.
The image is 20 GB, the disks are 2 TB, and there are 125 PGs in the pool.
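To put the numbers in perspective, here is a back-of-the-envelope calculation from the figures above (assuming the default 4 MiB RBD object size, which matches the object size both benchmark outputs report):

```python
# Rough capacity math for the setup described above. All figures come
# from the post; the 4 MiB object size is the RBD default and matches
# what rados bench reports.
GiB = 1024**3
MiB = 1024**2

image_size = 20 * GiB
object_size = 4 * MiB
num_objects = image_size // object_size
print(f"objects backing the image: {num_objects}")   # 5120

# With replica 3 and only 3 OSDs, every OSD holds a copy of every
# object, but reads are served by the primary OSD of each PG, so
# reads still fan out across all 3 OSDs.
# A single 10 Gbps link is ~1250 MB/s, far above any result here,
# so the network is unlikely to be the bottleneck.
link_bytes_per_sec = 10e9 / 8
print(f"per-link ceiling: {link_bytes_per_sec/1e6:.0f} MB/s")
```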


I wonder why there is such a huge difference between the RBD sequential benchmark
(4M io size, 16 threads) and the rados sequential benchmark with the same object
size. The rados result makes sense, since it can read from multiple OSDs
simultaneously, but the RBD read performance is really bad.
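One thing that might be worth testing (an assumption on my part, not something I have verified): librbd disables its readahead by default after the first 50 MB read from an image, which could hurt a long sequential read like this benchmark. Something like the following in ceph.conf on the client side would keep readahead active for the whole run (values for illustration only):

```ini
[client]
# keep librbd readahead enabled past the default 50 MB cutoff
rbd readahead disable after bytes = 0
# allow a larger readahead window (the default is 512 KB)
rbd readahead max bytes = 4194304
```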

On writes, both rbd and rados show similar speeds.

Any advice?

One other question: why are random reads faster than sequential reads?

Thanks a lot.
Jorge Pinilla López



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


