Re: RBD bench read performance vs rados bench

rbd bench --io-type read 2tb/test --io-size 4M
bench  type read io_size 4194304 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1         8     22.96  96306849.22
    2        12     11.74  49250368.05
    3        14      9.85  41294366.71
    4        20      8.99  37697084.35
    5        24      7.92  33218488.44
    6        29      4.18  17544268.15
    7        35      4.79  20108554.48
    8        38      4.82  20223375.77
    9        44      4.78  20028118.00
   10        50      4.98  20901154.36
   11        56      4.70  19714997.57
   12        59      4.96  20783869.38
   13        67      5.79  24280067.45
   14        78      6.71  28133962.20
   15        86      7.51  31512326.28
   16        98      9.92  41613289.49
   17       107      8.87  37189698.96
   18       113      9.25  38787843.71
   19       118      8.02  33630441.91
   20       127      8.08  33879605.71
   21       133      7.02  29448630.29
   22       139      6.80  28522695.29
   23       146      6.46  27102585.08
   24       150      6.50  27275014.72
   25       157      6.01  25205422.98
   26       164      5.73  24026089.08
   27       166      5.13  21526120.39
   28       173      5.18  21711129.16
   29       185      6.72  28192258.47
   30       191      6.92  29018511.32
   31       201      7.95  33342772.10
   32       207      8.76  36732760.58
   33       213      8.54  35823482.59
   34       218      6.89  28883406.39
   35       225      5.87  24627670.76
   36       226      5.03  21078626.70
   37       235      5.22  21894384.04
   38       237      4.12  17279968.87
   39       238      4.00  16760880.87
elapsed:    42  ops:      256  ops/sec:     6.09  bytes/sec: 25539951.50

Without the RBD cache, performance is even worse, roughly by half.
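
For reference, the no-cache run was the same invocation with only the cache
override added (as suggested further down), something along these lines:

rbd bench --io-type read --io-size 4M --rbd-cache=false 2tb/test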

So are only random reads distributed across the OSDs, while sequential reads
are sent to just one OSD?
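
If it helps narrow this down, I can run both patterns back to back against the
same image. Assuming --io-pattern is the right switch on this rbd version,
something like:

rbd bench --io-type read --io-pattern seq --io-size 4M --rbd-cache=false 2tb/test
rbd bench --io-type read --io-pattern rand --io-size 4M --rbd-cache=false 2tb/test

I could also try raising the client readahead for the sequential case; I
believe the relevant options are rbd_readahead_max_bytes and
rbd_readahead_disable_after_bytes, but treat those names as my guess from the
docs rather than something I have verified.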


On Tuesday, 15 May 2018 at 15:42:44 (CEST), you wrote:
> On Tue, May 15, 2018 at 6:23 AM, Jorge Pinilla López <jorpilo@xxxxxxxxx> wrote:
> > rbd bench --io-type read 2tb/test --io-size 4M
> > bench  type read io_size 4194304 io_threads 16 bytes 1073741824 pattern sequential
> > 
> >   SEC       OPS   OPS/SEC   BYTES/SEC
> >     1        23     36.13  151560621.45
> >     2        43     28.61  119988170.65
> >     3        54     23.02  96555723.10
> >     4        76     22.35  93748581.57
> >     5        86     20.31  85202745.83
> >     6       102     15.73  65987931.41
> >     7       113     13.72  57564529.85
> >     8       115     12.13  50895409.80
> >     9       138     12.62  52950797.01
> >    10       144     11.37  47688526.04
> >    11       154      9.59  40232628.73
> >    12       161      9.51  39882023.45
> >    13       167     10.30  43195718.39
> >    14       172      6.57  27570654.19
> >    15       181      7.21  30224357.89
> >    16       186      7.08  29692318.46
> >    17       192      6.31  26457629.12
> >    18       197      6.03  25286212.14
> >    19       202      6.22  26097739.41
> >    20       210      5.82  24406336.22
> >    21       217      6.05  25354976.24
> >    22       224      6.15  25785754.73
> >    23       231      6.84  28684892.86
> >    24       237      6.86  28760546.77
> > 
> > elapsed:    26  ops:      256  ops/sec:     9.58  bytes/sec: 40195235.45
> 
> What are your results if you re-run with the in-memory cache disabled
> (i.e. 'rbd bench --rbd-cache=false ....')?
> 
> > rados -p 2tb bench 10 seq
> > hints = 1
> > 
> >   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
> >     0       0         0         0         0         0           -           0
> >     1      16        58        42   167.965       168    0.164905     0.25338
> >     2      16        97        81   161.969       156   0.0317369    0.315989
> >     3      16       135       119    158.64       152    0.133847    0.349598
> >     4      16       180       164   163.975       180   0.0511805    0.354751
> >     5      16       229       213   170.375       196    0.245727    0.342972
> >     6      16       276       260   173.268       188    0.032029    0.344167
> >     7      16       326       310   177.082       200    0.489663    0.336684
> >     8      16       376       360   179.944       200   0.0458536    0.330955
> >     9      16       422       406   180.391       184    0.247551    0.336771
> >    10      16       472       456   182.349       200     1.28901    0.334343
> > 
> > Total time run:       10.522668
> > Total reads made:     473
> > Read size:            4194304
> > Object size:          4194304
> > Bandwidth (MB/sec):   179.802
> > Average IOPS:         44
> > Stddev IOPS:          4
> > Max IOPS:             50
> > Min IOPS:             38
> > Average Latency(s):   0.350895
> > Max latency(s):       1.61122
> > Min latency(s):       0.0317369
> > 
> > rados bench -p 2tb 10 rand
> > hints = 1
> > 
> >   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
> >     0       0         0         0         0         0           -           0
> >     1      15       127       112   447.903       448    0.109742    0.104891
> >     2      16       240       224   447.897       448    0.334096    0.131679
> >     3      15       327       312   415.918       352    0.109333    0.146387
> >     4      15       479       464   463.913       608    0.179371    0.133533
> >     5      15       640       625   499.912       644   0.0589528    0.124145
> >     6      15       808       793   528.576       672    0.148173    0.117483
> >     7      16       975       959   547.909       664   0.0119322    0.112975
> >     8      15      1129      1114    556.91       620     0.13646    0.111279
> >     9      15      1294      1279   568.353       660   0.0820129    0.109182
> >    10      15      1456      1441   576.307       648     0.11007    0.107887
> > 
> > Total time run:       10.106389
> > Total reads made:     1457
> > Read size:            4194304
> > Object size:          4194304
> > Bandwidth (MB/sec):   576.665
> > Average IOPS:         144
> > Stddev IOPS:          28
> > Max IOPS:             168
> > Min IOPS:             88
> > Average Latency(s):   0.108051
> > Max latency(s):       0.998451
> > Min latency(s):       0.00858933
> > 
> > 
> > Total time run:       3.478728
> > Total reads made:     582
> > Read size:            4194304
> > Object size:          4194304
> > Bandwidth (MB/sec):   669.21
> > Average IOPS:         167
> > Stddev IOPS:          6
> > Max IOPS:             176
> > Min IOPS:             163
> > Average Latency(s):   0.0919296
> > Max latency(s):       0.297326
> > Min latency(s):       0.0090395
> > 
> > Just for context: I have a 3-node cluster, replica 3, min size 2.
> > There are only 3 OSDs in the pool, one on each node (for the benchmark).
> > All nodes are connected through 4x10Gbps links (2 for the public network
> > and 2 for the private network).
> > There are no other clients running.
> > The configuration is the default.
> > The image is 20GB, the disks are 2TB, and there are 125 PGs in the pool.
> > 
> > 
> > I wonder why there is such a huge difference between the RBD sequential
> > benchmark (4M I/O size, 16 threads) and the rados sequential benchmark with
> > the same object size. The rados numbers make sense, since it can read from
> > multiple OSDs simultaneously, but the RBD read performance is really poor.
> > 
> > On writes, both rbd and rados reach similar speeds.
> > 
> > Any advice?
> > 
> > Another question: why are random reads faster than sequential reads?
> 
> You are spreading out the work to multiple OSDs.
> 
> > Thanks a lot.
> > Jorge Pinilla López
> > 
> > 
> > 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



