Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

@Marc: thanks a lot, your results have been very helpful in understanding this.

@Mark: mainly HDDs, not even one SSD, so yes, pretty slow.

On Wed, Feb 10, 2021 at 9:22 PM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

> > Some more questions please:
> > How many OSDs have you been using in your second email tests for 1gbit [1]
> > and 10gbit [2] ethernet? Or to be precise, what is your cluster for
>
> When I was testing with 1gbit ethernet I had 11 OSDs on 4 servers, and
> that already saturated the 1Gbit links. Now, on the 10gbit ethernet DAC,
> it is around 30 HDDs. Keep in mind that the default rados bench uses
> 16 threads; see the invocations sketched below.
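>
> For reference, the concurrency is just the -t option; a minimal sketch of
> the invocations (pool name "rbd" is the one from my cluster, adjust to yours):
>
>   # default is 16 concurrent 4MiB writes; -t 1 forces one op in flight
>   rados bench -t 1 -p rbd 10 write
>   # keep the benchmark objects so you can run a read test afterwards
>   rados bench -t 16 -p rbd 10 write --no-cleanup
>   rados bench -t 16 -p rbd 10 seq    # sequential reads of those objects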
>
> If I run it with 1 thread I get something like yours [1]; if I do the
> same on the SSD pool, I get this [2]. If I also remove the 3x replication
> on the SSD pool, this [3]; and with 16 threads on the SSD pool with 3x
> replication still on, this [4].
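>
> If you want to reproduce the no-replication test [3]: the replica count is
> just a pool setting. A sketch, for a throwaway test pool only, since size 1
> means zero redundancy (the crush rule pinning it to the SSDs is omitted,
> and the pg numbers are only examples):
>
>   ceph osd pool create rbd.ssd.r1 32 32
>   ceph osd pool set rbd.ssd.r1 size 1      # single copy, no replication;
>                                            # newer releases also require
>                                            # --yes-i-really-mean-it here
>   ceph osd pool set rbd.ssd.r1 min_size 1
>   rados bench -t 1 -p rbd.ssd.r1 10 write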
>
> As a side note, I have not fully tuned my cluster for performance: the
> processors are still doing frequency/power-state switching, and I have
> slower SATA HDDs combined with faster SAS drives. But this fits my use
> case.
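>
> Checking the governor is easy if you want to rule that out on your side;
> a sketch (cpupower ships with the kernel tools, the package name varies
> per distro):
>
>   cpupower frequency-info                 # show current governor/limits
>   cpupower frequency-set -g performance   # pin all cores to max frequency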
>
> What I have should not be of interest to you. You have to determine what
> you need and describe your use case; there are quite a few good people
> here who can advise you on how to realize that, or tell you it is not
> possible with Ceph ;)
>
>
> [1]
> [@~]# rados bench -t 1 -p rbd 10 write
> hints = 1
> Maintaining 1 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3768767
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1       1         5         4   15.9973        16    0.278477    0.240159
>     2       1        10         9   17.9973        20    0.162663    0.219858
>     3       1        17        16     21.33        28     0.21535    0.181435
>     4       1        26        25   24.9965        36    0.154064    0.158931
>     5       1        33        32   25.5966        28    0.119773    0.153031
>     6       1        42        41   27.3295        36    0.064895    0.144242
>     7       1        50        49   27.9962        32    0.192591    0.142036
>     8       1        59        58   28.9961        36    0.108623    0.137699
>     9       1        69        68   30.2183        40   0.0684741    0.132143
>    10       1        78        77    30.796        36    0.118075     0.12872
> Total time run:         10.1903
> Total writes made:      79
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     31.01
> Stddev Bandwidth:       7.78603
> Max bandwidth (MB/sec): 40
> Min bandwidth (MB/sec): 16
> Average IOPS:           7
> Stddev IOPS:            1.94651
> Max IOPS:               10
> Min IOPS:               4
> Average Latency(s):     0.128988
> Stddev Latency(s):      0.0571245
> Max latency(s):         0.385165
> Min latency(s):         0.0608502
> Cleaning up (deleting benchmark objects)
> Removed 79 objects
> Clean up completed and total clean up time :2.49933
>
> [2]
> [@~]# rados bench -t 1 -p rbd.ssd 10 write
> hints = 1
> Maintaining 1 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3769249
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1       1        39        38   151.992       152   0.0318137   0.0258572
>     2       1        80        79   157.985       164   0.0239471   0.0250284
>     3       1       122       121   161.315       168   0.0240444   0.0247604
>     4       1       163       162   161.981       164   0.0270316    0.024625
>     5       1       204       203    162.38       164   0.0235799   0.0245714
>     6       1       246       245   163.313       168   0.0296698   0.0244574
>     7       1       286       285   162.836       160   0.0232353   0.0245383
>     8       1       326       325   162.479       160   0.0236261   0.0245476
>     9       1       367       366   162.646       164   0.0249223   0.0245132
>    10       1       408       407   162.779       164   0.0229952   0.0245034
> Total time run:         10.0277
> Total writes made:      409
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     163.149
> Stddev Bandwidth:       4.63801
> Max bandwidth (MB/sec): 168
> Min bandwidth (MB/sec): 152
> Average IOPS:           40
> Stddev IOPS:            1.1595
> Max IOPS:               42
> Min IOPS:               38
> Average Latency(s):     0.0245153
> Stddev Latency(s):      0.00212425
> Max latency(s):         0.0343171
> Min latency(s):         0.0202639
> Cleaning up (deleting benchmark objects)
> Removed 409 objects
> Clean up completed and total clean up time :0.521216
>
> [3]
> [@~]# rados bench -t 1 -p rbd.ssd.r1 10 write
> hints = 1
> Maintaining 1 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3769477
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1       1        62        61    243.98       244   0.0149218   0.0162273
>     2       1       130       129   257.972       272   0.0144574   0.0154287
>     3       1       198       197   262.639       272   0.0144917   0.0151589
>     4       1       266       265   264.973       272   0.0156794   0.0150565
>     5       1       333       332   265.572       268   0.0149153   0.0150315
>     6       1       401       400   266.636       272   0.0143737    0.014966
>     7       1       469       468   267.399       272   0.0155345   0.0149459
>     8       1       536       535   267.471       268   0.0171765   0.0149397
>     9       1       604       603   267.971       272   0.0168833   0.0149184
>    10       1       672       671    268.37       272   0.0145986   0.0148998
> Total time run:         10.0309
> Total writes made:      673
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     268.371
> Stddev Bandwidth:       8.73308
> Max bandwidth (MB/sec): 272
> Min bandwidth (MB/sec): 244
> Average IOPS:           67
> Stddev IOPS:            2.18327
> Max IOPS:               68
> Min IOPS:               61
> Average Latency(s):     0.014903
> Stddev Latency(s):      0.00209157
> Max latency(s):         0.0434543
> Min latency(s):         0.0111273
> Cleaning up (deleting benchmark objects)
> Removed 673 objects
> Clean up completed and total clean up time :0.69776
>
> [4]
> [@~]# rados bench -p rbd.ssd 10 write
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3771109
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1      16       198       182   727.843       728   0.0647907   0.0820398
>     2      16       400       384    767.86       808   0.0977614   0.0812242
>     3      16       603       587   782.536       812   0.0322522   0.0801139
>     4      16       801       785   784.878       792   0.0621462   0.0804142
>     5      16       996       980   783.883       780   0.0854195   0.0807731
>     6      16      1202      1186   790.546       824       0.041   0.0805795
>     7      16      1408      1392   795.307       824    0.122898   0.0798532
>     8      16      1608      1592    795.88       800   0.0382256   0.0802024
>     9      16      1791      1775   788.773       732   0.0480604   0.0807028
>    10      16      1997      1981   792.286       824   0.0581529    0.080433
> Total time run:         10.07
> Total writes made:      1997
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     793.249
> Stddev Bandwidth:       35.9481
> Max bandwidth (MB/sec): 824
> Min bandwidth (MB/sec): 728
> Average IOPS:           198
> Stddev IOPS:            8.98703
> Max IOPS:               206
> Min IOPS:               182
> Average Latency(s):     0.0805776
> Stddev Latency(s):      0.0339439
> Max latency(s):         0.255658
> Min latency(s):         0.0212272
> Cleaning up (deleting benchmark objects)
> Removed 1997 objects
> Clean up completed and total clean up time :0.179425
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


