Re: rbd performance issue - can't find bottleneck

On 06/18/2015 12:23 PM, Mark Nelson wrote:
> > so.. in order to increase performance, do I need to change the ssd
> > drives?
>
> I'm just guessing, but because your read performance is slow as well,
> you may have multiple issues going on.  The Intel 530 being slow at
> O_DSYNC writes is one of them, but it's possible there is something
> else too. If I were in your position I think I'd try to
> beg/borrow/steal a single DC S3700 or even a 520 (despite its presumed
> lack of safety) and just see how a single-OSD cluster using it does on
> your setup before replacing everything.
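(For anyone following along: the O_DSYNC behavior mentioned above can be measured directly before buying hardware. A minimal sketch, assuming the current directory sits on the SSD under test; writing to a scratch file keeps it non-destructive, whereas the usual variant against the raw device, of=/dev/sdX with oflag=direct,dsync, destroys its contents:)

```shell
# Rough O_DSYNC write check for a journal SSD. Each 4k write is synced
# before the next one starts, which is roughly what the OSD journal does.
# A journal-grade drive (e.g. DC S3700) sustains tens of MB/s here;
# consumer drives often collapse to a few MB/s.
dd if=/dev/zero of=dsync-test bs=4k count=1000 oflag=dsync
rm -f dsync-test
```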


Oh, sorry - this was my bad. I was running different tests with different setups to track down the problem. I suspected the Mellanox network hardware/setup might be at fault (I couldn't say why, but I wanted to rule it out), so I had switched the servers to 1 Gbps network cards, hence the slow read results. After switching back to the 56 Gbps network, the sequential read/write tests are satisfactory:

root@cf03:/ceph/tmp# dd if=/dev/zero of=test bs=100M count=100 oflag=direct
100+0 records in
100+0 records out
10485760000 bytes (10 GB) copied, 27.0479 s, 388 MB/s

root@cf03:/ceph/tmp# dd if=test of=/dev/null bs=100M iflag=direct
100+0 records in
100+0 records out
10485760000 bytes (10 GB) copied, 7.30296 s, 1.4 GB/s

and now rados bench shows:

root@cf03:~# rados -p rbd bench 30 rand
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16       208       192   767.782       768  0.084049 0.0796911
     2      16       390       374   747.833       728  0.055108 0.0834168
     3      16       579       563   750.523       756  0.080945 0.0841484
     4      16       756       740   739.865       708  0.119879 0.0853113
     5      16       942       926   740.668       744  0.131534  0.085389
     6      16      1128      1112   741.207       744  0.085159 0.0857775
     7      16      1314      1298   741.587       744  0.137615 0.0857103
     8      16      1496      1480   739.877       728  0.047122 0.0858808
     9      16      1678      1662   738.548       728  0.118557 0.0860778
    10      16      1866      1850   739.882       752   0.07375 0.0861203
    11      16      2054      2038   740.974       752  0.053814 0.0860436
    12      16      2247      2231    743.55       772  0.101077 0.0857194
    13      16      2430      2414   742.652       732  0.038217 0.0856958
    14      16      2592      2576   735.886       648  0.014755 0.0864883
    15      16      2764      2748   732.688       688  0.125262 0.0870332
    16      16      2934      2918    729.39       680  0.144276 0.0873883
    17      16      3109      3093   727.655       700   0.05022 0.0876425
    18      16      3274      3258   723.892       660  0.027348 0.0880826
    19      16      3428      3412   718.209       616  0.145429 0.0888024
    20      16      3590      3574   714.695       648  0.145609 0.0892346
    21      16      3753      3737   711.704       652  0.146557   0.08958
    22      16      3914      3898   708.623       644  0.164886 0.0900086
    23      16      4077      4061   706.158       652  0.021976 0.0903442
    24      16      4243      4227   704.398       664  0.013213 0.0905628
    25      16      4409      4393   702.779       664  0.039111 0.0908182
    26      16      4576      4560   701.438       668  0.179205 0.0909782
    27      16      4744      4728   700.344       672  0.176603 0.0911509
    28      16      4924      4908   701.043       720  0.062736 0.0911056
    29      16      5107      5091   702.107       732  0.103679 0.0910063
    30      16      5294      5278   703.633       748  0.078924 0.0908063
 Total time run:        30.105242
Total reads made:     5294
Read size:            4194304
Bandwidth (MB/sec):    703.399

Average Latency:       0.0909628
Max latency:           0.198346
Min latency:           0.00676
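(A quick sanity check of my own on the numbers above: with 16 reads in flight at ~0.091 s average latency, the implied throughput reproduces the reported ~703 MB/s, so the large-object read path itself looks healthy:)

```python
# Cross-check the rados bench summary: 16 concurrent 4 MiB reads at the
# reported average latency should reproduce the reported bandwidth.
concurrency = 16                # "Cur ops" column
avg_latency = 0.0909628         # seconds, from "Average Latency"
object_mib = 4194304 / 2**20    # "Read size" is 4 MiB

ops_per_sec = concurrency / avg_latency   # ~176 reads/s
bandwidth = ops_per_sec * object_mib      # MiB/s
print(f"{ops_per_sec:.0f} ops/s, {bandwidth:.0f} MiB/s")  # -> 176 ops/s, 704 MiB/s
```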


...but unfortunately fio still shows low IOPS - 2-4k...
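(For reference, a small-block random result like this typically comes from a fio job along the following lines. The actual invocation wasn't posted, so every parameter here, including the filename, is illustrative, not the poster's job file:)

```ini
; hypothetical 4k random-read job against a file on the mounted RBD image
[randtest]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
size=1g
runtime=30
time_based
filename=/ceph/tmp/fio-test
```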

J

--
Jacek Jarosiewicz
IT Systems Administrator

----------------------------------------------------------------------------------------
SUPERMEDIA Sp. z o.o., registered office in Warsaw
ul. Senatorska 13/15, 00-075 Warszawa
District Court for the Capital City of Warsaw, XII Commercial Division of the National Court Register,
KRS no. 0000029537; share capital: PLN 42,756,000
NIP: 957-05-49-503
Correspondence address: ul. Jubilerska 10, 04-190 Warszawa

----------------------------------------------------------------------------------------
SUPERMEDIA ->   http://www.supermedia.pl
internet access - hosting - colocation - links - telephony
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



