Re: RBD is very slow

Hello,

I don't understand this: why is a rados bench with an object size of 1 MB
slower than a bench with an object size of 4 MB? I have attached the config
in the hope that it helps. My Ceph version is 0.72.2 on Debian stable. The
public and cluster interfaces are on a 10 Gb Ethernet network with jumbo
frames (MTU 9000) enabled.
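
One thing I still want to rule out is the network itself. A minimal
end-to-end check for the jumbo frames (assuming the interface name eth0
and a peer node at 192.168.0.4) would be:

ip link show eth0 | grep mtu
ping -M do -s 8972 -c 3 192.168.0.4

The 8972-byte payload plus 28 bytes of ICMP/IP headers fills a 9000-byte
frame, so if this ping fails with "message too long" or gets no reply,
jumbo frames are not actually working on that path.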



rados -p rbd bench 30 write -b 1048576
 Maintaining 16 concurrent writes of 1048576 bytes for up to 30 seconds
or 0 objects
 Object prefix: benchmark_data_ceph-mh-3_1364
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0      16        16         0         0         0         -         0
     1      16       320       304   302.171       304  0.033876 0.0511531
     2      15       613       598   298.064       294  0.027616 0.0531646
     3      16       899       883   293.695       285  0.049066 0.0538804
     4      15      1235      1220   304.493       337  0.025284 0.0523638
     5      16      1550      1534   306.384       314  0.039108 0.0519398
     6      16      1864      1848   307.646       314  0.020707 0.0518715
     7      15      2156      2141   305.543       293  0.055784 0.0520902
     8      16      2439      2423   302.598       282  0.098889 0.0524547
     9      16      2721      2705   300.306       282  0.107799 0.0527373
    10      16      2944      2928   292.576       223  0.145834 0.0543772
    11      16      3233      3217   292.246       289  0.038786  0.054565
    12      16      3477      3461   288.225       244   0.03195 0.0553085
    13      16      3755      3739   287.436       278  0.033953 0.0555768
    14      15      4043      4028   287.543       289  0.046289 0.0555471
    15      16      4332      4316   287.571       288   0.04913 0.0555415
    16      15      4638      4623   288.783       307  0.030713 0.0553398
    17      15      4940      4925   289.539       302  0.041373 0.0551877
    18      16      5203      5187   288.007       262    0.0382 0.0554286
    19      15      5446      5431   285.685       244  0.036382 0.0559452
2013-12-28 17:09:49.764178 min lat: 0.012016 max lat: 0.204488 avg lat: 0.0562318
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    20      16      5690      5674    283.55       243  0.085217 0.0562318
    21      15      5984      5969   284.093       295  0.109059 0.0560807
    22      16      6249      6233   283.177       264  0.097483  0.056284
    23      16      6506      6490   282.037       257  0.020717 0.0566822
    24      16      6806      6790   282.783       300   0.02269 0.0564651
    25      15      7017      7002   279.947       212  0.078625 0.0570445
    26      16      7321      7305   280.832       303  0.094109 0.0569165
    27      16      7631      7615   281.909       310  0.027354 0.0567099
    28      15      7925      7910   282.373       295  0.038324 0.0566173
    29      16      8223      8207   282.876       297  0.024542 0.0564482
    30      16      8488      8472   282.279       265  0.112417 0.0565497
 Total time run:         30.073757
Total writes made:      8488
Write size:             1048576
Bandwidth (MB/sec):     282.239

Stddev Bandwidth:       57.942
Max bandwidth (MB/sec): 337
Min bandwidth (MB/sec): 0
Average Latency:        0.0566741
Stddev Latency:         0.029957
Max latency:            0.311011
Min latency:            0.012016


rados -p rbd bench 30 write -b 4194304
 Maintaining 16 concurrent writes of 4194304 bytes for up to 30 seconds
or 0 objects
 Object prefix: benchmark_data_ceph-mh-3_1398
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0      16        16         0         0         0         -         0
     1      15       111        96   383.507       384  0.220555  0.149972
     2      16       223       207   413.685       444  0.150163  0.148653
     3      16       324       308   410.435       404   0.06267  0.150363
     4      15       414       399   398.818       364  0.258535  0.156035
     5      16       513       497   397.442       392  0.165043  0.157399
     6      15       612       597   397.857       400  0.227077  0.158478
     7      16       689       673   384.445       304    0.2731  0.163187
     8      16       750       734   366.888       244   0.13902  0.167857
     9      16       847       831   369.225       388  0.104178    0.1715
    10      16       962       946   378.296       460  0.167365  0.167801
    11      16      1055      1039   377.719       372  0.123195  0.168256
    12      16      1124      1108   369.241       276  0.244183  0.171926
    13      16      1215      1199   368.833       364  0.095592  0.171366
    14      16      1293      1277   364.771       312  0.160458  0.174529
    15      16      1392      1376    366.85       396  0.133148  0.173578
    16      16      1422      1406   351.422       120  0.744613  0.175078
    17      16      1456      1440    338.75       136  0.188058  0.187931
    18      16      1573      1557   345.925       468  0.138695  0.184426
    19      16      1601      1585   333.614       112  0.145203  0.190529
2013-12-28 17:12:08.714308 min lat: 0.045043 max lat: 1.57187 avg lat: 0.191024
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    20      16      1636      1620   323.933       140  0.120542  0.191024
    21      16      1745      1729   329.266       436  0.126429  0.193505
    22      16      1848      1832   333.023       412  0.083329  0.191634
    23      16      1950      1934    336.28       408  0.068441  0.189515
    24      15      2044      2029   338.099       380  0.222631   0.18854
    25      16      2143      2127   340.253       392  0.152058  0.187696
    26      16      2224      2208   339.626       324  0.168216  0.187752
    27      16      2303      2287   338.748       316  0.203797  0.188278
    28      16      2403      2387   340.933       400  0.101494  0.187031
    29      16      2496      2480   341.979       372  0.132962  0.186758
    30      16      2581      2565   341.912       340  0.204083  0.186606
 Total time run:         30.178617
Total writes made:      2582
Write size:             4194304
Bandwidth (MB/sec):     342.229

Stddev Bandwidth:       115.585
Max bandwidth (MB/sec): 468
Min bandwidth (MB/sec): 0
Average Latency:        0.186847
Stddev Latency:         0.139004
Max latency:            1.57187
Min latency:            0.044427
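
Comparing the two runs, the bandwidth in both cases works out to
concurrency x object size / average latency: 16 x 1 MB / 0.057 s is
roughly 282 MB/s and 16 x 4 MB / 0.187 s is roughly 342 MB/s, so with a
fixed 16 writes in flight the per-operation overhead hits the smaller
objects harder. As an experiment (just a guess on my side), the 1 MB run
could be repeated with a higher concurrency via -t, which defaults to 16:

rados -p rbd bench 30 write -b 1048576 -t 64

If per-operation overhead is the limiting factor, the smaller objects
should close most of the gap with more writes in flight.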

On 26.12.2013 17:32, Markus Martinet wrote:
> Hello,
>
> I have 2 OSDs and 3 MONs. Each OSD is on a 2.5 TB LVM/EXT4 volume.
> Why is access to the rbd device so slow, and what does "Stddev
> Bandwidth" mean in the rados bench output? See the statistics below:
>
> # Create a 1 GB file on local storage and test a local OSD for bandwidth
> cd /ceph/; dd if=/dev/zero of=test.img bs=1GB count=1 oflag=direct
> 1+0 records in
> 1+0 records out
> 1000000000 bytes (1.0 GB) copied, 1.8567 s, 539 MB/s
>
>
> ceph tell osd.0 bench
> { "bytes_written": 1073741824,
>   "blocksize": 4194304,
>   "bytes_per_sec": "559979881.000000"}
>
>
> # Create a pool and test the bandwidth
> rados -p vmfs bench 30 write
>  Maintaining 16 concurrent writes of 4194304 bytes for up to 30 seconds
> or 0 objects
>  Object prefix: benchmark_data_ceph-mh-3_9777
>    sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>      0       0         0         0         0         0         -         0
>      1      16        94        78   311.879       312  0.206434  0.182676
>      2      16       191       175   349.897       388  0.177737  0.175261
>      3      16       282       266   354.578       364  0.244247  0.173725
>      4      15       372       357   356.918       364  0.167184  0.175167
>      5      16       457       441    352.72       336  0.178794  0.177867
>      6      16       553       537   357.923       384  0.244694  0.175324
>      7      16       645       629   359.351       368  0.193504  0.175503
>      8      16       725       709   354.424       320  0.235158  0.177618
>      9      15       810       795   353.244       344  0.166452  0.179567
>     10      16       888       872   348.715       308   0.15287  0.181171
>     11      16       975       959    348.64       348  0.114494  0.181629
>     12      16      1066      1050   349.864       364  0.233927  0.181363
>     13      15      1136      1121   344.795       284  0.128635  0.184239
>     14      16      1231      1215   347.019       376  0.192001  0.182952
>     15      16      1313      1297   345.747       328  0.200144  0.183385
>     16      16      1334      1318   329.389        84  0.146472  0.183347
>     17      16      1416      1400   329.303       328   0.14064  0.193126
>     18      16      1500      1484    329.67       336  0.145509  0.193292
>     19      15      1594      1579   332.315       380  0.178459  0.191693
> 2013-12-26 17:15:24.083583 min lat: 0.070546 max lat: 1.17479 avg lat: 0.191293
>    sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>     20      16      1665      1649   329.697       280  0.147855  0.191293
>     21      16      1701      1685   320.854       144  0.169078  0.198584
>     22      16      1750      1734   315.178       196  0.326568  0.201798
>     23      16      1805      1789   311.038       220  0.280829  0.203922
>     24      16      1882      1866   310.903       308  0.249915  0.204755
>     25      15      1966      1951   312.064       340  0.178078  0.204584
>     26      16      2030      2014   309.731       252  0.181972  0.205929
>     27      16      2085      2069   306.406       220  0.423718  0.207918
>     28      16      2179      2163   308.888       376   0.14442  0.206487
>     29      16      2252      2236   308.302       292  0.166282  0.206932
>     30      16      2325      2309   307.756       292  0.271987  0.207166
>  Total time run:         30.272445
> Total writes made:      2325
> Write size:             4194304
> Bandwidth (MB/sec):     307.210
>
> Stddev Bandwidth:       90.9029
> Max bandwidth (MB/sec): 388
> Min bandwidth (MB/sec): 0
> Average Latency:        0.208258
> Stddev Latency:         0.122409
> Max latency:            1.17479
> Min latency:            0.070546
>
> # Test the bandwidth on an rbd device
> rbd -p vmfs create test --size 1024
> rbd -p vmfs map test
> dd if=/dev/rbd1 of=test.img bs=1GB count=1 oflag=direct
> 1+0 records in
> 1+0 records out
> 1000000000 bytes (1.0 GB) copied, 12.4647 s, 80.2 MB/s
>
>
> Thanks, Markus
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
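
One remark on the dd read test from the quoted mail: oflag=direct only
applies to the output file, so the read from /dev/rbd1 itself is a plain
buffered read and may well be limited by the kernel's readahead window
rather than by the cluster. A read test closer to the rados bench pattern
(a sketch, assuming the test image is still mapped as /dev/rbd1) would be:

dd if=/dev/rbd1 of=/dev/null bs=4M count=256 iflag=direct

For buffered reads, raising /sys/block/rbd1/queue/read_ahead_kb may also
help.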

[global]
	public network = 192.168.0.3/24
	cluster network = 192.168.1.3/24
	auth cluster required = cephx
	auth service required = cephx
	auth client required = cephx
	cephx require signatures = false
	cephx sign messages = false

	keyring = /etc/ceph/$name.keyring
	osd pool default pg num = 100
	osd pool default pgp num = 100
	osd pool default size = 2
	osd pool default min size = 1

[mon]
	mon data = /ceph/mon.$id

[mds]

[osd]
	osd data = /ceph/osd.$id
#	osd journal = /ceph_journal/osd.$id.journal
	osd journal = /ceph/osd.$id.journal
	osd journal size = 25600
	osd pool default flag hashpspool = true
	osd op threads = 4
	filestore xattr use omap = true
	filestore max sync interval = 10

[mon.0]
	host = ceph-mh-1
	mon addr = 192.168.0.3:6789

[mon.1]
	host = ceph-mh-2
	mon addr = 192.168.0.4:6789

[mon.2]
	host = ceph-mh-3
	mon addr = 192.168.0.5:6789

[osd.0]
	host = ceph-mh-1

[osd.1]
	host = ceph-mh-2

[mds.0]
	host = ceph-mh-1

[mds.1]
	host = ceph-mh-2

[client]
	rbd cache = true
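
The effective values on a running daemon can be double-checked through the
admin socket (socket path assumed to be the Debian default location):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep -E 'journal|op_threads'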
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
