RBD is very slow

Hello,

I have 2 OSDs and 3 MONs. Each OSD is on a 2.5 TB LVM/EXT4 volume.
Why is access to the RBD device so slow, and what does "Stddev
Bandwidth" mean in the rados bench output? See the statistics below:

#Create a 1GB file on local storage and test a local OSD for bandwidth
cd /ceph/; dd if=/dev/zero of=test.img bs=1GB count=1 oflag=direct
1+0 records in
1+0 records out
1000000000 bytes (1.0 GB) copied, 1.8567 s, 539 MB/s
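
A single 1 GB request with oflag=direct measures only one huge write. As a
hedged follow-up (a sketch, not part of the original run), the same local
store can be sampled with the 4 MB request size that rados bench and the OSD
bench use; test-4m.img is a hypothetical filename:

#Re-run the local test as 256 direct 4 MB writes (hypothetical follow-up)
cd /ceph/; dd if=/dev/zero of=test-4m.img bs=4M count=256 oflag=direct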


ceph tell osd.0 bench
{ "bytes_written": 1073741824,
  "blocksize": 4194304,
  "bytes_per_sec": "559979881.000000"}


#Create a pool and test the bandwidth
rados -p vmfs bench 30 write
 Maintaining 16 concurrent writes of 4194304 bytes for up to 30 seconds or 0 objects
 Object prefix: benchmark_data_ceph-mh-3_9777
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1      16        94        78   311.879       312  0.206434  0.182676
     2      16       191       175   349.897       388  0.177737  0.175261
     3      16       282       266   354.578       364  0.244247  0.173725
     4      15       372       357   356.918       364  0.167184  0.175167
     5      16       457       441    352.72       336  0.178794  0.177867
     6      16       553       537   357.923       384  0.244694  0.175324
     7      16       645       629   359.351       368  0.193504  0.175503
     8      16       725       709   354.424       320  0.235158  0.177618
     9      15       810       795   353.244       344  0.166452  0.179567
    10      16       888       872   348.715       308   0.15287  0.181171
    11      16       975       959    348.64       348  0.114494  0.181629
    12      16      1066      1050   349.864       364  0.233927  0.181363
    13      15      1136      1121   344.795       284  0.128635  0.184239
    14      16      1231      1215   347.019       376  0.192001  0.182952
    15      16      1313      1297   345.747       328  0.200144  0.183385
    16      16      1334      1318   329.389        84  0.146472  0.183347
    17      16      1416      1400   329.303       328   0.14064  0.193126
    18      16      1500      1484    329.67       336  0.145509  0.193292
    19      15      1594      1579   332.315       380  0.178459  0.191693
2013-12-26 17:15:24.083583 min lat: 0.070546 max lat: 1.17479 avg lat: 0.191293
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    20      16      1665      1649   329.697       280  0.147855  0.191293
    21      16      1701      1685   320.854       144  0.169078  0.198584
    22      16      1750      1734   315.178       196  0.326568  0.201798
    23      16      1805      1789   311.038       220  0.280829  0.203922
    24      16      1882      1866   310.903       308  0.249915  0.204755
    25      15      1966      1951   312.064       340  0.178078  0.204584
    26      16      2030      2014   309.731       252  0.181972  0.205929
    27      16      2085      2069   306.406       220  0.423718  0.207918
    28      16      2179      2163   308.888       376   0.14442  0.206487
    29      16      2252      2236   308.302       292  0.166282  0.206932
    30      16      2325      2309   307.756       292  0.271987  0.207166
 Total time run:         30.272445
Total writes made:      2325
Write size:             4194304
Bandwidth (MB/sec):     307.210

Stddev Bandwidth:       90.9029
Max bandwidth (MB/sec): 388
Min bandwidth (MB/sec): 0
Average Latency:        0.208258
Stddev Latency:         0.122409
Max latency:            1.17479
Min latency:            0.070546
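
On the second question: "Stddev Bandwidth" is the standard deviation of the
per-second "cur MB/s" samples shown above, i.e. how much the write rate
fluctuated from second to second; the dips around seconds 16 and 21 (84 and
144 MB/s) are what drive it up to ~91, and the Min bandwidth of 0 means at
least one one-second interval finished no writes. A rough sketch of that
calculation, assuming the output above is saved in a hypothetical bench.log
(the exact formula inside rados bench may differ slightly, e.g. sample vs.
population variance):

#Approximate Stddev Bandwidth from the per-second samples (column 6 = cur MB/s)
awk '$1 ~ /^[0-9]+$/ && NF == 8 { n++; sum += $6; sumsq += $6 * $6 }
     END { mean = sum / n
           printf "samples=%d  mean=%.1f MB/s  stddev=%.1f MB/s\n", n, mean, sqrt(sumsq/n - mean*mean) }' bench.log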

#Test the bandwidth on an RBD device
rbd -p vmfs create test --size 1024
rbd -p vmfs map test
dd if=/dev/rbd1 of=test.img bs=1GB count=1 oflag=direct
1+0 records in
1+0 records out
1000000000 bytes (1.0 GB) copied, 12.4647 s, 80.2 MB/s
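
One note on this last test: oflag=direct applies to the output file test.img,
not to the read from /dev/rbd1, so the RBD read itself may still go through
the page cache and readahead. A hedged variant that issues direct 4 MB reads
against the mapped device (same /dev/rbd1 path as above):

#Read 1 GiB from the mapped image as direct 4 MB requests (sketch)
dd if=/dev/rbd1 of=/dev/null bs=4M count=256 iflag=direct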


Thanks, Markus

