Great, thanks very much! That was exactly the reason. I just ran
"rados -p data bench 60 write -t 1 -b 1024" and the speed was more or
less the same as in my test. After I changed IO_SIZE to 1 MB and above
in my test code, the speed became fine.

Is there a buffering policy in the Ceph client library API, so that the
speed is not hurt so badly by a small IO_SIZE? Where can I find that
buffering policy in the Ceph code?

(A rough sketch of the aio loop, as I understand your suggestion, is
appended at the end of this mail.)

Thanks!
Simon

2011/5/16 Sage Weil <sage@xxxxxxxxxxxx>:
> On Sun, 15 May 2011, Simon Tian wrote:
>> > What is the IO size?  Is write_test_data synchronous?
>> >
>> > For simple write benchmarking you can also use
>> >     rados mkpool foo
>> >     rados -p foo bench <seconds> write -b <blocksize> -t <threads>
>> >
>> > and you'll see latency and throughput.  Blocksize defaults to 4M and
>> > "threads" (parallel IOs) default to 16, IIRC.
>>
>>
>> Hi Sage,
>>
>> I just ran the bench:
>> rados -p rbd bench 60 write -t 64   and   rados -p data bench 60 write -t 64
>> The average throughput is about 46 MB/s; one of the results is below.
>> But why is it slow with the rbd API from <rbd/librbd.h>?
>
> The problem is that your test is only doing a single IO at a time.  The
> request latency is relatively high because the data has to pass over the
> network to the OSD (and, for writes, do it again to be replicated), so
> the client node spends a lot of time waiting around.  The rados tool, by
> default, keeps 16 concurrent IOs in flight.
>
> You'll want to look at the async (aio) read/write calls, or use multiple
> threads.
>
> sage
>
>
>> And I tried testlibrbdpp.cc; the result is more or less the same.
>> The attachments are the test codes. Could you run them on your platform, please?
>>
>>   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>>    40      63       482       419   41.8884        44   2.40044   2.40979
>>    41      63       494       431   42.0372        48   2.11044     2.406
>>    42      64       506       442   42.0837        44   2.11266   2.40229
>>    43      63       518       455   42.3139        52   2.33468    2.3982
>>    44      63       527       464   42.1703        36    2.4403   2.39559
>>    45      63       539       476   42.2995        48   2.19768   2.39413
>>    46      63       551       488   42.4232        48   2.51232    2.3928
>>    47      63       563       500   42.5416        48   2.18025   2.38958
>>    48      63       572       509   42.4051        36   2.27111   2.38791
>>    49      63       584       521    42.519        48   2.41684   2.38695
>>    50      63       596       533   42.6284        48   2.11087     2.384
>>    51      63       608       545   42.7335        48   2.18147   2.37925
>>    52      63       620       557   42.8345        48   2.45287   2.37787
>>    53      63       629       566   42.7054        36   2.45187   2.37801
>>    54      63       644       581   43.0255        60   2.22403   2.37477
>>    55      63       653       590   42.8976        36   2.22782   2.37157
>>    56      63       668       605   43.2026        60   2.20638   2.36597
>>    57      63       677       614   43.0761        36   2.19628   2.36209
>>    58      63       689       626   43.1608        48   2.18262   2.35762
>>    59      63       704       641   43.4459        60   2.27029   2.35352
>> min lat: 1.87981 max lat: 5.56194 avg lat: 2.34944
>>   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>>    60      63       716       653   43.5215        48   2.27835   2.34944
>>    61      64       717       653    42.808         0         -   2.34944
>>    62      63       717       654   42.1821         2   2.25694   2.34929
>> Total time run:        62.274719
>> Total writes made:     717
>> Write size:            4194304
>> Bandwidth (MB/sec):    46.054
>>
>> Average Latency:       5.453
>> Max latency:           62.0339
>> Min latency:           1.87981
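
For reference, here is a rough, untested sketch of the aio approach you
describe, using the C calls from <rbd/librbd.h> (rbd_aio_create_completion,
rbd_aio_write, rbd_aio_wait_for_complete). The pool name "rbd", the image
name "testimg", the 1 MB IO size, and the 16-deep queue are just placeholders
I picked for the illustration; the image is assumed to already exist and to
be large enough.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

#define IO_SIZE   (1024 * 1024)   /* 1 MB per write */
#define IN_FLIGHT 16              /* parallel IOs, like the rados tool default */
#define TOTAL_IOS 1024            /* 1 GB written in total */

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    rbd_completion_t comps[IN_FLIGHT];
    char *buf = malloc(IO_SIZE);
    int i, r;

    memset(buf, 0xab, IO_SIZE);

    /* connect using the default ceph.conf and keyring */
    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||
        rados_connect(cluster) < 0) {
        fprintf(stderr, "could not connect to the cluster\n");
        return 1;
    }
    rados_ioctx_create(cluster, "rbd", &ioctx);

    if (rbd_open(ioctx, "testimg", &image, NULL) < 0) {
        fprintf(stderr, "could not open image\n");
        return 1;
    }

    for (i = 0; i < TOTAL_IOS; i++) {
        int slot = i % IN_FLIGHT;

        /* reuse a slot only after its previous write has finished */
        if (i >= IN_FLIGHT) {
            rbd_aio_wait_for_complete(comps[slot]);
            r = rbd_aio_get_return_value(comps[slot]);
            if (r < 0)
                fprintf(stderr, "write %d failed: %d\n", i - IN_FLIGHT, r);
            rbd_aio_release(comps[slot]);
        }

        /* queue the next 1 MB write without waiting for it */
        rbd_aio_create_completion(NULL, NULL, &comps[slot]);
        rbd_aio_write(image, (uint64_t)i * IO_SIZE, IO_SIZE, buf, comps[slot]);
    }

    /* drain the writes that are still in flight */
    for (i = 0; i < IN_FLIGHT && i < TOTAL_IOS; i++) {
        rbd_aio_wait_for_complete(comps[i]);
        rbd_aio_release(comps[i]);
    }

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    free(buf);
    return 0;
}

If I understand it correctly, the idea is the same as what the rados tool
does with -t 16: the latency of each individual write does not change, but
with 16 writes outstanding the client is not sitting idle between requests,
so throughput should scale up with the queue depth until the network or the
OSDs saturate.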