Re: rbd create error with 0.26

On Sun, 15 May 2011, Simon Tian wrote:
> > What is the IO size?  Is write_test_data synchronous?
> >
> > For simple write benchmarking you can also use
> >        rados mkpool foo
> >        rados -p foo bench <seconds> write -b <blocksize> -t <threads>
> >
> > and you'll see latency and throughput.  Blocksize defaults to 4M and
> > "threads" (parallel IOs) default to 16, IIRC.
> 
> 
> Hi, Sage:
> 
> I just did the bench:
> rados -p rbd bench 60 write -t 64   and    rados -p data bench 60 write -t 64
> the avg throughput is about 46 MB/s; one of the results is below.
> But why is it so slow with the rbd API from <rbd/librbd.h>?

The problem is that your test is only doing a single IO at a time.  The 
request latency is relatively high because the data has to pass over the 
network to the OSD (and, for writes, do it again to be replicated), so 
the client node spends a lot of time waiting around.  The rados tool, by 
default, keeps 16 concurrent IOs in flight.
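As a rough sanity check, you can relate the numbers yourself: aggregate
throughput is roughly (IOs in flight) x (IO size) / (average latency).
Plugging in the figures from your bench run below (64 in flight, 4 MB
writes, ~5.45 s average latency) gives about 47 MB/s, which matches the
~46 MB/s the tool reported. With only one IO in flight at a time, your
throughput is capped at a single IO size divided by the round-trip
latency of one write, which is why the librbd test looks so slow.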

You'll want to look at the async (aio) read/write calls, or use multiple 
threads.
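With the C API it would look roughly like the sketch below (untested;
the pool name, image name, and fixed queue depth of 16 are placeholders,
and error checking is omitted), assuming your librbd has the aio calls:

    #include <stdlib.h>
    #include <string.h>
    #include <rados/librados.h>
    #include <rbd/librbd.h>

    #define IO_SIZE     (4 << 20)   /* 4 MB per write */
    #define QUEUE_DEPTH 16          /* IOs kept in flight */

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rbd_image_t image;
        rbd_completion_t c[QUEUE_DEPTH];
        char *buf = malloc(IO_SIZE);
        int i;

        memset(buf, 0xab, IO_SIZE);

        /* connect and open the image (pool/image names are placeholders) */
        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, NULL);
        rados_connect(cluster);
        rados_ioctx_create(cluster, "rbd", &io);
        rbd_open(io, "testimg", &image, NULL);

        /* issue QUEUE_DEPTH writes without waiting in between */
        for (i = 0; i < QUEUE_DEPTH; i++) {
            rbd_aio_create_completion(NULL, NULL, &c[i]);
            rbd_aio_write(image, (uint64_t)i * IO_SIZE, IO_SIZE, buf, c[i]);
        }

        /* only now wait for them all to finish */
        for (i = 0; i < QUEUE_DEPTH; i++) {
            rbd_aio_wait_for_complete(c[i]);
            rbd_aio_release(c[i]);
        }

        rbd_close(image);
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        free(buf);
        return 0;
    }

A real benchmark would refill the window as completions come back (or
use the completion callback) rather than issuing one fixed batch, but
the point is just that the writes overlap instead of being serialized.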

sage


> And I tried testlibrbdpp.cc; the result is more or less the same.
> The attachments are the test code. Could you run it on your platform, please?
> 
>  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>   40      63       482       419   41.8884        44   2.40044   2.40979
>   41      63       494       431   42.0372        48   2.11044     2.406
>   42      64       506       442   42.0837        44   2.11266   2.40229
>   43      63       518       455   42.3139        52   2.33468    2.3982
>   44      63       527       464   42.1703        36    2.4403   2.39559
>   45      63       539       476   42.2995        48   2.19768   2.39413
>   46      63       551       488   42.4232        48   2.51232    2.3928
>   47      63       563       500   42.5416        48   2.18025   2.38958
>   48      63       572       509   42.4051        36   2.27111   2.38791
>   49      63       584       521    42.519        48   2.41684   2.38695
>   50      63       596       533   42.6284        48   2.11087     2.384
>   51      63       608       545   42.7335        48   2.18147   2.37925
>   52      63       620       557   42.8345        48   2.45287   2.37787
>   53      63       629       566   42.7054        36   2.45187   2.37801
>   54      63       644       581   43.0255        60   2.22403   2.37477
>   55      63       653       590   42.8976        36   2.22782   2.37157
>   56      63       668       605   43.2026        60   2.20638   2.36597
>   57      63       677       614   43.0761        36   2.19628   2.36209
>   58      63       689       626   43.1608        48   2.18262   2.35762
>   59      63       704       641   43.4459        60   2.27029   2.35352
> min lat: 1.87981 max lat: 5.56194 avg lat: 2.34944
>  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
>   60      63       716       653   43.5215        48   2.27835   2.34944
>   61      64       717       653    42.808         0         -   2.34944
>   62      63       717       654   42.1821         2   2.25694   2.34929
> Total time run:        62.274719
> Total writes made:     717
> Write size:            4194304
> Bandwidth (MB/sec):    46.054
> 
> Average Latency:       5.453
> Max latency:           62.0339
> Min latency:           1.87981
