Re: rbd create error with 0.26

Great!   Thx very much...
That was exactly the reason!
I just ran "rados -p data bench 60 write -t 1 -b 1024", and the speed
is more or less the same.

So I changed the IO_SIZE to 1 MB and above in my test code, and the
speed became perfect.

Shouldn't there be a buffering policy in the libceph API, so that
speed is not hurt badly by a small IO_SIZE? Hmm, where can I find the
buffering policy in the Ceph code?
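(In case there is no such policy: I guess small sequential writes
could be coalesced on the client side before calling rbd_write().
Just a rough, untested sketch -- everything except rbd_write() itself
is made up:)

#include <stdint.h>
#include <string.h>
#include <rbd/librbd.h>

#define WBUF_SIZE (4 * 1024 * 1024)   /* flush in 4 MB chunks */

/* Accumulates small sequential writes and flushes them as one large
 * rbd_write().  Non-sequential IO is not handled in this sketch. */
struct wbuf {
    rbd_image_t image;
    uint64_t    off;              /* image offset of buf[0] */
    size_t      len;              /* bytes currently buffered */
    char        buf[WBUF_SIZE];
};

static void wbuf_flush(struct wbuf *w)
{
    if (w->len > 0)
        rbd_write(w->image, w->off, w->len, w->buf);
    w->off += w->len;
    w->len = 0;
}

/* Append data to the buffer; issue one big write per WBUF_SIZE. */
static void wbuf_write(struct wbuf *w, const char *data, size_t len)
{
    while (len > 0) {
        size_t n = WBUF_SIZE - w->len;
        if (n > len)
            n = len;
        memcpy(w->buf + w->len, data, n);
        w->len += n;
        data   += n;
        len    -= n;
        if (w->len == WBUF_SIZE)
            wbuf_flush(w);
    }
}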

Thx!
Simon

2011/5/16 Sage Weil <sage@xxxxxxxxxxxx>:
> On Sun, 15 May 2011, Simon Tian wrote:
>> > What is the IO size?  Is write_test_data synchronous?
>> >
>> > For simple write benchmarking you can also use
>> >        rados mkpool foo
>> >        rados -p foo bench <seconds> write -b <blocksize> -t <threads>
>> >
>> > and you'll see latency and throughput.  Blocksize defaults to 4M and
>> > "threads" (parallel IOs) default to 16, IIRC.
>>
>>
>> Hi, Sage:
>>
>> I just did the bench:
>> rados -p rbd bench 60 write -t 64   and   rados -p data bench 60 write -t 64
>> The avg throughput is about 46 MB/s; one of the results is as follows.
>> But why is it slow with the rbd API from <rbd/librbd.h>?
>
> The problem is that your test is only doing a single IO at a time.  The
> request latency is relatively high because the data has to pass over the
> network to the OSD (and, for writes, do it again to be replicated), so
> the client node spends a lot of time waiting around.  The rados tool, by
> default, keeps 16 concurrent IOs in flight.
>
> You'll want to look at the async (aio) read/write calls, or use multiple
> threads.
>
> sage
>
>
>> And I tried testlibrbdpp.cc; the result is more or less the same.
>> The attachments are the test programs. Could you run them on your platform, please?
>>
>>  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat  avg lat
>>   40      63       482       419   41.8884        44   2.40044  2.40979
>>   41      63       494       431   42.0372        48   2.11044    2.406
>>   42      64       506       442   42.0837        44   2.11266  2.40229
>>   43      63       518       455   42.3139        52   2.33468   2.3982
>>   44      63       527       464   42.1703        36    2.4403  2.39559
>>   45      63       539       476   42.2995        48   2.19768  2.39413
>>   46      63       551       488   42.4232        48   2.51232   2.3928
>>   47      63       563       500   42.5416        48   2.18025  2.38958
>>   48      63       572       509   42.4051        36   2.27111  2.38791
>>   49      63       584       521    42.519        48   2.41684  2.38695
>>   50      63       596       533   42.6284        48   2.11087    2.384
>>   51      63       608       545   42.7335        48   2.18147  2.37925
>>   52      63       620       557   42.8345        48   2.45287  2.37787
>>   53      63       629       566   42.7054        36   2.45187  2.37801
>>   54      63       644       581   43.0255        60   2.22403  2.37477
>>   55      63       653       590   42.8976        36   2.22782  2.37157
>>   56      63       668       605   43.2026        60   2.20638  2.36597
>>   57      63       677       614   43.0761        36   2.19628  2.36209
>>   58      63       689       626   43.1608        48   2.18262  2.35762
>>   59      63       704       641   43.4459        60   2.27029  2.35352
>> min lat: 1.87981 max lat: 5.56194 avg lat: 2.34944
>>  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat  avg lat
>>   60      63       716       653   43.5215        48   2.27835  2.34944
>>   61      64       717       653    42.808         0         -  2.34944
>>   62      63       717       654   42.1821         2   2.25694  2.34929
>> Total time run:        62.274719
>> Total writes made:     717
>> Write size:            4194304
>> Bandwidth (MB/sec):    46.054
>>
>> Average Latency:       5.453
>> Max latency:           62.0339
>> Min latency:           1.87981
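
PS: for anyone hitting the same thing later, Sage's aio suggestion
above would look roughly like this with the C calls from
<rbd/librbd.h>. A sketch under my own assumptions only -- IO_SIZE and
QUEUE_DEPTH are arbitrary, and image setup and error handling are
omitted:

#include <stdint.h>
#include <rbd/librbd.h>

#define IO_SIZE     (1024 * 1024)   /* 1 MB per write */
#define QUEUE_DEPTH 16              /* concurrent IOs, like rados bench */

static char payload[IO_SIZE];       /* contents don't matter for the test */

/* Write 'total' bytes, keeping up to QUEUE_DEPTH aio writes in flight
 * per batch instead of doing one synchronous rbd_write() at a time. */
static void write_test_data_aio(rbd_image_t image, uint64_t total)
{
    rbd_completion_t c[QUEUE_DEPTH];
    uint64_t off = 0;
    int i, inflight;

    while (off < total) {
        /* issue a batch of asynchronous writes */
        for (i = 0; i < QUEUE_DEPTH && off < total; i++, off += IO_SIZE) {
            rbd_aio_create_completion(NULL, NULL, &c[i]);
            rbd_aio_write(image, off, IO_SIZE, payload, c[i]);
        }
        inflight = i;

        /* reap the whole batch before issuing the next one */
        for (i = 0; i < inflight; i++) {
            rbd_aio_wait_for_complete(c[i]);
            rbd_aio_release(c[i]);
        }
    }
}

A fully pipelined version would reap completions as they finish
instead of per batch, but even this avoids the one-IO-at-a-time
latency wall.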


