Re: Performance test on ceph v0.23 + EXT4 and Btrfs

On Mon, Nov 29, 2010 at 6:55 PM, Jeff Wu <cpwu@xxxxxxxxxxxxx> wrote:
>> Could you run "ceph osd tell * bench", then run "ceph -w", and report
>> the results? (That'll just run local benchmarking on the OSD to report
>> the approximate write speed it's capable of.)
>
> I ran the command "$ sudo ceph osd tell 0/1 bench" six times and
> used "$ sudo ceph -w" to get the following results:
>
> osd0 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 12.906598 sec at 49775 KB/sec
> osd1 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 21.023294 sec at 49384 KB/sec
> osd0 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 12.834682 sec at 51535 KB/sec
> osd1 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 20.792697 sec at 37547 KB/sec
> osd0 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 13.058412 sec at 77191 KB/sec
> osd1 : [INF] bench: wrote 1024 MB in blocks of 4096 KB in 21.113612 sec at 47369 KB/sec
Okay, those are a bit slow but reasonable. Based on these I'd expect
you to generally manage about 40-50 MB/s, assuming a properly
configured system (since everything is replicated, a 2-disk
configuration will run at the speed of your slowest disk).
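
As a back-of-envelope check on that (plain shell arithmetic using the
figures above; this isn't a ceph tool, just my own rough math):

$ echo $((37547 / 1024))   # slowest run above (osd1): ~36 MB/s
$ echo $((51535 / 1024))   # a typical osd0 run: ~50 MB/s

Every write lands on both replicas, so sustained client bandwidth
should sit near the slower OSD's figure.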

>> You can also run "rados -p data bench 60 write", and then "rados -p
>> data bench 0 seq" to get a simpler (better understood) performance
>> test.
>
> I ran the command "rados -p data bench 60 write" twice and got
> these results:
> $ sudo rados -p data bench 60 write
> ..........................
> ..........................
>
> Total time run:        76.182225
> Total writes made:     121
> Write size:            4194304
> Bandwidth (MB/sec):    6.219
>
> Average Latency:       13.3068
> Max latency:           23.9986
> Min latency:           7.01847
WOAH. That's a lot of latency. Rather more than I'd expect to get just
from seek times in a non-journaled environment. What's the round-trip
time to ping the OSDs from your client? Are your disks okay?
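
If you want hard numbers for both, something along these lines works
(the hostnames and test path below are placeholders; substitute your
actual OSD nodes and data-disk mount points):

$ ping -c 5 <osd0-host>    # round-trip time from the client to each OSD
$ ping -c 5 <osd1-host>
# on each OSD node, a raw 1 GB direct-I/O write to the data disk
# (oflag=direct bypasses the page cache so the rate reflects the disk):
$ sudo dd if=/dev/zero of=/path/on/osd-disk/ddtest bs=4M count=256 oflag=direct
$ sudo rm /path/on/osd-disk/ddtest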

> but when I run "$ sudo rados -p data bench 0 seq" it fails to
> produce results. Maybe it's a bug in ceph version 0.23.
Oh right, sorry. We put in a quick fix to make the write benchmark
scale across multiple client writers and forgot to adjust the read
benchmark. *oops*