Thanks for your feedback, it is helpful.
I may have been wrong about the default Windows block size. What would be the best tests to compare the native performance of the SSD disks at 4K blocks against Ceph performance with 4K blocks? There just seems to be a huge difference in the results.
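In case it helps frame an answer, here is the kind of matched comparison I had in mind (just a sketch; file paths and the pool name are placeholders, and fio is assumed to be available):

# Native SSD, 4K synchronous writes, bypassing the page cache:
dd if=/dev/zero of=/mnt/ssd/testfile bs=4K count=100000 oflag=dsync

# Or with fio, 4K random writes with sync semantics at queue depth 1:
fio --name=ssd-4k --filename=/mnt/ssd/testfile --size=1G --bs=4k --rw=randwrite --sync=1 --ioengine=sync --runtime=30 --time_based

# Ceph with 4K objects against the same test pool as before:
rados --no-cleanup bench -b 4096 -p pbench 30 write
rados bench -b 4096 -p pbench 30 seq

And to Greg's question about logging: would turning debug output down be the right first step? Something along these lines, assuming these are the relevant subsystems:

[osd]
debug osd = 0/0
debug ms = 0/0
debug filestore = 0/0
debug journal = 0/0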
On Tue, Sep 17, 2013 at 10:56 AM, Campbell, Bill <bcampbell@xxxxxxxxxxxxxxxxxxxx> wrote:
Windows default (NTFS) is a 4k block. Are you changing the allocation unit to 8k as a default for your configuration?

From: "Gregory Farnum" <greg@xxxxxxxxxxx>
To: "Jason Villalta" <jason@xxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, September 17, 2013 10:40:09 AM
Subject: Re: Ceph performance with 8K blocks

Your 8k-block dd test is not nearly the same as your 8k-block rados bench or SQL tests. Both rados bench and SQL require the write to be committed to disk before moving on to the next one; dd is simply writing into the page cache. So you're not going to get 460 or even 273 MB/s with sync 8k writes regardless of your settings.

However, I think you should be able to tune your OSDs into somewhat better numbers -- that rados bench is giving you ~300 IOPS on every OSD (with a small pipeline!), and an SSD-based daemon should be going faster. What kind of logging are you running with, and what configs have you set? Hopefully you can get Mark or Sam or somebody who's done some performance tuning to offer some tips as well. :)
-Greg
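(Working Greg's figure through: 13,770 writes in 30 seconds is roughly 459 client-visible IOPS, the same number you get from 3.581 MB/s divided by 8 KiB. Assuming the pool uses the default 2x replication -- the pool size isn't given in the thread, so this is a guess -- each client write becomes two OSD writes: 459 x 2 / 3 OSDs ≈ 306 ops per OSD per second, which lines up with the ~300 IOPS he cites.)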
On Tuesday, September 17, 2013, Jason Villalta wrote:

Hello all, I am new to the list.

I have a single machine set up for testing Ceph. It has dual 6-core procs (12 cores total) and 128GB of RAM. I also have 3 Intel 520 240GB SSDs, with an OSD set up on each disk and the OSD and journal in separate partitions formatted with ext4.

My goal here is to prove just how fast Ceph can go and what kind of performance to expect when using it as back-end storage for virtual machines, mostly Windows. I would also like to try to understand how it will scale IO by removing one disk of the three and repeating the benchmark tests, but that is secondary. So far here are my results. I am aware this is all sequential; I just want to know how fast it can go.

DD IO test of SSD disks (I am testing 8K blocks since that is the default block size of Windows):

dd of=ddbenchfile if=/dev/zero bs=8K count=1000000
8192000000 bytes (8.2 GB) copied, 17.7953 s, 460 MB/s

dd if=ddbenchfile of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied, 2.94287 s, 2.8 GB/s
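For comparison, a cache-bypassing variant of the same dd test would look something like the following (a sketch only, assuming a dd build that supports oflag/iflag; no results are shown for it):

dd of=ddbenchfile if=/dev/zero bs=8K count=1000000 oflag=dsync
dd if=ddbenchfile of=/dev/null bs=8K iflag=direct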
RADOS bench test with 3 SSD disks and 4MB object size (default):

rados --no-cleanup bench -p pbench 30 write
Total writes made:      2061
Write size:             4194304
Bandwidth (MB/sec):     273.004
Stddev Bandwidth:       67.5237
Max bandwidth (MB/sec): 352
Min bandwidth (MB/sec): 0
Average Latency:        0.234199
Stddev Latency:         0.130874
Max latency:            0.867119
Min latency:            0.039318

rados bench -p pbench 30 seq
Total reads made:       2061
Read size:              4194304
Bandwidth (MB/sec):     956.466
Average Latency:        0.0666347
Max latency:            0.208986
Min latency:            0.011625

This all looks like what I would expect from three disks. The problems appear with an 8K block/object size.

RADOS bench test with 3 SSD disks and 8K object size (8K blocks):

rados --no-cleanup bench -b 8192 -p pbench 30 write
Total writes made:      13770
Write size:             8192
Bandwidth (MB/sec):     3.581
Stddev Bandwidth:       1.04405
Max bandwidth (MB/sec): 6.19531
Min bandwidth (MB/sec): 0
Average Latency:        0.0348977
Stddev Latency:         0.0349212
Max latency:            0.326429
Min latency:            0.0019

rados bench -b 8192 -p pbench 30 seq
Total reads made:       13770
Read size:              8192
Bandwidth (MB/sec):     52.573
Average Latency:        0.00237483
Max latency:            0.006783
Min latency:            0.000521

So are these performance numbers correct, or is there something I missed in the testing procedure? The RADOS bench numbers with 8K block size are the same we see when testing performance in a VM with SQLIO. Does anyone know of any configuration changes needed to get Ceph performance closer to native performance with 8K blocks?

Thanks in advance.
--
Software Engineer #42 @ http://inktank.com | http://ceph.com
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com