On 11/28/2013 8:38 PM, Stan Hoeppner wrote:
> On 11/28/2013 4:02 AM, lilofile wrote:
>> thank you for your advice. I have now tested the multi-thread patch;
>> single raid5 performance improved 30%.
>>
>> but I have another problem: when writing to a single raid, write
>> performance is approx 1.1GB/s
> ...
>> [1]- Done    dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
>> [2]+ Done    dd if=/dev/zero of=/dev/md127 count=100000 bs=1M
>
> No. This is not a parallel IO test.
>
> ...
>> To address #3 use FIO or a similar testing tool that can issue IOs in
>> parallel. With SSD based storage you will never reach maximum
>> throughput with a serial data stream.
>
> This is a parallel IO test, one command line:
>
> ~# fio --directory=/dev/md126 --zero_buffers --numjobs=16
> --group_reporting --blocksize=64k --ioengine=libaio --iodepth=16
> --direct=1 --size=64g --name=read --rw=read --stonewall --name=write
> --rw=write --stonewall

Correction: the --size value is per job, not per fio run. We use 16
jobs in parallel to maximize the hardware throughput, so use --size=4g
for 64GB total written in the test. If you use --size=64g as I stated
above, you'll write 1TB total and the test will take forever to finish.

With --size=4g the read test should take ~30 seconds and the write
test ~40s, not including the fio initialization time.

> Normally this targets a filesystem, not a raw block device. This
> command line should work for a raw md device.

--
Stan
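P.S. To save anyone retyping, here is the same command line again with
the corrected --size value; everything else is unchanged from above: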
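~# fio --directory=/dev/md126 --zero_buffers --numjobs=16
--group_reporting --blocksize=64k --ioengine=libaio --iodepth=16
--direct=1 --size=4g --name=read --rw=read --stonewall --name=write
--rw=write --stonewall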
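That works out to 16 jobs x 4GB = 64GB per pass, so if the ~30s/~40s
estimates above hold, you should see roughly 2GB/s aggregate read and
~1.6GB/s write. One caveat: fio's --directory option expects a
directory it can create job files in, so if fio complains when pointed
at the raw device, substitute --filename=/dev/md126 for the
--directory option.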