Thank you for your advice. I have now tested the multi-threaded patch, and single-array RAID5 write performance improved by about 30%. But I have another question: when writing to a single array, write throughput is approx 1.1 GB/s:

root@host0:/sys/block/md126/md# dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 94.2039 s, 1.1 GB/s

When writing to two arrays at the same time, the combined write throughput is approx 0.96 + 0.84 = 1.8 GB/s, while the theoretical total is 2.2 GB/s. Why is roughly 400 MB/s lost?

root@host0:/sys/block/md126/md#
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 108.56 s, 966 MB/s
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 123.511 s, 849 MB/s
[1]-  Done    dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
[2]+  Done    dd if=/dev/zero of=/dev/md127 count=100000 bs=1M
root@host0:/sys/block/md126/md#

------------------------------------------------------------------
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Sent: Thursday, November 28, 2013, 12:41
To: lilofile <lilofile@xxxxxxxxxx>; Linux RAID <linux-raid@xxxxxxxxxxxxxxx>
Subject: Re: Re: md raid5 performace 6x SSD RAID5

On 11/27/2013 7:51 AM, lilofile wrote:
> additional: CPU: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
> memory: 32GB
...
> when I create raid5 which use six SSD(sTEC s840),
> when the stripe_cache_size is set 4096.
> root@host1:/sys/block/md126/md# cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md126 : active raid5 sdg[6] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
>       3906404480 blocks super 1.2 level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
>
> the single ssd read/write performance :
>
> root@host1:~# dd if=/dev/sdb of=/dev/zero count=100000 bs=1M
> ^C76120+0 records in
> 76119+0 records out
> 79816556544 bytes (80 GB) copied, 208.278 s, 383 MB/s
>
> root@host1:~# dd of=/dev/sdb if=/dev/zero count=100000 bs=1M
> 100000+0 records in
> 100000+0 records out
> 104857600000 bytes (105 GB) copied, 232.943 s, 450 MB/s
>
> the raid read and write performance is approx 1.8GB/s read and 1.1GB/s write
> root@sc0:/sys/block/md126/md# dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
> 100000+0 records in
> 100000+0 records out
> 104857600000 bytes (105 GB) copied, 94.2039 s, 1.1 GB/s
>
> root@sc0:/sys/block/md126/md# dd of=/dev/zero if=/dev/md126 count=100000 bs=1M
> 100000+0 records in
> 100000+0 records out
> 104857600000 bytes (105 GB) copied, 59.5551 s, 1.8 GB/s
>
> why the performance is so bad? especially the write performance.

There are 3 things that could be, or are, limiting performance here.

1.  The RAID5 write thread peaks one CPU core as it is single threaded
2.  A 4KB stripe cache is too small for 6 SSDs, try 8KB
3.  dd issues IOs serially and will thus never saturate the hardware

#1 will eventually be addressed with a multi-thread patch to the various
RAID drivers including RAID5.  There is no workaround at this time.

To address #3 use FIO or a similar testing tool that can issue IOs in
parallel.  With SSD based storage you will never reach maximum
throughput with a serial data stream.

--
Stan
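
A note on Stan's point #2: the stripe cache is a per-array sysfs setting, and the value is a number of stripe-cache entries, not bytes. Memory consumed is roughly page_size * nr_disks * stripe_cache_size, so 8192 entries on a 6-drive array with 4 KB pages is about 192 MB. The commands below are only a sketch: 8192 corresponds to the "8KB" Stan suggests, and the setting does not survive a reboot or re-assembly, so it has to be reapplied (for example from a udev rule or an init script).

# per-array setting; repeat for each raid5 md device
echo 8192 > /sys/block/md126/md/stripe_cache_size
echo 8192 > /sys/block/md127/md/stripe_cache_size
# verify
cat /sys/block/md126/md/stripe_cache_size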
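
And for Stan's point #3, a sketch of the kind of parallel test fio can run instead of dd. Everything in the job file below except the device paths /dev/md126 and /dev/md127 is an illustrative assumption (file name, block size mirrors the dd runs, iodepth, numjobs, and size are not values from this thread). Like the dd runs above, it writes directly to the block devices and will destroy any data on them.

# parallel-writes.fio
# Sequential 1M writes to both arrays at once; libaio plus iodepth>1
# keeps multiple IOs in flight, which a serial dd stream never does.
[global]
ioengine=libaio
direct=1
rw=write
bs=1M
# illustrative queue depth and writer count per array
iodepth=32
numjobs=2
# illustrative amount written per job
size=20G
group_reporting

[md126]
filename=/dev/md126

[md127]
filename=/dev/md127

Run it with:

fio parallel-writes.fio

Dropping the [md127] section gives the single-array case, so the two results can be compared under the same queue depth.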