md raid5 performance: 6x SSD RAID5

Hi all,

I created a RAID5 array from six SSDs (sTEC s840), with stripe_cache_size set to 4096:
root@host1:/sys/block/md126/md# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md126 : active raid5 sdg[6] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3906404480 blocks super 1.2 level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
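
For reference, a minimal sketch of how an array and cache setting like this would be produced (the exact mdadm invocation is an assumption reconstructed from the mdstat output above; the sysfs path is the standard md interface):

# hypothetical reconstruction: 6-disk RAID5 with a 128k chunk
mdadm --create /dev/md126 --level=5 --chunk=128 --raid-devices=6 /dev/sd[b-g]
# raise the stripe cache from its default of 256 entries;
# each entry uses one 4 KiB page per member disk
echo 4096 > /sys/block/md126/md/stripe_cache_size
cat /sys/block/md126/md/stripe_cache_size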

The single-SSD read/write performance:

root@host1:~# dd if=/dev/sdb of=/dev/zero count=100000 bs=1M
^C76120+0 records in
76119+0 records out
79816556544 bytes (80 GB) copied, 208.278 s, 383 MB/s

root@host1:~# dd of=/dev/sdb if=/dev/zero count=100000 bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 232.943 s, 450 MB/s
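
Note that dd against the raw devices like this goes through the page cache; a variant using direct I/O (just a sketch, not what was run above) would look like:

# sequential read test, bypassing the page cache
dd if=/dev/sdb of=/dev/null bs=1M count=100000 iflag=direct
# sequential write test, bypassing the page cache (destructive: overwrites /dev/sdb)
dd if=/dev/zero of=/dev/sdb bs=1M count=100000 oflag=direct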

The RAID array reaches approximately 1.8 GB/s read and 1.1 GB/s write:
root@sc0:/sys/block/md126/md# dd if=/dev/zero of=/dev/md126 count=100000 bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 94.2039 s, 1.1 GB/s


root@sc0:/sys/block/md126/md# dd of=/dev/zero if=/dev/md126 count=100000 bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB) copied, 59.5551 s, 1.8 GB/s
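
As a rough back-of-envelope, assuming the single-disk numbers above scale linearly: six drives reading at ~383 MB/s should give about 6 x 383 ≈ 2.3 GB/s, and full-stripe RAID5 writes across five data disks at ~450 MB/s should approach 5 x 450 ≈ 2.25 GB/s. So the array reaches roughly 78% of the expected read bandwidth but only about half of the expected write bandwidth.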

Why is the performance so low, especially the write performance?
