Re: Re: Re: two raid5 performance

Indeed the chunk sizes of the two arrays differ in this test, but I have already tried the same chunk size on both arrays, and the result is the same.
The question is why, when each single RAID5 can reach 1GB/s of write throughput with dd, writing to the two RAID5 arrays at the same time reaches only 1.4GB/s in total.
Theoretically the combined throughput should reach 2GB/s, so about 30% is lost.
Is there some contention in soft RAID5?
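
For reference, the test is essentially this (a sketch; the device names
match the mdadm output below, the sizes are arbitrary):

   # each array alone reaches ~1GB/s
   dd if=/dev/zero of=/dev/md126 bs=1M count=50000 oflag=direct
   dd if=/dev/zero of=/dev/md129 bs=1M count=50000 oflag=direct

   # both together reach only ~1.4GB/s in total
   dd if=/dev/zero of=/dev/md126 bs=1M count=50000 oflag=direct &
   dd if=/dev/zero of=/dev/md129 bs=1M count=50000 oflag=direct &
   wait

Things I plan to check, in case the limit is not md itself (assuming
these sysfs knobs exist on this kernel; group_thread_cnt is only
available in recent kernels):

   # per-array stripe cache and raid5 worker threads
   cat /sys/block/md126/md/stripe_cache_size
   echo 4096 > /sys/block/md126/md/stripe_cache_size
   echo 4 > /sys/block/md126/md/group_thread_cnt

   # negotiated PCIe link of the HBA (1000 is the LSI vendor ID,
   # since mpt2sas is the LSI driver) - is the bus the bottleneck?
   lspci -d 1000: -vv | grep -i LnkSta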


------------------------------------------------------------------
From: Tommy Apel <tommyapeldk@xxxxxxxxx>
Sent: Monday, 16 December 2013 22:02
To: lilofile <lilofile@xxxxxxxxxx>
Cc: linux-raid <linux-raid@xxxxxxxxxxxxxxx>
Subject: Re: Re: Re: two raid5 performance

First off, your chunk sizes are different on the two volumes, and
secondly, if you're looking for throughput, move them up to something
like 512K or maybe more.
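
For example (an untested sketch; the member devices are taken from your
mdadm -D output below, and --chunk is in kibibytes, so 512 means 512K):

   # destroys the existing array and its data
   mdadm --create /dev/md126 --level=5 --raid-devices=6 --chunk=512 \
         /dev/sd[s-x]

If recreating is not an option, "mdadm --grow /dev/md126 --chunk=512"
can reshape the chunk size in place on a reasonably recent
mdadm/kernel, but it is slow and you want a backup first.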

2013/12/16 lilofile <lilofile@xxxxxxxxxx>:
> the result of mdadm -D is as follows:
>
> root@host0:~# mdadm  -D /dev/md126
> /dev/md126:
>         Version : 1.2
>   Creation Time : Sat Dec  7 16:26:04 2013
>      Raid Level : raid5
>      Array Size : 1171499840 (1117.23 GiB 1199.62 GB)
>   Used Dev Size : 234299968 (223.45 GiB 239.92 GB)
>    Raid Devices : 6
>   Total Devices : 6
>     Persistence : Superblock is persistent
>
>   Intent Bitmap : Internal
>
>     Update Time : Fri Dec 13 22:51:04 2013
>           State : active
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>            Name : host0:md126
>            UUID : aaa3075d:f25e3bd0:9dd347c5:fb58192f
>          Events : 158
>
>     Number   Major   Minor   RaidDevice State
>        0      65      112        0      active sync   /dev/sdx
>        1      65       96        1      active sync   /dev/sdw
>        2      65       80        2      active sync   /dev/sdv
>        3      65       64        3      active sync   /dev/sdu
>        4      65       48        4      active sync   /dev/sdt
>        6      65       32        5      active sync   /dev/sds
> root@host0:~#
>
>
> root@host0:~# mdadm  -D /dev/md129
> /dev/md129:
>         Version : 1.2
>   Creation Time : Sun Dec  8 15:59:11 2013
>      Raid Level : raid5
>      Array Size : 1171498880 (1117.23 GiB 1199.61 GB)
>   Used Dev Size : 234299776 (223.45 GiB 239.92 GB)
>    Raid Devices : 6
>   Total Devices : 6
>     Persistence : Superblock is persistent
>
>     Update Time : Mon Dec 16 21:50:45 2013
>           State : clean
>  Active Devices : 6
> Working Devices : 6
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : left-symmetric
>      Chunk Size : 128K
>
>            Name : host0:md129  (local to host sc0)
>            UUID : b80616a9:49540979:90b20aa6:1b22f67b
>          Events : 31
>
>     Number   Major   Minor   RaidDevice State
>        0      65       16        0      active sync   /dev/sdr
>        1      65        0        1      active sync   /dev/sdq
>        2       8      224        2      active sync   /dev/sdo
>        3       8      240        3      active sync   /dev/sdp
>        4       8      208        4      active sync   /dev/sdn
>        5       8      192        5      active sync   /dev/sdm
> root@host0:~#
>
>
> ------------------------------------------------------------------
> From: Tommy Apel <tommyapeldk@xxxxxxxxx>
> Sent: Monday, 16 December 2013 21:32
> To: lilofile <lilofile@xxxxxxxxxx>
> Cc: linux-raid <linux-raid@xxxxxxxxxxxxxxx>
> Subject: Re: Re: two raid5 performance
>
> Please give us the output of mdadm -D for each array as well, and just
> in case, also the model of the SSDs in use
>
> 2013/12/16 Dag Nygren <dag@xxxxxxxxxx>:
>> On Monday 16 December 2013 15:57:42 lilofile wrote:
>>> the disks involved in the arrays are sTEC SSDs; they are connected via mpt2sas at 6Gb/s
>>
>> I/O bus to the card?
>>
>> Best
>> Dag
>>
>
>
>
> --
>
> /Tommy



-- 

/Tommy