Re: Reply: Reply: two raid5 performance

On 12/16/2013 7:13 AM, lilofile wrote:
> mpt2sas is 6 Gb/s, PCIe 2.0. The card has four ports, so the theoretical rate can reach 2.4 GB/s; it is not the bottleneck. I tested reading the two RAID5 arrays using dd,
> such as  dd if=/dev/md0 of=/dev/zero bs=1M
>              dd if=/dev/md1 of=/dev/zero bs=1M
> The total read bandwidth can reach 2.3 GB/s, so the I/O bus is not a problem.

Why are you using dd again?  I explained in your previous thread why dd
will never saturate your SSDs with write IO.  Use FIO.  If you don't
know how to make FIO do what you want, then ask.
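
For example, something along these lines should generate parallel,
direct, large-block write IO (the parameters are only a starting point
for your hardware, and note it writes straight to the array, destroying
whatever is on it):

  fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
      --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
      --runtime=60 --time_based --group_reporting

Run the same against /dev/md1 at the same time to load both arrays.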

BTW, don't start a new thread for the same issue.  Your last thread and
this one deal with the same RAID5-on-STEC-SSDs problem.  By starting a
new thread everyone loses the context and history, which are critically
important when keeping track of performance tests and configurations.

I can't help but point out some irony here.  You're concerned with
throughput, yet you're connecting 12 SSDs, ~500 MB/s each, to a SAS
backplane that connects to the HBA via 4-lane 6 Gb/s SAS.  With RAID5
that's ~5 GB/s of drive throughput funneled through a ~2.4 GB/s
SFF-8088 cable.  So once you test properly and see the write throughput
you already have, you'll find that your cabling/backplane limits you to
roughly half the drives' aggregate throughput.  For reads you are
already seeing this limit.
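
Back-of-the-envelope, assuming ~500 MB/s per SSD and two 6-drive RAID5
arrays behind the same 4-lane link:

  12 SSDs x ~500 MB/s                     = ~6.0 GB/s raw
  2 x (5 data drives in RAID5) x 500 MB/s = ~5.0 GB/s stripe throughput
  4 lanes x 6 Gb/s (~600 MB/s per lane)   = ~2.4 GB/s over the SFF-8088 link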

-- 
Stan