Re: Linux Raid performance

Happy Easter!!!

So, 550-600MB/s is the best we have seen with Linux RAID using 16-24 SAS drives.

Not sure if it's appropriate to ask on this list - has anyone seen
better numbers with a non-Linux RAID stack? Perhaps FreeBSD/Lustre?
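
For anyone wanting to compare: a figure like that typically comes from a
big sequential read off the raw md device. Below is a minimal sketch of
such a test in Python - the device path and sizes are placeholders, and
dd or fio with a large block size reports the same kind of number.

#!/usr/bin/env python3
"""Minimal streaming-read benchmark sketch (device path and sizes are assumptions)."""
import mmap
import os
import time

DEVICE = "/dev/md0"           # assumed md device name; adjust to your array
BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB per read
TOTAL_BYTES = 8 * 1024**3     # read 8 GiB in total

def main():
    # O_DIRECT avoids the page cache inflating the result; it needs an
    # aligned buffer, and an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, BLOCK_SIZE)
    fd = os.open(DEVICE, os.O_RDONLY | os.O_DIRECT)
    done = 0
    start = time.monotonic()
    try:
        while done < TOTAL_BYTES:
            n = os.readv(fd, [buf])   # one large sequential read
            if n == 0:
                break                 # hit end of device
            done += n
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start
    print(f"read {done / 1e6:.0f} MB in {elapsed:.1f} s "
          f"-> {done / 1e6 / elapsed:.0f} MB/s")

if __name__ == "__main__":
    main()

(Run against the whole array device, not a file on a filesystem, so the
md layer is what you are actually measuring.)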

Thanks for your time!

On Sun, Apr 4, 2010 at 8:00 AM, MRK <mrk@xxxxxxxxxxxxx> wrote:
> Richard Scobie wrote:
>>
>> MRK wrote:
>>
>>> I spent some time trying to optimize it, but that was the best I could
>>> get. Anyway, both my benchmark and Richard's imply a very significant
>>> bottleneck somewhere.
>>
>> This bottleneck is the SAS controller, at least in my case. I did the same
>> math regarding the streaming performance of one drive times the number of
>> drives and wondered where the shortfall was, after tests showed I could
>> only achieve streaming reads of 850MB/s on the same array.
>>
>> A query to an LSI engineer got the following response, which basically
>> boils down to "you get what you pay for" - SAS vs SATA drives.
>>
>> "Yes, you're at the "practical" limit.
>>
>> With that setup and SAS disks, you will exceed 1,200 MB/s.  Could go
>> higher than 1,400 MB/s given the right server chipset.
>>
>> However with SATA disks, and the way they break up data transfers, 815
>> to 850 MB/s is the best you can do.
>>
>> Under SATA, there are multiple connections per I/O request:
>>  * Command       Initiator -> HDD
>>  * DMA Setup     Initiator -> HDD
>>  * DMA Activate  HDD -> Initiator
>>  * Data          HDD -> Initiator
>>  * Status        HDD -> Initiator
>> And there is little ability with typical SATA disks to combine traffic
>> from different I/Os on the same connection.  So you get lots of
>> individual connections being made, used, & broken.
>>
>> Contrast that with SAS, which typically has 2 connections per I/O and
>> will combine traffic from more than one I/O per connection.  It uses the
>> SAS links much more efficiently."
>
> Firstly: Happy Easter!  :-)
>
> Secondly:
>
> If this is true, then one won't achieve higher speeds even with RAID-0. If
> anybody can test this... I cannot right now.
>
> I am a bit surprised, though. There is one SATA link per drive, so if one
> drive can do 90MB/sec, N drives on N cables should do Nx90MB/sec.
> If that does not happen, then the controller's chipset must be the
> bottleneck.
> If that is the case, the newer LSI controllers at 6.0Gbit/sec might do better
> (they supposedly have a faster chip). Alternatively, one could buy more
> controller cards and divide the drives among them. Either of these
> workarounds would still be cheaper than SAS drives.
>
>
>
>
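
Trying to put rough numbers on both MRK's Nx90MB/sec point and the per-I/O
connection overhead the LSI engineer describes: the toy model below is
purely illustrative - the drive count, per-phase cost and average I/O size
are guesses, not measurements - but it shows how connection setup/teardown
serialized on the controller could plausibly eat the gap between the ideal
aggregate and what gets measured.

"""Back-of-envelope check of the numbers discussed above (all inputs assumed)."""

DRIVES = 16                  # assumed drive count (not stated above)
PER_DRIVE_MB_S = 90.0        # MRK's figure for a single drive
IO_SIZE_KB = 128             # assumed average transfer size per I/O
PHASE_COST_US = 10.0         # assumed controller cost per connection phase

def effective_mb_s(phases_per_io):
    # Time to move one I/O's worth of data at the ideal aggregate rate...
    ideal = DRIVES * PER_DRIVE_MB_S
    data_us = (IO_SIZE_KB / 1024.0) / ideal * 1e6
    # ...plus the connection phases, assumed serialized on the controller.
    total_us = data_us + phases_per_io * PHASE_COST_US
    return (IO_SIZE_KB / 1024.0) / (total_us / 1e6)

print(f"ideal aggregate:          {DRIVES * PER_DRIVE_MB_S:.0f} MB/s")
print(f"SATA-style (5 phases/IO): {effective_mb_s(5):.0f} MB/s")
print(f"SAS-style  (2 phases/IO): {effective_mb_s(2):.0f} MB/s")

With those made-up inputs it lands in the same ballpark as the ~850MB/s and
1,200+MB/s figures quoted above, but nothing more should be read into a
model this crude.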
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
