MRK wrote:
I spent some time trying to optimize it but that was the best I could get. In any case, both my benchmark and Richard's imply a very significant bottleneck somewhere.
This bottleneck is the SAS controller, at least in my case. I did the same math, multiplying the streaming performance of one drive by the number of drives, and wondered where the shortfall was after tests showed I could only stream reads at 850 MB/s on the same array.
A query to an LSI engineer yielded the following response, which basically boils down to "you get what you pay for" - SAS vs SATA drives.
"Yes, you're at the "practical" limit. With that setup and SAS disks, you will exceed 1200 MB/s. Could go higher than 1,400 MB/s given the right server chipset. However with SATA disks, and the way they break up data transfers, 815 to 850 MB/s is the best you can do. Under SATA, there are multiple connections per I/O request:

* Command: Initiator -> HDD
* DMA Setup: Initiator -> HDD
* DMA Activate: HDD -> Initiator
* Data: HDD -> Initiator
* Status: HDD -> Initiator

And there is little ability with typical SATA disks to combine traffic from different I/Os on the same connection. So you get lots of individual connections being made, used, and broken. Contrast that with SAS, which typically has 2 connections per I/O and will combine traffic from more than one I/O per connection. It uses the SAS links much more efficiently."

Regards,
Richard
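To see why more connections per I/O eats into throughput, here is a rough back-of-envelope model. The link rate, I/O size, and per-connection overhead figures below are illustrative assumptions, not measured values from the engineer's reply; the point is only that a fixed handshake cost paid more times per I/O leaves less of the wire carrying data.

```python
# Illustrative model: each I/O pays a fixed connection-handshake cost per
# connection opened on the link, so a protocol needing more connections
# per I/O achieves lower effective throughput. All figures are assumptions.

LINK_RATE_MBPS = 1200.0   # usable link bandwidth in MB/s (assumed)
IO_SIZE_MB = 0.256        # 256 KiB per I/O request (assumed)
CONN_OVERHEAD_US = 12.0   # microseconds to make/use/break one connection (assumed)

def effective_throughput(connections_per_io: int) -> float:
    """Effective MB/s when each I/O requires the given number of connections."""
    transfer_us = IO_SIZE_MB / LINK_RATE_MBPS * 1e6   # wire time for the data itself
    total_us = transfer_us + connections_per_io * CONN_OVERHEAD_US
    return IO_SIZE_MB / total_us * 1e6

# SAS typically uses ~2 connections per I/O; SATA's FIS exchange uses ~5.
print(f"SAS  (2 connections/I/O): {effective_throughput(2):.0f} MB/s")
print(f"SATA (5 connections/I/O): {effective_throughput(5):.0f} MB/s")
```

With these made-up numbers the SATA case lands well below the SAS case, and the gap widens further with smaller I/O sizes, since the fixed per-connection cost then dominates the transfer time.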