Re: Linux Raid performance


 



MRK wrote:

However, if the total number of IOPS really is the bottleneck on SATA
with the 3.0 Gbit/s LSI cards, why don't they slow down a single SSD
doing 4k random I/O?

We don't know, as we have no information for one of these SSDs attached to an LSI SAS controller.

I'm not sure this is an apples-to-apples comparison. The SSD is a single device, probably connected directly to a motherboard SATA controller channel.

The RAID array is 16 devices attached to a port expander, which is in turn attached to a SAS controller. At the most simplistic level, the SAS controller surely incurs some overhead in addressing each individual drive.


I think if you use dd to read from the 16 underlying devices
simultaneously and independently, without going through MD (output to
/dev/null), you should obtain the full aggregate disk speed of
1.4 GB/sec or so. I think I did this test in the past and observed
exactly that. Can you try? I no longer have our big disk array in my
hands :-(
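The test described above could be sketched roughly as follows. This is a minimal sketch, not a vetted benchmark: the helper name, log paths, and device list are assumptions, and you would substitute the array's actual 16 member disks (it also assumes GNU dd, whose exit summary reports the achieved rate):

```shell
#!/bin/sh
# Hypothetical raw-read test: read each given device independently with dd,
# bypassing MD entirely, then print each device's throughput line from
# dd's transfer summary. Example (device names are an assumption):
#   raw_read_test /dev/sd[b-q]
raw_read_test() {
    for dev in "$@"; do
        # One dd per device, all running in parallel, data discarded;
        # GNU dd writes its transfer summary (bytes, time, rate) to stderr.
        dd if="$dev" of=/dev/null bs=1M count=1024 \
            2>"/tmp/ddtest-$(basename "$dev").log" &
    done
    wait
    # Summing these per-device rates approximates the aggregate raw bandwidth.
    grep -h copied /tmp/ddtest-*.log
}

if [ $# -gt 0 ]; then
    raw_read_test "$@"
fi
```

If the per-device rates here sum to roughly the expected 1.4 GB/sec while reads through MD do not, that would point at MD (or the layers above it) rather than the controller/expander path.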

I'll bear it in mind next time I am in a position to try it.

Regards,

Richard
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

