Re: Linux Raid performance

Drew wrote:
The RAID array is 16 devices attached to a port expander, which is in turn attached to a SAS controller. At the most simplistic level, surely the SAS controller has some overhead in selecting which drive is being addressed.

Don't forget that a port expander is still limited to the bus speed
of the link between it and the HBA. It doesn't matter how many
drives you hang off an expander; you will never exceed the rated
speed (1.5/3/6 Gb/s) of that one port on the HBA.

If it is a SAS connection to the RAID array, it is often a quad-channel cable (12 Gb/s, i.e. 4x3 Gb/s). That is what is on the external connector of the card, not a single-channel SAS/SATA link like the lower-end stuff, and most of the more expensive expanders and RAID cabinets use that.

Still, the entire 16-disk setup will be limited to less than whatever the interconnect is, and if you start piling more than 16 disks onto it, things get pretty messy pretty fast.
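A quick back-of-the-envelope sketch of that ceiling (the 4x3 Gb/s wide-port figure is from above; the 8b/10b coding overhead is how 3 Gb/s SAS lanes are encoded, and the 16-disk count is just illustrative):

    # Rough ceiling on a 4-lane (wide) SAS port shared by N busy disks.
    lanes = 4
    line_rate_gbps = 3.0        # per-lane line rate
    encoding_efficiency = 0.8   # 8b/10b coding: 8 data bits per 10 line bits
    disks = 16

    aggregate_gbps = lanes * line_rate_gbps              # 12 Gb/s raw
    usable_gbps = aggregate_gbps * encoding_efficiency   # ~9.6 Gb/s of data
    per_disk_gbps = usable_gbps / disks                  # ~0.6 Gb/s each
    print(f"{aggregate_gbps:.0f} Gb/s raw, ~{usable_gbps:.1f} Gb/s usable, "
          f"~{per_disk_gbps:.2f} Gb/s per busy disk ({disks} disks)")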


Say you have four drives behind an expander on a 6 Gb/s link. Each
drive in the array could still run bonnie++ at the full 6 Gb/s on its
own, but once you try to write to all four drives simultaneously
(RAID-5/6), the best you can get out of each is around 1.5 Gb/s.
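The same sketch for a single uplink (link speed and drive count taken from the example above):

    # Per-drive share of one 6 Gb/s expander uplink under concurrent writes.
    link_gbps = 6.0   # one lane between HBA and expander
    drives = 4        # drives written simultaneously (e.g. a RAID-5/6 stripe)

    print(f"~{link_gbps / drives:.1f} Gb/s per drive")   # 6/4 = 1.5 Gb/s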

That's why I don't use expanders except for archival SATA drives,
which AFAIK only go one expander deep. The performance penalty isn't
worth the cost savings in my book.



