On Wed, Jul 02, 2008 at 02:45:46PM -0500, Matt Garman wrote:
> On Wed, Jul 02, 2008 at 01:08:04PM -0500, David Lethe wrote:
> > Everything is a potential bottleneck. As I am under NDA with most
> > of the controller vendors, I cannot provide specifics, but suffice
> > it to say that certain cards with certain chipsets max out well
> > under their published speeds. Heck, you could attach solid-state
> > disks with random I/O access times in the nanosecond range and
> > still only get 150 MB/s out of certain controllers, even on a
> > PCIe x16 bus.
>
> Short of signing an NDA, how would one go about determining which
> chipsets are least likely to be a bottleneck? I'm interested in
> building an NFS server with a Linux software RAID-5 data store. To
> me, that means my I/O subsystem should be as fast and capable as
> possible.
>
> For example, looking at the block diagram [1] of Intel's P45/ICH10
> chipset [2], it appears that the link between the north and south
> bridges is only 2 GB/s. I would think that any RAID level that
> needs the CPU (e.g. for parity calculations) would clog that link
> fairly quickly, at least if large block transfers are taking place.
> And then I wonder what impact that has on the performance of the
> NIC(s) (I don't know how much a NIC has to talk to the CPU).

2 GB/s should be adequate; not many RAID setups are in this class.
In theory you would need more than about 20 disks to generate that
much bandwidth, and the on-board I/O controllers of a typical
motherboard only connect 4 to 8 disks. Parity calculations are done
by the CPU in RAM and should not touch the northbridge-southbridge
link on vanilla motherboards.

best regards
keld
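
P.S. For anyone who wants to sanity-check the 20-disk figure, here is
a minimal sketch of the arithmetic. The 100 MB/s per-disk sequential
rate is my assumption for a current 7200 rpm drive, not a measured
number:

  # Back-of-envelope: how many disks saturate a 2 GB/s
  # northbridge-southbridge link? The per-disk streaming rate is an
  # assumed round figure, not a measurement.
  link_mb_s = 2000   # 2 GB/s link, per the P45/ICH10 block diagram
  disk_mb_s = 100    # assumed sequential throughput of one disk

  print("disks needed to saturate the link: %d"
        % (link_mb_s // disk_mb_s))   # -> 20

  # A typical motherboard controller offers 4-8 ports:
  for n in (4, 8):
      print("%d disks: ~%d MB/s (%d%% of the link)"
            % (n, n * disk_mb_s, 100 * n * disk_mb_s // link_mb_s))

So with 4 to 8 disks you stay at roughly 20-40% of the link even for
pure sequential streaming, which is the worst case here.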