The maximum throughput you'll get is the speed of the PCI bus, so make sure to note which version your server has. The Silicon Image controller will be the bottleneck here, but I don't have any numbers to say how much of a loss you'd be looking at; a rough back-of-envelope sketch of the ceilings involved is at the end of this message. You'd have to search for people who have already benchmarked their systems, or buy/request a card and test it yourself. If you do get a card and test it, please report back to us and update the wiki:
http://linux-raid.osdl.org/index.php/Performance

On Thu, Sep 10, 2009 at 9:35 PM, Drew <drew.kay@xxxxxxxxx> wrote:
>> If you're looking at port multipliers, you need to find PCI-Express
>> modules if you want them to be fast. The PCI ones are gonna be very
>> slow when you have more than 2 disks per card.
>
> I'm definitely going to use the PCI-X/PCIe slots for the host adapter.
>
> What I'm wondering is: if I use an HBA and port multiplier that support
> FIS-based switching, say a Sil 3124 & 3726, how much of a loss in data
> transfer rate can I expect from the RAID array built off the PM as
> opposed to each disk plugged in separately?
>
> An example configuration I'm looking at is a Sil3124 4-port HBA with
> three of the ports having Sil3726 5:1 PMs attached. Each PM then has
> four disks hung off it. If I create a RAID5 array on each PM, what sort
> of speed degradation would I be looking at compared to making a RAID5
> off just the 3124?
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
>

--
Majed B.
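
P.S. To put rough numbers on the bottleneck argument, here is a minimal back-of-envelope sketch in Python. Every figure in it is an assumed round number (3 Gb/s SATA taken as ~300 MB/s usable after 8b/10b encoding, ~100 MB/s drives, nominal bus bandwidths), not a measurement, and the function is purely illustrative; the real loss through a 3124/3726 setup still needs benchmarking.

#!/usr/bin/env python3
# Rough per-disk throughput ceiling for disks hung off SATA port multipliers.
# Every number below is an assumed round figure, not a benchmark result.

def per_disk_ceiling(bus_mb_s, pm_ports, link_mb_s, disks_per_pm, disk_mb_s):
    """Smallest of the three limits when all disks stream at once:
    the drive itself, its share of the PM uplink, its share of the host bus."""
    total_disks = pm_ports * disks_per_pm
    per_disk = min(disk_mb_s,
                   link_mb_s / disks_per_pm,
                   bus_mb_s / total_disks)
    return per_disk, per_disk * total_disks

# 3 HBA ports with a PM each, 4 disks per PM, ~100 MB/s drives,
# 3 Gb/s links (~300 MB/s usable per port).
for bus, mb_s in (("PCI 32-bit/33MHz", 133),
                  ("PCIe x1 (gen1)  ", 250),
                  ("PCI-X 64/133    ", 1066)):
    disk, total = per_disk_ceiling(mb_s, 3, 300, 4, 100)
    print("%s ~%3.0f MB/s per disk, ~%4.0f MB/s aggregate" % (bus, disk, total))

In the all-disks-streaming case, plain PCI caps the whole card at ~133 MB/s, while on PCI-X or PCIe the shared PM uplink becomes the limit at roughly 300 MB/s per port, i.e. about 75 MB/s per drive with four drives behind each PM, and that is before any controller overhead, which only a real benchmark will show.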