We don't use 10 ports; we use 8 ports into a 36-port expander. The 8 ports
act as a single wide port. We are hitting a performance limit of circa
1.6 - 1.9 GB/sec regardless of the number of drives, so it maxes out at
around 8 or 9 drives (with 15K). RAID6 was around 900 MB/sec, I recall.
We expect more with emerging expanders.

We were hoping to use DM MPIO to increase performance using multiple
cards and paths, but MPIO at best matches the performance of a single
card and more likely pulls it down... but this is a different topic, I guess.

XFS in the past has often increased performance - not always on simple
sequential writes, but filesystems are a lot better at intelligently
caching data... so I am very keen to help in whatever way I can to
resolve the FS > MD performance problem.

Thanks again.... Mark

On Tue, Oct 13, 2009 at 3:30 PM, Asdo <asdo@xxxxxxxxxxxxx> wrote:
>
>> On Tue, Oct 13, 2009 at 2:12 PM, mark delfman
>> <markdelfman@xxxxxxxxxxxxxx> wrote:
>>
>>> We are upgrading mainly because of support for the emerging LSI SAS2
>>> cards (which we are beta testing now)
>>>
>
> What is this LSI SAS2 card you have with 10+ ports? The only 10+ port LSI
> card I see is the 84016E, and it is SAS1.
>
> You say the driver for such a card is included in the vanilla kernel at
> 2.6.30? That would be very nice... I grepped the 2.6.31 kernel source for
> LSI cards but I can't find device strings such as 84016E ...
>
> Thank you
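
For what it's worth, below is a rough sketch (in Python) of the sort of
sequential-write comparison one could run to quantify the FS > MD gap:
write a few GiB with O_DIRECT straight to the MD device, then to a file
on the XFS mount on top of it, and compare throughput. The device path,
mount point, and sizes are placeholders only, and writing to the raw
device is destructive, so this is scratch-array material, not a
definitive test harness.

#!/usr/bin/env python3
# Rough sequential-write comparison: raw MD device vs a file on XFS on top.
# Paths below are placeholders -- writing to the raw device destroys data,
# so only point RAW_DEVICE at a scratch array.
import mmap
import os
import time

RAW_DEVICE = "/dev/md0"               # placeholder: MD array (scratch only!)
XFS_FILE   = "/mnt/xfs/seqwrite.tmp"  # placeholder: file on the XFS mount
BLOCK_SIZE = 1024 * 1024              # 1 MiB per write
TOTAL      = 8 * 1024 ** 3            # 8 GiB per run

def seq_write_mb_s(path):
    """Sequentially write TOTAL bytes to `path` with O_DIRECT, return MB/s."""
    # O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, BLOCK_SIZE)
    buf.write(b"\xa5" * BLOCK_SIZE)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    try:
        written = 0
        start = time.monotonic()
        while written < TOTAL:
            written += os.write(fd, buf)
        os.fsync(fd)
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return written / (1024 * 1024) / elapsed

if __name__ == "__main__":
    print("raw MD device : %.0f MB/s" % seq_write_mb_s(RAW_DEVICE))
    print("file on XFS   : %.0f MB/s" % seq_write_mb_s(XFS_FILE))

O_DIRECT keeps the page cache out of the data path so the two numbers
reflect the underlying device rather than RAM; varying BLOCK_SIZE is an
easy way to see how the gap changes with request size.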