On Thu, 2011-05-19 at 15:10 -0400, Thomas Harold wrote:
> On 5/19/2011 8:26 AM, Ed W wrote:
> > Hi, following on from a recent thread, can folks with decent
> > multi-port HBA adaptors please chime in with some model numbers of
> > known decent adaptors please?
> >
> > The required use is to grow from currently 8-ish drives to perhaps
> > 12-24 drives per machine. (It partitions out as: one or more RAID6
> > arrays for data, plus a couple of backup drives.)
> >
> > Ideally I would like a controller with writeback cache and BBU, since
> > whilst this office machine is likely quite underused, for any
> > sensible amount of IO (some of the other machines we might upgrade)
> > this seems to give a 10-100x increase in IOPS? For the moment it's
> > just a nice-to-have though.
> >
> > I only intend to use Linux software RAID, so any onboard RAID
> > functionality is just a liability. Budget is either low (£100-ish)
> > for multi-port HBAs without cache, up to £1000-ish for 16-24 port
> > high-performance cache controllers.
>
> I've been using a SuperMicro AOC-SASLP-MV8 (which is on your avoid
> list), which reports itself as:
>
> class: SCSI
> bus: PCI
> detached: 0
> driver: mvsas
> desc: "Marvell Technology Group Ltd. MV64460/64461/64462 System
> Controller, Revision B"
> vendorId: 11ab
> deviceId: 6485
> subVendorId: 15d9
> subDeviceId: 0500
>
> I've had it about 6 months at this point, with SATA drives hooked up
> to it. The issues I've had with it dropping disks from the 6-disk
> RAID-10 array on CentOS 5.5 / 5.6 can probably be traced to:
>
> Not using enterprise-grade SATA disks (the consumer drives take too
> long to time out on a bad seek, so mdadm dropped them from the array),
> possibly combined with using a really inexpensive set of removable
> drive trays. There were a lot of times after the weekly resync where
> the entire array went offline due to multiple drives being dropped.
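That timeout mismatch is worth addressing regardless of the controller: consumer drives can spend well over the kernel's default 30-second SCSI command timeout retrying a bad sector, at which point md kicks the disk even though it would eventually have responded. Where the drive supports SCT ERC you can cap its internal error recovery below the kernel timeout; where it doesn't, raising the kernel timeout is the fallback. A rough sketch (device names are just examples for a 6-disk array, and not every consumer drive accepts the scterc command; none of these settings survive a power cycle, so they belong in a boot script):

```shell
# Try to cap each member drive's internal error recovery (SCT ERC)
# at 7 seconds; smartctl takes the value in tenths of a second.
for dev in /dev/sd[a-f]; do
    smartctl -l scterc,70,70 "$dev" || echo "$dev: SCT ERC not accepted"
done

# For drives that refuse SCT ERC, raise the kernel's per-command
# timeout (default 30s) well above the drive's worst-case recovery
# time, so md doesn't drop the disk mid-retry.
for disk in /sys/block/sd[a-f]; do
    echo 180 > "$disk/device/timeout"
done
```

Either way the goal is the same: make sure the drive gives up (or the kernel keeps waiting) before md concludes the disk is dead.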
>
> Under normal operation it reads/writes to the disks fine and works
> fine as a controller. Since this is my own personal server, I have not
> tested it with good SAS disks or enterprise SATAs and good drive
> enclosures. I've since switched over to just hooking up a pair of
> RAID1 arrays to it, with a direct connection from the card to the
> drives (no removable trays), but I don't have enough time on the new
> setup to say that the problem is permanently fixed yet.
>
> The card is inexpensive, which is a plus. It's a PCIe x4 card. I don't
> know whether it would be better behaved with a better class of disks /
> enclosures.

It's inexpensive, but unfortunately you are describing symptoms that belong to the chipset. It remains firmly on my avoidance list, and I have one...

Rudy
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html