If you want to have no contention on the PCI bus, Supermicro even makes a board with two south bridges so that the PCI bus is split. However, PCI-X offers about 800 MB/s, so even two of these cards should not be a problem, but the option is there if you want it. We actually used those boards, though that was to support two dual-port Myrinet cards in the same box, or about 8 Gbps. Two 8-disk sets will only run in the hundreds of Mbit/s.
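For reference, here is a quick back-of-the-envelope sketch of where that bus figure comes from (theoretical peaks only; real-world throughput is lower):

  # theoretical peak PCI-X bandwidth = bus width (bits) x clock (MHz) / 8 bits per byte
  for mhz in 66 100 133; do
      echo "64-bit @ ${mhz} MHz: $(( 64 * mhz / 8 )) MB/s theoretical peak"
  done
  # prints 528, 800 and 1064 MB/s respectively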
I also recommend you plan for hot spares, and keep at least two cold-standby disks and at least one spare controller on hand.
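As a minimal sketch of the hot-spare part for the md software RAID you describe (device names here are made up for illustration), an unused disk can be attached to a healthy array and the kernel will rebuild onto it automatically when a member fails:

  mdadm /dev/md0 --add /dev/sdq1          # joins as a hot spare while the array is clean (hypothetical names)
  cat /proc/mdstat                        # the spare shows up marked (S)
  mdadm --monitor --scan --mail=root &    # get mail on failures so the cold spares get swapped in promptly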
But... for the cost you probably do not want to roll your own RAID any more. For about the same money you can get a fully integrated solution with redundant controllers and a Fibre Channel interface to your control node or switch. You can either attach the RAID to a system directly, or buy a Fibre Channel switch and mix and match between multiple disk packs and multiple nodes.
Two that I have worked with are the 4.5 TB, 16-disk unit from http://www.infortrend.com/ and the Xserve RAID from Apple (http://www.apple.com/xserve/raid/), which can handle up to 3.5 TB.
The former uses SATA drives, the latter PATA. Both have hot swap, redundant controllers, and large cache memory (with optional batteries that keep the cache alive for days), and both can hook into any Fibre Channel PCI card or Fibre Channel switch.
Having used both roll-your-own RAID in our clusters and the Fibre Channel attached IDE RAID devices, the Fibre Channel approach wins hands down for reliability, ease of support, performance (a weak spot in the past), and even cost, which is comparable.
Cheers,
Terrence
Joël Bourquard wrote:
Hi,
I would appreciate some advice on SATA controllers, and since many people appear to share the concern, I'm posting it here.
The arrays will be two software RAID5 sets of 8 SATA disks each.
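(For concreteness, one such set might be built along these lines with mdadm; the device names and chunk size below are just placeholders, not a recommendation:)

  mdadm --create /dev/md0 --level=5 --raid-devices=8 --chunk=64 /dev/sd[b-i]   # first 8-disk set (placeholder names)
  mdadm --create /dev/md1 --level=5 --raid-devices=8 --chunk=64 /dev/sd[j-q]   # second 8-disk set
  cat /proc/mdstat                                                             # watch the initial parity build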
The mainboard will probably be a Tyan S2880GNR. In short it provides:
- two 64-bit 66 MHz PCI-X slots
- two 64-bit 66/133 MHz PCI-X slots
Unless it proves to be foolish, I'll probably put 1 CPU on it.
Now I was about to simply take two Highpoint RocketRAID 1820, but according to some posts here, it seems the guys at Highpoint didn't provide great Linux support after all. Sigh.
Now it seems they added a "Linux OpenBuild driver" there: http://www.highpoint-tech.com/USA/brr1820.htm
Does it use libata? Is it just a marketing joke? Has anyone tried it with an actual board?
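One crude way to answer the libata question empirically, if anyone has the hardware (the module name hptmv below is only a guess), would be:

  modprobe hptmv                   # load whatever module their driver package builds (name is a guess)
  lsmod | egrep 'libata|hpt'       # if libata lists the vendor module under "Used by", it sits on top of libata
  dmesg | grep -i ata | tail -20   # libata drivers also register the disks as SCSI /dev/sdX devices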
It seems Jeff Garzik recommends the Promise or ServerWorks 4- and 8-port boards instead (from http://www.spinics.net/lists/raid/msg03936.html).
Trouble is, I have seen no such 8-port board. The only thing I found on the web was a SuperMicro board with a Marvell (?) chip onboard: http://www.supermicro.com/PRODUCT/Accessories/DAC-SATA-MV8.htm
They seem to have sources for a kernel module there: ftp://ftp.supermicro.com/driver/SATA/DAC_SATA-MV8/LinuxIAL/
Now I'm a bit lost. I mean, of these two boards (Highpoint and SuperMicro), which one (if any) is supposed to work well in Linux 2.4.20+ or 2.6.0+? Which one would work with TCQ enabled?
My primary concern when building a RAID5 is read performance, so it would be nice if my controller were among the fastest when using libata. In particular, TCQ support (when it happens) will be important.
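Once an array exists, I suppose a crude sequential-read comparison between controllers is as simple as the following (with /dev/md0 and /dev/sdb as placeholder names):

  dd if=/dev/md0 of=/dev/null bs=1M count=4096   # stream 4 GB off the array; use a size well beyond RAM so the page cache doesn't flatter the result
  hdparm -tT /dev/sdb                            # per-disk baseline behind a given controller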
Since hardware RAID is not needed, and there are four PCI-X slots, I thought about using four 4-port controllers (or maybe three 6-port controllers).
Now...
- are they slower than 3ware (JBOD) and other 8-port controllers?
- do they have onboard SRAM?
Fortunately, since it is a dedicated machine, pretty much any kernel version can be used if needed.
Sorry for the long post.
Thanks in advance!
Joel