On Fri, Feb 6, 2009 at 2:04 AM, Matt Burke <mattblists@xxxxxxxxxxxxx> wrote:
> Scott Carey wrote:
>> You probably don't want a single array with more than 32 drives anyway;
>> it's almost always better to start carving out chunks and using software
>> RAID 0 or 1 on top of that for various reasons. I wouldn't put more than
>> 16 drives in one array on any of these RAID cards; they're just not
>> optimized for really big arrays and tend to fade somewhere between 6 and
>> 16 drives in one array, depending on the quality.
>
> This is what I'm looking at now. The server I'm working on at the moment
> currently has a PERC6/e and 3x MD1000s, which needs to be tested in a few
> setups. I still need to write a benchmarker (I haven't found one yet that
> comes close to replicating our DB usage patterns), but I intend to try:
>
> 1. 3x h/w RAID10 (one per shelf), software RAID0

Should work pretty well.

> 2. lots x h/w RAID1, software RAID0, if the PERC will let me create
> enough arrays

I don't recall the maximum number of arrays. I'm betting it's less than that.

> 3. Pure s/w RAID10, if I can convince the PERC to let the OS see the disks

Look for JBOD mode.

> 4. 2x h/w RAID30, software RAID0
>
> I'm not holding out much hope for the last one :)

Me either. :)
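
On the benchmarker: here's a minimal sketch of the sort of thing that could stand in until something closer to the real workload exists. It assumes psycopg2 is installed and that there's a test table named "items" with an integer "id" primary key; the DSN, table/column names, key range, and thread/duration numbers are placeholders, not anything from the setup described above.

import random
import threading
import time

import psycopg2

DSN = "dbname=testdb user=postgres"   # hypothetical connection string
N_THREADS = 8                         # rough stand-in for concurrent clients
DURATION = 60                         # seconds per run
MAX_ID = 1_000_000                    # assumed key range of the test table

latencies = []
lock = threading.Lock()

def worker(stop_at):
    conn = psycopg2.connect(DSN)
    conn.autocommit = True
    cur = conn.cursor()
    local = []
    while time.time() < stop_at:
        key = random.randint(1, MAX_ID)
        t0 = time.time()
        # Random single-row lookup: a crude proxy for an OLTP-style read mix.
        cur.execute("SELECT * FROM items WHERE id = %s", (key,))
        cur.fetchall()
        local.append(time.time() - t0)
    with lock:
        latencies.extend(local)
    conn.close()

if __name__ == "__main__":
    stop_at = time.time() + DURATION
    threads = [threading.Thread(target=worker, args=(stop_at,))
               for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies.sort()
    print("queries: %d  tps: %.1f" % (len(latencies), len(latencies) / DURATION))
    if latencies:
        print("median latency: %.1f ms  p95: %.1f ms" % (
            latencies[len(latencies) // 2] * 1000,
            latencies[int(len(latencies) * 0.95)] * 1000))

Running it once per disk layout (RAID10-per-shelf + s/w RAID0, many RAID1 pairs + s/w RAID0, pure s/w RAID10, etc.) and comparing tps and tail latency would at least give a consistent yardstick, even if the query mix is far simpler than the production pattern.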