Scott Carey wrote:
> You probably don't want a single array with more than 32 drives anyway;
> it's almost always better to start carving out chunks and using software
> RAID 0 or 1 on top of that, for various reasons. I wouldn't put more
> than 16 drives in one array on any of these RAID cards; they're just not
> optimized for really big arrays and tend to fade somewhere between 6 and
> 16 drives in one array, depending on the quality.

This is what I'm looking at now. The server I'm working on at the moment
has a PERC6/e and 3x MD1000s, which need to be tested in a few setups. I
still need to write a benchmarker (I haven't found one yet that comes
close to replicating our DB usage patterns), but I intend to try:

1. 3x h/w RAID10 (one per shelf), software RAID0 across them
2. Lots of h/w RAID1 pairs, software RAID0 on top, if the PERC will let
   me create enough arrays
3. Pure s/w RAID10, if I can convince the PERC to let the OS see the
   individual disks
4. 2x h/w RAID30, software RAID0

(Rough sketches of what I mean by 1-3, plus a stand-in benchmark, are at
the end of this mail.)

I'm not holding out much hope for the last one :)

I'm just glad that work on a rewrite of my inherited backend systems
should start soon; we can get rid of the multi-TB MySQL hell and move to
a distributed PG setup on dirt-cheap Dell R200s/blades.

> You can do direct-attached storage with 100+ drives if you want. The
> price and manageability cost go up a lot if it gets too big, however.
> Having global hot-spare drives is critical. Not that the cost of using
> SANs and such is low... SAS expanders have made DAS with large arrays
> very accessible, though.

For large storage arrays (RAID60 or similar) you can't beat a RAID
controller and disk shelf (or several), especially if you keep the
raidsets small and use cheap, ludicrous-capacity SATA disks. You just
need to be aware that performance doesn't scale well or easily beyond
1-2 shelves on these things.
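
Layout 1 would boil down to something like the following under Linux md.
The device names are made up (check what the controller actually exposes)
and the 256 KB chunk size is just a starting point, not gospel; layout 2
is the same idea, only with more, smaller member devices:

    # Each shelf's h/w RAID10 appears to the OS as one block device;
    # here I'm pretending they show up as /dev/sdb, /dev/sdc, /dev/sdd.
    mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=256 \
          /dev/sdb /dev/sdc /dev/sdd

    # Then filesystem and mount as usual, e.g.:
    mkfs.xfs /dev/md0
    mount -o noatime /dev/md0 /var/lib/pgsql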
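
Layout 3 only works if the PERC can pass the disks through more or less
raw (on these cards that usually means one single-disk RAID0 volume per
spindle). Assuming the OS then sees /dev/sdb through /dev/sdi, a software
RAID10 over eight of them would be roughly:

    # The far-2 layout tends to read like a stripe; worth comparing
    # against the default near layout (n2) on the real hardware.
    mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=256 \
          --raid-devices=8 /dev/sd[b-i]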
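
Until the custom benchmarker exists, an fio job along these lines is a
rough stand-in for what I mean by "DB usage patterns" -- the 75/25 random
read/write mix, 8 kB blocks (PG's page size) and queue depths are guesses
at our workload, not measurements, so adjust to taste. Note it writes
straight to the device, so run it before mkfs:

    [global]
    ioengine=libaio
    direct=1
    bs=8k
    runtime=300
    time_based
    group_reporting

    [oltp-ish]
    filename=/dev/md0
    rw=randrw
    rwmixread=75
    iodepth=32
    numjobs=8

Run it with 'fio oltp-ish.fio' against each layout and compare the
aggregate IOPS and latency figures.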