I have a question about how Postgres makes use of RAID arrays for performance, because we are considering buying a twelve-disc array for performance reasons. I'm interested in how performance scales with the number of discs in the array.

Now, I know that for an OLTP workload (in other words, lots of small parallel random accesses), throughput will scale almost linearly with the number of discs. However, I'm more interested in the performance of individual queries, particularly those where Postgres has to do an index scan, which results in a single query performing lots of random accesses to the disc system. In theory this *can* scale with the number of discs too - my question is, does it?

Does Postgres issue each random access request in turn, waiting for each one to complete before issuing the next (in which case performance will never exceed that of a single disc), or does it use some clever asynchronous access method to hand the OS a queue of random access requests that can be distributed among the available discs?

Any knowledgeable answers or benchmark proof would be appreciated.

Matthew

-- 
"To err is human; to really louse things up requires root privileges."
        -- Alexander Pope, slightly paraphrased
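To make concrete what I mean by the two access patterns, here is a toy sketch in plain Python with threads - nothing Postgres-specific, and all the file names, block sizes, and worker counts are invented for illustration. The first loop models strictly serial random reads (bounded by a single disc's seek time); the second issues many reads at once, so the OS and RAID controller have a queue they could spread across the spindles:

```python
import os
import random
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 8192       # Postgres-like 8 kB page (illustrative)
NBLOCKS = 256      # scratch file standing in for a table/index

# Build a scratch file to read from.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(BLOCK * NBLOCKS))
os.close(fd)

offsets = random.sample(range(NBLOCKS), 64)

def read_block(n):
    # One random-access read of a single block.
    with open(path, "rb") as f:
        f.seek(n * BLOCK)
        return f.read(BLOCK)

# Pattern 1: serial - issue one request, wait, issue the next.
serial = [read_block(n) for n in offsets]

# Pattern 2: overlapped - many requests in flight at once, which the
# kernel can reorder and distribute across the array's discs.
with ThreadPoolExecutor(max_workers=8) as pool:
    overlapped = list(pool.map(read_block, offsets))

assert serial == overlapped   # same data, different I/O pattern
os.unlink(path)
```

On a single disc the two patterns take about the same time; on a twelve-disc array only the second one can, in principle, go faster - hence the question of which pattern Postgres actually uses during an index scan.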