On 13/10/10 21:38, Neil Whelchel wrote:
So with our conclusion pile so far we can deduce that if we were to keep all of our data in two-column tables (one to link them together, and the other to store one column of data), we stand a much better chance of making the entire table that is to be counted fit in RAM, so we simply apply the WHERE clause to a specific table as opposed to a column within a wider table... This seems to defeat the entire goal of the relational database...
That is a bit excessive, I think - a more reasonable conclusion to draw is that tables bigger than RAM will be scanned at maximum I/O speed rather than at maximum DIMM speed...
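As a rough check of where a given count sits on that spectrum, something like the following sketch shows how much of the scan was served from shared buffers versus read from disk (the table name is just a placeholder, and the BUFFERS option needs 9.0 or later):

    -- Placeholder table name; the interesting part is the Buffers line.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM big_table;
    -- "shared hit=..." pages were found in RAM;
    -- "shared read=..." pages were pulled from disk at I/O speed.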
There are things you can do to radically improve I/O throughput - e.g. a pair of AMC or ARECA 12-slot RAID cards set up as RAID 10 and tuned properly should give you a maximum sequential throughput of something like 12 * 100 MB/s = 1.2 GB/s. So your example table (estimated at 2 GB) should be able to be counted by Postgres in about 3-4 seconds - 2 GB / 1.2 GB/s is roughly 1.7 s of pure I/O, with the rest being per-tuple processing overhead...
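To sanity-check that estimate against a real table, pg_relation_size() reports the heap size actually being scanned (again, the table name is just a placeholder); dividing that by your array's measured sequential read rate gives the I/O floor for the count:

    -- Placeholder table name; divide the reported size by your
    -- measured sequential throughput for a lower bound on scan time.
    SELECT pg_size_pretty(pg_relation_size('big_table'));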
This assumes a more capable machine than the one you are testing on, I suspect.

Cheers

Mark