Sorry for not responding directly to your question and for changing the subject ... ;-)

On 4 March 2011 18:18, Landreville <landreville@xxxxxxxxxxxxxxxxx> wrote:
> That is partitioned into about 3000 tables by the switchport_id (FK to
> a lookup table), each table has about 30 000 rows currently (a row is
> inserted every 5 minutes into each table).

Does such partitioning really make sense? My impression is that the biggest benefit of table partitioning is keeping old "inactive" data out of the caches. If so, it doesn't seem to make much sense to split a table into 3000 active partitions ... unless, maybe, almost all queries go towards a specific partition.

According to http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html ...

"Query performance can be improved dramatically in certain situations, particularly when most of the heavily accessed rows of the table are in a single partition or a small number of partitions. The partitioning substitutes for leading columns of indexes, reducing index size and making it more likely that the heavily-used parts of the indexes fit in memory."

"All constraints on all partitions of the master table are examined during constraint exclusion, so large numbers of partitions are likely to increase query planning time considerably. Partitioning using these techniques will work well with up to perhaps a hundred partitions; don't try to use many thousands of partitions."

We have started an archiving project internally in our company, since our database is outgrowing the available memory. I'm advocating that we look into table partitioning before we do the archiving, though that seems to be out of scope for the project group handling the archiving. I'm not sure whether I should keep nagging about it or just forget about it ;-)

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
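
P.S. For what it's worth, a minimal sketch of the kind of time-based, inheritance-style partitioning that documentation page describes, which is what keeps old rows out of the cache. The table and column names here are made up for illustration, not taken from the original poster's schema:

```sql
-- Hypothetical parent table (names are illustrative only).
CREATE TABLE measurement (
    switchport_id integer     NOT NULL,
    logdate       timestamptz NOT NULL,
    value         bigint
);

-- One child table per month. The CHECK constraint is what lets
-- constraint exclusion skip partitions whose date range cannot
-- match the query's WHERE clause.
CREATE TABLE measurement_2011_03 (
    CHECK (logdate >= DATE '2011-03-01' AND logdate < DATE '2011-04-01')
) INHERITS (measurement);

-- With constraint_exclusion enabled, a query such as
--   SELECT * FROM measurement WHERE logdate >= DATE '2011-03-01';
-- only scans the recent partition(s), so the hot data and its
-- indexes stay small enough to remain in memory.
```

With a scheme like this you end up with a few dozen partitions rather than thousands, which stays within the "up to perhaps a hundred partitions" guidance quoted above.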