Hi,

I am volume testing a DB model that consists of partitioned tables. The DB has been running for a week and a half now and has built up to approx. 55,000 partition tables of 18,000 rows each, so the root table contains about 1 billion rows.

When I try to do a "select count(*)" on the root table, it does some work for a while, perhaps 5-10 minutes, and then aborts with:

ERROR: out of memory
DETAIL: Failed on request of size 130.

Does anybody have a suggestion as to which parameter I should tune to give it enough memory to perform queries on the root table?

regards
thomas

The last part of the DB log follows; I don't think anything other than the last two lines is relevant:

pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768 used
MdSmgr: 4186112 total in 9 blocks; 911096 free (4 chunks); 3275016 used
LockTable (locallock hash): 2088960 total in 8 blocks; 418784 free (25 chunks); 1670176 used
Timezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR: out of memory
DETAIL: Failed on request of size 130.
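A quick sanity check of the scale involved, using only the figures stated above (55,000 partitions of 18,000 rows each); this is just arithmetic, not a statement about where the memory actually goes:

```python
# Figures taken from the post above.
partitions = 55_000
rows_per_partition = 18_000

total_rows = partitions * rows_per_partition
print(total_rows)  # 990000000 -- roughly the "about 1 billion rows" stated
```

So a count(*) on the root table has to visit on the order of a billion rows spread across 55,000 child tables, each of which the backend must open and lock, which matches the LockTable entry being one of the larger contexts in the log.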