I wrote:
> ... I'm wondering a bit why
> CacheMemoryContext has so much free space in it, but even if it had none
> you'd still be at risk.

I tried to reproduce this by creating a whole lot of trivial tables and
then pg_dump'ing them:

create table t0 (f1 int primary key);
insert into t0 values(0);
create table t1 (f1 int primary key);
insert into t1 values(1);
create table t2 (f1 int primary key);
insert into t2 values(2);
create table t3 (f1 int primary key);
insert into t3 values(3);
create table t4 (f1 int primary key);
insert into t4 values(4);
create table t5 (f1 int primary key);
insert into t5 values(5);
... (about 17000 tables before I got bored)

I looked at the backend memory stats at the end of the pg_dump run and
found

CacheMemoryContext: 50624864 total in 29 blocks; 608160 free (2 chunks); 50016704 used

which compares awfully favorably to your results of

CacheMemoryContext: 897715768 total in 129 blocks; 457826000 free (2305222 chunks); 439889768 used
CacheMemoryContext: 788990232 total in 147 blocks; 192993824 free (1195074 chunks); 595996408 used

Have you really got 200000+ tables?  Even if you do, the amount of
wasted memory in your runs seems really high.  What PG version is this
exactly?  Can you show us the exact schemas of some representative
tables?

			regards, tom lane
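
If you want to repeat the experiment without hand-writing 17000
statements, something like the sketch below generates the same trivial
schema in one shot.  Note this assumes a server new enough to have DO
blocks and format() (9.0/9.1 or later), which may not match the version
under discussion here; on older servers you'd generate the script
externally instead.

do $$
begin
  for i in 0..16999 loop
    -- same shape as the tables above: one int primary key, one row
    execute format('create table t%s (f1 int primary key)', i);
    execute format('insert into t%s values (%s)', i, i);
  end loop;
end
$$;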
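As for the table count question, a catalog query along these lines
should answer it on any reasonably recent version (it counts ordinary
user tables only, skipping the system schemas):

select count(*)
from pg_class c
     join pg_namespace n on n.oid = c.relnamespace
where c.relkind = 'r'
  and n.nspname not in ('pg_catalog', 'information_schema');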