Erik Jones <erik@xxxxxxxxxx> writes:
> Sure.  I've attached an archive with the full memory context and
> error for each.  Note that I'm already 99% sure that this is due to
> our exorbitantly large relation set which is why I think pg_dump's
> catalog queries are running out of work_mem (currently at just over
> 32MB).

work_mem doesn't seem to be your problem --- what it looks like to me
is that it's CacheMemoryContext and its subsidiary contexts that are
growing to unreasonable size, no doubt because of all the relcache
entries for all those tables pg_dump has to touch.  I'm wondering a bit
why CacheMemoryContext has so much free space in it, but even if it had
none you'd still be at risk.  There isn't any provision in the current
backend to limit the number of relcache entries, so eventually you're
going to run out of space if you have enough tables.

Even so, you seem to be well under 1GB in the server process.  How much
RAM is in the machine?  Are you sure the postmaster is being launched
under "ulimit unlimited"?  If it's a 32-bit machine, maybe you need to
back off shared_buffers or other shared-memory size parameters so that
more address space is left for backend private memory.

In the long run you probably ought to rethink having so many tables;
that doesn't sound like great database design to me.  A possible
stopgap answer is to be selective about which tables get dumped per
pg_dump run, though I'm worried about the risk of leaving some out
entirely.

			regards, tom lane
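
A quick way to confirm the scale of the problem is to look at how many
relations the relcache could end up tracking, and at the memory-related
settings currently in effect.  This is only a sketch: "mydb" is a
placeholder database name, and the ulimit check has to be run as the
user that actually starts the postmaster.

    # How many relations (tables, indexes, TOAST tables, ...) exist;
    # every one that pg_dump touches gets a relcache entry in its backend.
    psql -d mydb -c "SELECT relkind, count(*) FROM pg_class GROUP BY relkind;"

    # Memory-related settings currently in effect.
    psql -d mydb -c "SHOW work_mem;"
    psql -d mydb -c "SHOW shared_buffers;"

    # Limits in effect for the account that launches the postmaster.
    ulimit -a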
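
As a rough sketch of the stopgap of splitting the dump across several
pg_dump runs, one schema at a time ("mydb" is again a placeholder, and
this assumes the tables are spread across ordinary user schemas):

    # Dump each non-system schema in its own pg_dump run, so no single
    # backend has to build relcache entries for every table at once.
    for schema in $(psql -At -d mydb -c "SELECT nspname FROM pg_namespace WHERE nspname !~ '^pg_' AND nspname <> 'information_schema';")
    do
        pg_dump -n "$schema" -Fc -f "mydb.$schema.dump" mydb
    done

Note that each run takes its own snapshot, so the per-schema dumps are
not mutually consistent the way a single whole-database pg_dump would
be, and any schema the query above misses would be left out of the
backup entirely.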