We’ve been using pg_dump and pg_restore for many years now and they have always worked well for us. However, we are currently undertaking a major db architecture change to partition our tenant data into separate postgres schemas instead of storing all data in the public schema. When we attempt a pg_dump with the --schema-only option against the new architecture, the postmaster process consumes massive amounts of memory until it is killed off (6 GB+ used by a single postmaster process).

Here are the details:

PG version: 9.2.5
Total number of postgres schemas: 248
Total number of relations across all schemas: 53,154

If I perform a --schema-only dump on a DB with only 11 schemas, it succeeds and produces a dump file that is only 7.5 MB. All of our tenant schemas contain exactly the same relations, so things should scale roughly linearly as more tenant schemas are added. I should also mention that there is absolutely no other DB activity while these dumps are running.

Do you have any ideas why this excessive memory growth in the postmaster process occurs during a pg_dump?
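
For reference, the dump invocation is essentially just the following (the database name, output file, and custom format are placeholders here, not necessarily the exact command line we run):

    # schema-only dump of the multi-tenant database (names are placeholders)
    pg_dump --schema-only --format=custom --file=schema_only.dump our_tenant_db

The relation count quoted above comes from a catalog query along these lines (a rough sketch, not the exact statement we ran):

    -- count relations per non-system schema
    SELECT n.nspname AS schema_name, count(*) AS relations
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname NOT LIKE 'pg_%'
      AND n.nspname <> 'information_schema'
    GROUP BY n.nspname
    ORDER BY n.nspname;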