On Thu, May 24, 2012 at 12:06 AM, Hugo <Nabble> <hugo.tech@xxxxxxxxx> wrote:
How many total relations do you have? I don't know whether there is a limit on the number of schemas, but I suspect that when you went from one schema to 20,000 schemas, you also went from N relations to roughly 20,000 × N relations.

Somewhere between 100,000 and 1 million total relations, Postgres starts to have trouble. See this thread:

http://permalink.gmane.org/gmane.comp.db.postgresql.performance/33254

(Why is it that Google can't find these archives on postgresql.org?)
Craig
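A quick way to check the relation count Craig asks about is to query the system catalogs directly. This is a sketch, not from the thread; the database name "mydb" is a placeholder, and the join against pg_namespace is used because the regnamespace cast did not exist yet in 9.0:

```shell
# Count every relation (tables, indexes, sequences, TOAST tables, ...)
# that pg_dump has to consider. "mydb" is a placeholder database name.
psql -d mydb -t -c "SELECT count(*) FROM pg_class;"

# Break the count down by schema to spot outliers. Works on
# PostgreSQL 9.0, which predates the regnamespace cast added in 9.5.
psql -d mydb -c "
  SELECT n.nspname AS schema, count(*) AS relations
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  GROUP BY n.nspname
  ORDER BY relations DESC
  LIMIT 10;"
```

If the first query returns several hundred thousand rows, the catalog size rather than the 40 GB of data is the likely culprit.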
Hi everyone,

We have a production database (PostgreSQL 9.0) with more than 20,000 schemas and about 40 GB of data. In the past we kept all that information in a single schema, and pg_dump worked just fine (2-3 hours to dump everything). Then we decided to split the database into schemas, which makes a lot of sense for the kind of information we store and for our plans for the future.

The problem now is that pg_dump takes more than 24 hours to finish, so we can no longer take the consistent daily backups we had before. When I dump a single schema with almost nothing in it, it takes 12 minutes; a big schema with lots of data takes 14 minutes. So pg_dump is clearly getting lost among so many schemas rather than being slowed down by the data itself.

The load on the machine is low (it is a hot-standby replica), and memory, cache, shared_buffers and everything else are well configured. The performance of the database itself is good; it is only pg_dump that is inefficient for this task. I found an old discussion from 2007 that seems closely related to this problem:

http://postgresql.1045698.n5.nabble.com/5-minutes-to-pg-dump-nothing-tp1888814.html

It seems that pg_dump hasn't been tested with such a huge number of schemas. Does anyone have a solution or suggestions? Do you know of any patches for this case?
Thanks in advance,
Hugo
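To narrow down where those 12 fixed minutes go, one can time a single-schema dump and watch pg_dump's verbose step-by-step output. This is a sketch; "app_schema" and "mydb" are placeholder names:

```shell
# Writing to /dev/null isolates pg_dump's own work (catalog reads,
# lock acquisition, data COPY) from disk-write time.
time pg_dump -n app_schema -f /dev/null mydb

# -v logs each phase (reading schemas, reading user-defined tables,
# dumping contents), showing whether the time is spent in catalog
# processing before any data is copied.
pg_dump -v -n app_schema -f /dev/null mydb
```

If the verbose run sits in the "reading" phases for most of those 12 minutes regardless of schema size, that points at per-object catalog queries rather than data volume.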
--
View this message in context: http://postgresql.1045698.n5.nabble.com/pg-dump-and-thousands-of-schemas-tp5709766.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)