Re: pg_dump and thousands of schemas

Tatsuo Ishii <ishii@xxxxxxxxxxxxxx> writes:
> So I did a quick test with old PostgreSQL 9.0.2 and current (as of
> commit 2755abf386e6572bad15cb6a032e504ad32308cc). In a freshly
> initdb'd database I created 100,000 tables, each with two integer
> attributes, one of them a primary key. Creating the tables was
> reasonably fast, as expected (18-20 minutes). This produced a 1.4GB
> database cluster.

> pg_dump dbname >/dev/null took 188 minutes on 9.0.2, which matches
> the long runtime the customer complained about. And current? Well, it
> took 125 minutes. ps showed that most of the time was spent in the backend.
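
[The exact test script wasn't posted; a rough reproduction might look
like the following. The database name and table layout here are
invented for illustration.]

    # Sketch of the setup described above: 100,000 two-column tables,
    # one column a primary key, in a fresh database.
    createdb manytables
    for i in $(seq 1 100000); do
        echo "CREATE TABLE t$i (id integer PRIMARY KEY, val integer);"
    done | psql -q manytables

    # Timing the dump as in the numbers quoted above:
    time pg_dump manytables >/dev/null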

Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
management in the server.  What I fixed so far on the pg_dump side
should be enough to let partial dumps run at reasonable speed even if
the whole database contains many tables.  But if pg_dump is taking
AccessShareLock on lots of tables, there's still a problem.
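
[A sketch of how to watch that lock buildup from a second session
while the dump runs, using the hypothetical database name from the
setup sketch above:]

    # Count relation-level locks held while pg_dump is running; with
    # 100,000 tables you'd expect roughly that many AccessShareLocks.
    psql manytables -c "SELECT mode, count(*)
                        FROM pg_locks
                        WHERE locktype = 'relation'
                        GROUP BY mode
                        ORDER BY count(*) DESC;"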

			regards, tom lane


