>> We recently fixed a couple of O(N^2) loops in pg_dump, but those covered
>> extremely specific cases that might or might not have anything to do
>> with what you're seeing. The complainant was extremely helpful about
>> tracking down the problems:
>> http://archives.postgresql.org/pgsql-general/2012-03/msg00957.php
>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00225.php
>> http://archives.postgresql.org/pgsql-committers/2012-03/msg00230.php
>
> I'm wondering if these fixes (or today's commit) cover the case of a
> database that has ~100 thousand tables and indexes. One of my customers
> has had trouble running pg_dump against such a database; it takes over
> 10 hours.

So I did a quick test with old PostgreSQL 9.0.2 and current (as of commit
2755abf386e6572bad15cb6a032e504ad32308cc). In a freshly initdb'd database
I created 100,000 tables, each with two integer attributes, one of them a
primary key. Creating the tables was reasonably fast, as expected (18-20
minutes). This created a 1.4GB database cluster.

"pg_dump dbname >/dev/null" took 188 minutes on 9.0.2, which was a pretty
long time, as the customer complained. And with current? It took 125
minutes. ps showed that most of the time was spent in the backend.

Below is the script used to create the tables.

# create tables t100000 .. t1, one psql invocation per table
cnt=100000
while [ $cnt -gt 0 ]
do
    psql -e -p 5432 -c "create table t$cnt(i int primary key, j int);" test
    cnt=`expr $cnt - 1`
done

p.s. You need to increase max_locks_per_transaction before running pg_dump
(I raised it to 640 in my case).
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
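
For reference, a minimal sketch of the dump step, using the same port and
database name as the creation script above; the PGDATA variable, appending
to postgresql.conf, and timing with time(1) are illustrative assumptions,
not necessarily the exact commands that were run.

# Raise the shared lock table size; the default (64) is too small for a
# dump that touches 100,000 tables, and 640 was enough in this test.
echo "max_locks_per_transaction = 640" >> $PGDATA/postgresql.conf
pg_ctl -D $PGDATA restart -w

# Time the dump, discarding the output as in the figures above.
time pg_dump -p 5432 test >/dev/null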