On Mon, May 17, 2010 at 12:04 AM, Jayadevan M <Jayadevan.Maymala@xxxxxxxxxx> wrote:
> Hello all,
> I was testing how much time a pg_dump backup would take to get restored.
> Initially, I tried it with psql (on a backup taken with pg_dumpall). It took
> me about one hour. I felt that I should target for a recovery time of 15
> minutes to half an hour. So I went through the blogs/documentation etc and
> switched to pg_dump and pg_restore. I tested only the database with the
> maximum volume of data (about 1.5 GB). With
> pg_restore -U postgres -v -d PROFICIENT --clean -Fc proficient.dmp
> it took about 45 minutes. I tried it with
> pg_restore -U postgres -j8 -v -d PROFICIENT --clean -Fc proficient.dmp
> Not much improvement there either. Have I missed something, or will 1.5 GB of
> data on a machine with the following configuration take about 45 minutes? There
> is nothing else running on the machine consuming memory or CPU. Out of 300
> odd tables, about 10 tables have millions of records; the rest all have a
> few thousand records at most.
>
> Here are the specs (a PC-class machine):
>
> PostgreSQL 8.4.3 on i686-pc-linux-gnu
> CentOS release 5.2
> Intel(R) Pentium(R) D CPU 2.80GHz
> 2 GB RAM
> Storage is local disk.
>
> PostgreSQL parameters (what I felt are relevant):
> max_connections = 100
> shared_buffers = 64MB
> work_mem = 16MB
> maintenance_work_mem = 16MB
> synchronous_commit = on

Do the big tables have lots of indexes? If so, you should raise
maintenance_work_mem.

Peter

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
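For a one-off restore, maintenance_work_mem can be raised for just that session rather than edited in postgresql.conf: pg_restore connects through libpq, which passes the PGOPTIONS environment variable to the server backend. A minimal sketch, reusing the command from the post above; the 512MB value is an assumed figure for a 2 GB RAM box, not a recommendation from the thread:

```shell
# Raise maintenance_work_mem only for this restore's connections,
# via libpq's PGOPTIONS environment variable. Index builds during
# the restore sort in memory instead of spilling to disk.
# 512MB is an assumed value for a 2 GB RAM machine; tune as needed.
PGOPTIONS="-c maintenance_work_mem=512MB" \
    pg_restore -U postgres -j8 -v -d PROFICIENT --clean -Fc proficient.dmp
```

The setting reverts automatically when the restore's connections close, so the server's normal configuration is untouched.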