On 05/21/2013 06:18 AM, Jeison Bedoya wrote:
Hi people, I have a 400GB database running on a server with 128GB of RAM
and 32 cores, with storage on a SAN over fibre channel. The problem is
that a backup with pg_dumpall takes about 5 hours, and the subsequent
restore takes about 17 hours. Is that a normal time for that process on
that machine, or can I do something to optimize the backup/restore
process?
It would help to know what you wish to solve, e.g. setting up a test/dev
server, testing disaster recovery, deploying to a new server, etc. Also,
are you dumping to a file and then restoring from that file, or piping
the dump straight into the restore?
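(For reference, the piped form looks roughly like this -- the hostnames
are just placeholders:

    pg_dumpall -h oldserver | psql -h newserver -d postgres

versus dumping to a file on disk and restoring from that file later.)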
If you use the custom format in pg_dump *and* are dumping to a file
*and* restoring via pg_restore, you can set the -j flag to somewhat
fewer than the number of cores (though at 32 cores I can't say where
the sweet spot might be) to allow pg_restore to run things like index
recreation in parallel to help your restore speed.
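A rough sketch (database name, path, and job count are just
placeholders). Note the custom format means per-database pg_dump rather
than pg_dumpall, with roles/tablespaces grabbed separately via
pg_dumpall -g, and the target database must already exist for the
restore:

    pg_dumpall -g > /backups/globals.sql
    pg_dump -Fc -f /backups/mydb.dump mydb
    pg_restore -j 8 -d mydb /backups/mydb.dump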
You can also *temporarily* disable fsync while rebuilding the database -
just be sure to turn it back on afterward.
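For example, fsync can be flipped with just a config reload, no restart
needed (the data directory path below is only illustrative):

    # in postgresql.conf:  fsync = off
    pg_ctl reload -D /var/lib/pgsql/9.2/data
    # ... run the restore ...
    # then set fsync = on in postgresql.conf and reload again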
Copying the files is not the recommended method for backups but may work
for certain cases. One is when you can shut down the database so the
whole directory is quiescent while you copy the files. Also, depending
on your SAN features, you *might* be able to do a snapshot of the
running PostgreSQL data directory and use that.
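A minimal sketch of the shut-down-and-copy approach (paths are just
examples):

    pg_ctl stop -D /var/lib/pgsql/9.2/data -m fast
    cp -a /var/lib/pgsql/9.2/data /backups/data-20130521
    pg_ctl start -D /var/lib/pgsql/9.2/data

If you go the SAN snapshot route instead, the snapshot has to be atomic
across everything the cluster writes (data directory, pg_xlog, any
tablespaces) or the copy won't be consistent.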
Postgres version 9.2.2 ...
...has a nasty security issue. Upgrade. Now.
Cheers,
Steve