
Improving pg_dump performance

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that needs to be migrated to a new data center and then restored to v9.6.9.

The database has many large tables full of bytea columns containing PDF images, so the dump file is going to be more than 2x larger than the existing data/base directory...


The command is:
$ pg_dump -v -Z0 -Fc $DB --file=${TARGET}/${DATE}_${DB}.dump 2> ${DATE}_${DB}.log

Using -Z0 because pdf files are already compressed.

Because of an intricate web of FK constraints and partitioned tables, the customer doesn't trust a set of "partitioned" backups using --table= and regular expressions (the names of those big tables all have the year in them), and so I am stuck with a single-threaded backup.
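For reference, what I proposed looks roughly like the following (the table names here are placeholders; the real ones have the year embedded), with the dumps backgrounded so they run concurrently:

$ pg_dump -v -Z0 -Fc --table='images_2016*' $DB --file=${TARGET}/${DATE}_${DB}_2016.dump &
$ pg_dump -v -Z0 -Fc --table='images_2017*' $DB --file=${TARGET}/${DATE}_${DB}_2017.dump &
# everything that is not one of the big per-year tables
$ pg_dump -v -Z0 -Fc --exclude-table='images_*' $DB --file=${TARGET}/${DATE}_${DB}_rest.dump &
$ wait

Each pg_dump takes its own snapshot, so the pieces are only mutually consistent if nothing writes to the database while they run, which I gather is the heart of the customer's objection.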

Are there any config file elements that I can tweak to make it run faster (extra points for not having to restart postgres), or is there deeper knowledge of how pg_restore works that I could use to convince them to let me do the partitioned backups?
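For context, on the 9.6 side I was planning something along these lines ($NEWDB and the -j worker count are just placeholders):

$ pg_restore --list ${DATE}_${DB}.dump > ${DB}.toc    # write out the archive's table of contents
# reorder or comment out entries in ${DB}.toc as needed, then:
$ pg_restore -j 8 --use-list=${DB}.toc -d $NEWDB ${DATE}_${DB}.dump

The TOC listing at least makes the restore order visible (data loaded before indexes and FK constraints are created), which might help with the trust issue.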

Lastly, is there any way to keep the backups from being so large (maybe by using the --binary-upgrade option, even though the man page says it is for "in-place upgrades only")?

--
Angular momentum makes the world go 'round.



