On 07/23/2018 12:23 AM, Ron wrote:
> Hi,
>
> We've got an old (v8.4.17, thus no parallel backups) 2.9TB database that
> needs to be migrated to a new data center and then restored to v9.6.9.
>
> The database has many large tables full of bytea columns containing PDF
> images, and so the dump file is going to be more than 2x larger than the
> existing data/base...
>
> The command is:
>
> $ pg_dump -v -Z0 -Fc $DB --file=${TARGET}/${DATE}_${DB}.dump 2> ${DATE}_${DB}.log
>
> Using -Z0 because PDF files are already compressed.
This is not consistent with your statement that the dump file will
double in size over the database size. That would imply the data is being
decompressed on dumping. I would test -Z(>0) on one of the tables with
PDFs to determine whether it would help or not.
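
A quick way to check would be to dump one of the PDF-heavy tables twice,
once uncompressed and once compressed, and compare the sizes (and run
times). Rough sketch, with a made-up table name; substitute one of your
real yearly tables:

$ pg_dump -Fc -Z0 --table=pdf_images_2017 $DB --file=/tmp/test_z0.dump
$ pg_dump -Fc -Z5 --table=pdf_images_2017 $DB --file=/tmp/test_z5.dump
$ ls -lh /tmp/test_z0.dump /tmp/test_z5.dump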
> Because of an intricate web of FK constraints and partitioned tables,
> the customer doesn't trust a set of "partitioned" backups using --table=
> and regular expressions (the names of those big tables all have the year
> in them), and so I am stuck with a single-threaded backup.
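
For reference, the kind of split you are describing would presumably look
something like the sketch below (the table name patterns are invented, and
note that pg_dump's -t/-T take psql-style patterns rather than full
regular expressions). The obvious caveat is that each dump is its own
snapshot, which I assume is exactly why the customer is nervous about the
FK constraints:

$ pg_dump -v -Z0 -Fc --table='public.pdf_*_2017' $DB --file=${TARGET}/${DATE}_${DB}_2017.dump &
$ pg_dump -v -Z0 -Fc --table='public.pdf_*_2018' $DB --file=${TARGET}/${DATE}_${DB}_2018.dump &
$ pg_dump -v -Z0 -Fc --exclude-table='public.pdf_*' $DB --file=${TARGET}/${DATE}_${DB}_rest.dump &
$ wait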
> Are there any config file elements that I can tweak (extra points for
> not having to restart postgres) to make it run faster, or is there deeper
> knowledge of how pg_restore works that I could use to convince them to
> let me do the partitioned backups?
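
On the restore side, it is worth noting that a single custom-format (-Fc)
dump can already be restored with multiple jobs on 9.6 via pg_restore
--jobs, so a single-threaded backup does not force a single-threaded
restore. Roughly (job count and log name are only examples):

$ pg_restore -v -j 8 -d $DB ${TARGET}/${DATE}_${DB}.dump 2> ${DATE}_${DB}_restore.log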
> Lastly, is there any way to not make the backups so large (maybe by
> using the --binary-upgrade option, even though the man page says,
> "in-place upgrades only")?
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx