On Thu, Jan 8, 2015 at 11:05 AM, girish R G peetle <giri.anamika0@xxxxxxxxx> wrote:
Hi all,

We have a customer with a 1TB database on a production server. They are trying a dump-based backup of this large database using the following command:

pg_dump -U <User> -Fc -b --port=<Port> '<Db-Name>'

The dump rate is around 12 GB/hr, so the backup will take a long time to complete, and this is affecting their production server. Is there a way to increase the dump data rate?

PostgreSQL version : 9.2.4
Platform : Linux

Thanks
Hi Girish,
Since the database is very large, tweaking a few server parameters will improve pg_dump performance only to some extent. The pg_dump -j (parallel jobs) option would have helped, but it was introduced in PostgreSQL 9.3 and your server is unfortunately on 9.2.
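For reference, once you are able to upgrade to 9.3 or later, a parallel dump would look roughly like this (the job count of 4 and the output path are placeholders; note that -j requires the directory output format, -Fd):

# Parallel dump with 4 worker jobs writing into a directory-format archive
pg_dump -U <User> --port=<Port> -Fd -j 4 -f /path/to/backup_dir '<Db-Name>'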
Another option to speed things up is to run several pg_dump processes in parallel at the database or schema level; however, this would increase the load on the server. A rough sketch is below.
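As a sketch, assuming the database has two schemas (schema1 and schema2 are placeholder names), you could dump them concurrently. Keep in mind that each pg_dump runs in its own transaction, so the per-schema dumps are not guaranteed to be consistent with each other:

# Dump each schema in its own background pg_dump process
pg_dump -U <User> --port=<Port> -Fc -n schema1 -f schema1.dump '<Db-Name>' &
pg_dump -U <User> --port=<Port> -Fc -n schema2 -f schema2.dump '<Db-Name>' &
wait  # block until both background dumps finish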
For a database this size, streaming replication is the best solution: it reduces downtime, data loss, and recovery time, and it also lets you take physical and logical backups from a replica instead of the primary, avoiding the impact on the primary server.
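Once a replica is in place (the hostname replica-host below is a placeholder), the backups could be taken from it along these lines. Note that a long-running pg_dump on a hot standby can be cancelled by replication conflicts unless the standby's conflict-related settings are tuned to allow it:

# Physical base backup taken from the replica
pg_basebackup -h replica-host -p <Port> -U <User> -D /path/to/basebackup -P

# Logical dump pointed at the replica instead of the primary
pg_dump -h replica-host -p <Port> -U <User> -Fc -b -f db.dump '<Db-Name>'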
Hope this helps.