pg_dump to a remote server

We're upgrading from v8.4 to 9.6 on a new VM in a different DC.  The dump file will be more than 1 TB, and there isn't enough disk space on the current system to hold it.

So, how can I send the pg_dump output directly to the new server while the dump is running?  NFS is one option, but are there others (netcat, rsync)?  Since both machines are inside the same company network, encryption is not required.
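
For the sake of discussion, here's roughly what I'm picturing; "newhost", "oldhost", "olddb", the port and the paths are made up, not our real names:

    # Option A: pipe pg_dump over ssh, nothing lands on the old server's disk
    pg_dump -Fc olddb | ssh newhost 'cat > /data/olddb.dump'

    # Option B: netcat (exact flags vary by netcat flavor)
    #   on the new server:  nc -l 9999 > /data/olddb.dump
    #   on the old server:  pg_dump -Fc olddb | nc newhost 9999

    # Option C: run 9.6's pg_dump on the new server and pull the data
    # across the network from the old 8.4 server
    pg_dump -h oldhost -Fc olddb > /data/olddb.dump

Is one of these clearly preferable to the others?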

Or would it be better to install both 8.4 and 9.6 on the new server (can I even install 8.4 on RHEL 6.9?), rsync the live database across and then set up log shipping, and when it's time to cut over, do an in-place pg_upgrade?
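
If that second route makes more sense, I assume the cutover on the new box would end up looking something like this (binary and data directory paths are guesses based on stock packaging, not something I've verified for 8.4 on RHEL 6.9):

    # run 9.6's pg_upgrade against the synced 8.4 data directory,
    # with both sets of binaries installed on the new server
    /usr/pgsql-9.6/bin/pg_upgrade \
        -b /usr/pgsql-8.4/bin -B /usr/pgsql-9.6/bin \
        -d /var/lib/pgsql/8.4/data -D /var/lib/pgsql/9.6/data \
        --link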

(Because this is a batch system, we can apply the data input files to bring the new database up to "equality" with the 8.4 production system.)

Thanks

--
Angular momentum makes the world go 'round.
