Re: pg_dump to a remote server

On 04/16/2018 07:47 PM, Gao Jack wrote:
-----Original Message-----
From: Ron <ronljohnsonjr@xxxxxxxxx>
Sent: Tuesday, April 17, 2018 7:59 AM
To: pgsql-general <pgsql-general@xxxxxxxxxxxxxx>
Subject: pg_dump to a remote server

We're upgrading from v8.4 to v9.6 on a new VM in a different DC.  The dump
file will be more than 1TB, and there's not enough disk space on the current
system to hold it.

So, how can I stream the pg_dump output directly to the new server while
pg_dump is running?  NFS is one method, but are there others
(netcat, rsync)?  Since it's within the same company, encryption is not
required.
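
A minimal sketch of the streaming approach, assuming ssh access from the old
server to the new one (the host name "newhost", database name "mydb", and
target path are hypothetical; since the target is 9.6, the usual advice is to
run the 9.6 pg_dump against the 8.4 server):

# Stream a compressed custom-format dump over ssh, with no intermediate
# file on the old box:
pg_dump -Fc mydb | ssh newhost 'cat > /data/mydb.dump'

# Or over netcat, unencrypted.  On the new server first:
#   nc -l 9999 > /data/mydb.dump   (BSD netcat; traditional netcat wants -l -p 9999)
# then on the old server:
pg_dump -Fc mydb | nc newhost 9999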

Or would it be better to install both 8.4 and 9.6 on the new server (can I
even install 8.4 on RHEL 6.9?), rsync the live database across, set up log
shipping, and then, when it's time to cut over, do an in-place pg_upgrade?

(Because this is a batch system, we can apply the data input files to bring
the new database up to "equality" with the 8.4 production system.)
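
If that second path is taken, a minimal pg_upgrade sketch, assuming both the
8.4 and 9.6 binaries end up installed on the new server (all paths are
hypothetical RHEL-style defaults; 9.6's pg_upgrade does accept 8.4 as the old
version):

# Run as the postgres user, with both clusters shut down
/usr/pgsql-9.6/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-8.4/bin \
  --new-bindir=/usr/pgsql-9.6/bin \
  --old-datadir=/var/lib/pgsql/8.4/data \
  --new-datadir=/var/lib/pgsql/9.6/data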

Thanks

--
Angular momentum makes the world go 'round.
Hi

https://www.postgresql.org/docs/current/static/backup-dump.html#BACKUP-DUMP-RESTORE

...
...

The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database directly from one server to another, for example:

pg_dump -h host1 dbname | psql -h host2 dbname

But that assumes --format=plain, which will send a whole lot of uncompressed text across the wire.

--
Angular momentum makes the world go 'round.
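
Two hedged ways around the uncompressed plain-text stream of the example
above (host and database names are hypothetical; the target database must
already exist on the new server):

# Compress the plain dump on the wire
pg_dump mydb | gzip | ssh newhost 'gunzip | psql mydb'

# Or use the already-compressed custom format and restore straight from the pipe
pg_dump -Fc mydb | ssh newhost 'pg_restore -d mydb'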



