Hi Ron,

I have some pg_dump test results, for reference only 😊

-- [ENV]

Intel(R) Core(TM) i5-4250U CPU @ 1.30GHz | SSD 120GB | 8 GB memory
PostgreSQL 9.6.8

-- [DATA]

My database has 7.2 GB of random data:

postgres=# select pg_size_pretty(pg_database_size('postgres'));
 pg_size_pretty
----------------
 7201 MB
(1 row)

-- [Test Results]

 command                                   | export_time | output_size
-------------------------------------------+-------------+-------------
 pg_dump postgres > outfile.sql            | 16m23s      | 6.3 GB
 pg_dump postgres | gzip > outfile.gz      | 5m27s       | 2.4 GB
 pg_dump -Fc postgres > outfile.dump       | 5m33s       | 2.4 GB
 pg_dump -Fc -Z 9 postgres > outfile.dump  | 11m59s      | 2.4 GB
 pg_dump -Ft postgres > outfile.dump       | 2m43s       | 6.3 GB
 pg_dump -Fd postgres -f dumpdir           | 5m17s       | 2.4 GB
 pg_dump -Fd -j 4 postgres -f dumpdir      | 2m50s       | 2.4 GB
(7 rows)

The less data you send over the network, the better. I would use one of the
compressed methods: gzip, -Fc, or -Fd (-Fd with -j 4 was the fastest of the
compressed runs; note that -Ft is a plain tar archive and is not compressed).
A couple of pipe/restore sketches follow below the quoted message.

--
Jack Gao
jackgo73@xxxxxxxxxxx

> -----Original Message-----
> From: Ron <ronljohnsonjr@xxxxxxxxx>
> Sent: Tuesday, April 17, 2018 9:44 AM
> To: Adrian Klaver <adrian.klaver@xxxxxxxxxxx>; pgsql-general
> <pgsql-general@xxxxxxxxxxxxxx>
> Subject: Re: pg_dump to a remote server
>
>
>
> On 04/16/2018 07:18 PM, Adrian Klaver wrote:
> > On 04/16/2018 04:58 PM, Ron wrote:
> >> We're upgrading from v8.4 to 9.6 on a new VM in a different DC. The dump
> >> file will be more than 1TB, and there's not enough disk space on the
> >> current system for the dump file.
> >>
> >> Thus, how can I send the pg_dump file directly to the new server while
> >> the pg_dump command is running? NFS is one method, but are there others
> >> (netcat, rsync)? Since it's within the same company, encryption is not
> >> required.
> >
> > Maybe?:
> >
> > pg_dump -d test -U postgres -Fc | ssh aklaver@arkansas 'cat > test_cat.out'
>
> That looks promising. I could then "pg_restore -jX".
>
> --
> Angular momentum makes the world go 'round.
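
PS, replying below the quote: the pipe Adrian suggested combines nicely with
the compressed formats above, so no dump file ever lands on the source host.
A minimal sketch, assuming passwordless ssh; the user, host, and paths are
placeholders:

  # custom format is already compressed, so only ~1/3 of the bytes
  # cross the network (2.4 GB vs 6.3 GB in my tests)
  pg_dump -Fc postgres | ssh user@newserver 'cat > /data/outfile.dump'

  # plain format piped through gzip works the same way
  pg_dump postgres | gzip | ssh user@newserver 'cat > /data/outfile.sql.gz'

One caveat: -Fd -j 4, the fastest compressed run above, writes a directory
and cannot be sent to stdout, so for that format you would dump to a local
or NFS directory first and rsync it across afterwards.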
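
On the "pg_restore -jX" point: parallel restore works with the custom (-Fc)
and directory (-Fd) archive formats, so the -Fc dump above restores with
multiple jobs (the gzipped plain dump would instead go through
gunzip -c | psql). A sketch; the job count and database name are just
examples, tune -j to the new server's cores:

  # restore on the new server with 4 parallel jobs
  pg_restore -j 4 -d postgres /data/outfile.dump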