Maybe I need to rethink this and take Jeff's advice. I executed this:
pg_basebackup -h [main server's URL] -U postgres -P -v -X s -D /mnt/dbraid/data
8 hours ago, and it is still only at 1%. Should it be that slow? The database in question is about 750 GB, and both servers are on the same gigabit Ethernet network.
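
To rule out raw network or disk throughput as the bottleneck, a quick sanity check along these lines might help (the "mainserver" host name and the /mnt/dbraid test path are placeholders, not values from this thread):

# Pull 1 GB from the main server over ssh and discard it locally;
# on gigabit Ethernet this should finish in well under a minute.
ssh mainserver 'dd if=/dev/zero bs=1M count=1024' > /dev/null

# Rough sequential write test on the standby's RAID volume.
dd if=/dev/zero of=/mnt/dbraid/ddtest bs=1M count=1024 conv=fdatasync
rm /mnt/dbraid/ddtest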
Chuck Martin
Avondale Software
On Sun, Dec 30, 2018 at 3:28 PM Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
On Sat, Dec 29, 2018 at 2:05 PM Chuck Martin <clmartin@xxxxxxxxxxxxxxxx> wrote:

I thought I knew how to do this, but I apparently don't. I have to set up a new server as a standby for a PG 11.1 server. The main server has a lot more resources than the standby. What I want to do is run pg_basebackup on the main server with the output going to the data directory on the new server.

pg_basebackup consumes few resources on the standby anyway in the mode you are running it, other than network and disk. And those are inevitable given your end goal, so even if you could do what you want, I think it still wouldn't do what you want.

If you really want to spare the network, you can run compression on the server side and then decompress on the standby. Currently you can't compress on the server when invoking it from the standby, so:

pg_basebackup -D - -Ft -X none | pxz | ssh 10.0.1.16 "tar -xJf - -C /somewhere/data_test"

Unfortunately you can't use this along with -X stream or -X fetch.

Really I would probably compress to a file and then use scp/rsync, rather than streaming into ssh. That way, if ssh gets interrupted, you don't lose all the work.

Cheers,

Jeff
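
For reference, a rough sketch of the compress-to-a-file-then-rsync variant Jeff suggests (the 10.0.1.16 address comes from his example; the /backups and /mnt/dbraid paths are placeholders):

# On the main server: write a gzip-compressed tar-format base backup to local disk.
pg_basebackup -U postgres -D /backups/base -Ft -z -X none -P -v

# Copy it to the standby; --partial lets rsync resume an interrupted transfer.
rsync -av --partial --progress /backups/base/ 10.0.1.16:/mnt/dbraid/staging/

# On the standby: unpack into the empty data directory.
tar -xzf /mnt/dbraid/staging/base.tar.gz -C /mnt/dbraid/data

# Because of -X none, the standby still needs WAL from the primary
# (streaming replication or a restore_command) before it can catch up.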