
Re: pg_basebackup cannot compress to STDOUT


 



On 5/8/2020 11:51 PM, Paul Förster wrote:

Hi Admin,

On 08. May, 2020, at 21:31, Support <admin@xxxxxxxxxxxx> wrote:
2) Command run?
ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -D- | pigz -c -p2 " | pigz -cd -p2 | tar -xf- -C /usr/local/pgsql/data
I don't get it, sorry. Do I understand you correctly that you want an online backup of a *remotely* running PostgreSQL instance on your local machine?

If so, why not just let pg_basebackup connect remotely and let it do its magic? Something like this:

$ mkdir -p /usr/local/pgsql/data
$ cd /usr/local/pgsql/data
$ pg_basebackup -D /usr/local/pgsql/data -Fp -P -v -h nodeXXX -p 5432 -U replicator
$ pg_ctl start

You'd have to have a role with replication privileges (or superuser), and you'd have to adapt the host and port, of course.
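As a minimal sketch of setting up such a role on the primary (the role name "replicator", the password, and the pg_hba.conf line are assumptions, not from the thread):

```shell
# On the primary, create a dedicated role that may open replication connections
psql -U postgres -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';"

# pg_hba.conf on the primary must then allow replication connections from the
# backup host, e.g. (adjust the address range and auth method to your setup):
#   host  replication  replicator  10.0.0.0/24  scram-sha-256
# Reload the server afterwards:
psql -U postgres -c "SELECT pg_reload_conf();"
```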

No need to take care of any WAL files manually; pg_basebackup handles all of that. The only real drawback is that if you have tablespaces, you'd have to create all the tablespace directories beforehand, which is why we dropped tablespaces again after initially trying them.
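One way around pre-creating the tablespace directories at their original absolute paths is pg_basebackup's -T/--tablespace-mapping option. A sketch, assuming a tablespace at /srv/ts1 on the primary (the paths are hypothetical):

```shell
# Relocate the primary's tablespace /srv/ts1 to a local directory during the
# backup instead of requiring the identical path to exist on the backup host.
pg_basebackup -h nodeXXX -p 5432 -U replicator -Fp -P -v \
    -D /usr/local/pgsql/data \
    -T /srv/ts1=/usr/local/pgsql/ts1
```

One -T option is needed per tablespace; both sides of the mapping must be absolute paths.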

That's basically how I create async replicas on our site, which is why I additionally add -R to the above command.

Cheers,
Paul

The point of my command above is to speed up the transfer by sending a single compressed stream over the network.
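A variant of the same idea, sketched with pg_basebackup's built-in gzip compression (-z) instead of piping through pigz on the remote side. Note that writing the tar to stdout (-D-) requires that WAL is not streamed, so -X fetch is given explicitly; this loses pigz's multi-threading but needs one less remote process:

```shell
# Compress on the server with pg_basebackup's own gzip (-z), ship the single
# tar stream over ssh, and unpack it locally. -X fetch collects WAL at the end
# of the backup, since streamed WAL cannot go into a tar written to stdout.
ssh postgres@nodeXXX "pg_basebackup -h /run/postgresql -Ft -z -X fetch -D-" \
    | tar -xzf- -C /usr/local/pgsql/data
```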





