On 1/12/18 1:12 AM, David Steele wrote:
On 11/30/18 1:49 PM, Achilleas Mantzios wrote:
On 30/11/18 8:22 PM, Evan Bauer wrote:
Achilleas,
I may be over-simplifying your situation, but have you considered
breaking the problem into two pieces? First backing up to a local
drive and then using rsync to move those files over the unreliable
network to the remote site.
Like others who have responded, I can heartily recommend pgBackRest.
But if the network stinks, then I’d break the problem in two and leave
PostgreSQL out of the network equation.
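Roughly, that two-piece approach could look something like this (paths,
host, and user are invented for the sketch; the retry loop is just one
way to cope with a flaky link):

    # 1) take the base backup onto a local drive
    pg_basebackup -h localhost -U replicator -D /backup/local -Ft -z -X stream

    # 2) push the files to the remote site; --partial lets an interrupted
    #    transfer pick up where it left off instead of starting over
    until rsync -a --partial --timeout=60 /backup/local/ backup@remote-site:/backup/pg/
    do
        sleep 60    # link is down, wait and try again
    done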
Pretty good idea, but:
1) those rsync transfers have to be somehow DB-aware, otherwise lots of
things might break: checksums, order of WALs, etc. There would be the
need to write a whole solution, only to end up ... reinventing one of
the established solutions
It's actually perfectly OK to rsync a pgBackRest repository. We've
already done the hard work of interacting with the database and gotten
the backups into a format that can be rsync'd, backed up to tape,
handled with standard enterprise backup tools, etc.
It is common to back up the pgBackRest repo or individual backups (with
--archive-copy enabled), and we have not seen any issues.
BTW, in this context I expect local means in the same data center, not
on the database host.
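As a rough sketch of what that might look like (the stanza name and
remote host are hypothetical; /var/lib/pgbackrest is the default repo
path):

    # take the backup into the local repository; archive-copy=y (or
    # --archive-copy) also copies the WAL needed for consistency into
    # the backup directory itself
    pgbackrest --stanza=main --type=full backup

    # then mirror the whole repository to the remote site
    rsync -a --delete /var/lib/pgbackrest/ backup@remote-site:/var/lib/pgbackrest/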
Great info!
2) the rsync part would go basically unattended, meaning no smart
software would be taking care of it, monitoring it, sending alerts, etc.
Also, we have had our issues with rsync over unreliable networks in the
past, like getting error messages for which Google returns one or no
results (no pgsql stuff, just system scripts). No wonder more and more
PgSQL backup solutions are moving away from rsync.
I agree that this is a concern -- every process needs to be monitored,
and if you can avoid the extra step, that would be best. But if things
start getting complicated, it might be the simpler option.
All well noted! Thank you!
Regards,