Actually, I do use streaming 🙂
Our scenario is a bit more complex than ordinary WAL archiving or streaming.
With this setup we are able to back up 1.5 TB of data in less than 12 hours, even with geographically distributed servers. Here is how we have production set up:
1) We have streaming replication from production to a report server (geographically distributed)
2) WAL archiving is set up from the standby server to the backup server (same location)
3) From time to time, we take Barman backups from the standby server (same location)
4) Twice a week, we restore every single database on a backup server to test the backups.
Although non-standard, this setup is working really well for our needs.
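For anyone curious, here is a minimal sketch of what steps 1-4 above might translate to. Host names, paths, and the Barman server name ("standby") are placeholders, and it assumes PostgreSQL 9.5/9.6 (recovery.conf era, with archive_mode = always so the standby itself archives); adjust to your versions:

    # (1) Streaming from production to the report/standby server:
    #     recovery.conf on the standby would contain something like:
    #       standby_mode = 'on'
    #       primary_conninfo = 'host=production.example.com user=replicator'

    # (2) WAL archiving from the standby to the backup server, in the
    #     standby's postgresql.conf (archive_mode = always, available
    #     since 9.5, lets a standby archive):
    #       archive_mode = always
    #       archive_command = 'rsync -a %p barman@backup:/var/lib/barman/standby/incoming/%f'

    # (3) Periodic base backups taken with Barman on the backup server:
    barman backup standby

    # (4) Twice-weekly restore test of the latest backup:
    barman recover standby latest /var/lib/pgsql/restore_test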
But not everything shines like gold, and sometimes, for low-traffic servers, Barman complains that not all WAL segments were received.
Then we need to manually execute pg_switch_xlog() on the master, followed by a "barman check-database" on the backup server: this is what we would like to automate.
And it went well with a bash script and cron; at least for 40 databases it is working really well.
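In case it helps, a rough sketch of the kind of script cron could run; MASTER_HOST and BARMAN_SERVER are placeholders, I use Barman's standard "barman check" for the verification step, and pg_switch_xlog() is the pre-10 name (pg_switch_wal() from PostgreSQL 10 on):

    #!/bin/bash
    # Force a WAL switch on the master so low-traffic servers still
    # ship a segment, then re-check the server with Barman.
    set -euo pipefail

    MASTER_HOST=master.example.com   # placeholder
    BARMAN_SERVER=standby            # placeholder Barman server name

    # Close the current WAL segment on the master.
    psql -h "$MASTER_HOST" -U postgres -Atc "SELECT pg_switch_xlog();"

    # Give the archiver a moment to ship the freshly closed segment.
    sleep 60

    # Verify that Barman now sees everything it expects.
    barman check "$BARMAN_SERVER"

Dropped into cron, e.g. hourly:

    0 * * * * /usr/local/bin/wal_switch_check.sh >> /var/log/wal_switch.log 2>&1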
Regards,
Edson