Arnaud Lesauvage wrote:
Tino Wildenhain wrote:
personally I think the WAL approach is by far easier
to set up and maintain - the pg_dump is in fact easy,
but the restore to another database can be tricky
if you want it unattended and bullet-proof at the
same time.
I'll have to study this more in-depth then.
If I got it right, the procedure would be:
- wal archiving enabled
- base backup once a day (pg_start_backup, copy the 'data' directory,
pg_stop_backup)
I'd think you can skip that and just do it once at the very beginning.
But if you want to use the WAL files to recover a third, new system,
this would be a good approach.
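The archiving and base-backup steps above could look roughly like this. A minimal sketch, assuming rsync is available, the standby host is reachable as `standby`, and the `/var/lib/pgsql/...` paths exist (host name and paths are assumptions, not from the thread):

```shell
# postgresql.conf on the master (assumed paths): ship each finished
# WAL segment to the standby's archive directory.
#   archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

# Daily base backup, run as the postgres user (e.g. from cron):
psql -c "SELECT pg_start_backup('nightly');"
rsync -a --exclude=pg_xlog /var/lib/pgsql/data/ \
      standby:/var/lib/pgsql/base_backup/
psql -c "SELECT pg_stop_backup();"
```

Between pg_start_backup() and pg_stop_backup() it is safe to copy the data directory with ordinary file tools; any changes made during the copy are repaired later from the archived WAL.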
- create a restore script to be run on the second server, which would:
- copy the backup to the 'data' directory
- copy the WAL files to the 'pg_xlog' directory
- create the recovery.conf in the data directory (maybe it should
always stay there)
- start the postmaster
Then anyone could just run this script in case of a failure of the
master server to have an up-to-date database running.
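A minimal sketch of such a restore script, assuming the base backup and WAL segments were shipped to the paths used earlier (all paths are assumptions):

```shell
#!/bin/sh
# Unattended restore on the backup server; all paths are assumptions.
set -e
DATA=/var/lib/pgsql/data

# 1. Put the base backup in place of the data directory
rm -rf "$DATA"
cp -a /var/lib/pgsql/base_backup "$DATA"

# 2. Make sure pg_xlog exists for recovery to work in
mkdir -p "$DATA/pg_xlog"

# 3. Tell the postmaster how to fetch each archived segment
cat > "$DATA/recovery.conf" <<'EOF'
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
EOF

# 4. Start the postmaster; it replays WAL until the archive is exhausted
pg_ctl -D "$DATA" start
```

When the postmaster finds recovery.conf at startup it enters recovery mode, replays all segments the restore_command can supply, then renames the file to recovery.done and opens for normal connections.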
Actually you would let the (master-) server run the script
and have it trigger the copy and import of each WAL segment as
it gets ready. This way your backup server stays very close
to the current state and you don't lose much if the first
machine suddenly dies completely.
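One common way to keep the standby continuously replaying is a restore_command wrapper that blocks until the next segment arrives, instead of failing when it is not there yet. A hypothetical sketch (script name, archive path, and poll interval are all assumptions):

```shell
#!/bin/sh
# Hypothetical wrapper, referenced from recovery.conf as:
#   restore_command = '/usr/local/bin/wait_for_wal.sh %f %p'
# Blocks until the requested WAL segment shows up in the archive,
# so the standby keeps replaying segments as the master ships them.
SEG="$1"    # %f: segment file name requested by the postmaster
DEST="$2"   # %p: path the postmaster wants the segment copied to
ARCHIVE=/var/lib/pgsql/wal_archive   # path is an assumption

while [ ! -f "$ARCHIVE/$SEG" ]; do
    sleep 5
done
cp "$ARCHIVE/$SEG" "$DEST"
```

With this in place the standby never finishes recovery on its own; to fail over you stop the wait loop (or remove the script) so recovery completes and the server opens for connections.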
Then with a script that would change my DNS so that
mypgserver.domain.tld (used in ODBC connection string) points to CNAME
mybackupserver.domain.tld instead of CNAME mymasterserver.domain.tld,
getting back to production would be quite easy...?
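If the zone allows dynamic updates, that CNAME swap could be scripted with BIND's nsupdate. A sketch under assumptions: the nameserver name, TTL, and key file below are all hypothetical.

```shell
# Repoint mypgserver.domain.tld at the backup server via dynamic DNS
# update (nameserver, TTL, and key file are assumptions).
nsupdate -k /etc/bind/Kfailover.key <<'EOF'
server ns1.domain.tld
update delete mypgserver.domain.tld. CNAME
update add mypgserver.domain.tld. 60 CNAME mybackupserver.domain.tld.
send
EOF
```

Note that clients (and any intermediate resolvers) will keep using the old record until its TTL expires, so a short TTL on that CNAME helps failover time.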
I guess the users would have to start over anyway. So it's easiest if
you provide a copy of the app with the other connection string and,
if the first server dies, signal the users to just close the first
application and start the backup application.
Regards
Tino