Greetings,

* Edson Carlos Ericksson Richter (richter@xxxxxxxxxxxxxx) wrote:
> No backup solution (no matter which one you choose) is 100% guaranteed:
> your disks may fail, your network may fail, your memory may fail, files
> may get corrupted - so, set up a regular "restore" to a separate "test
> backup server" on a daily basis. Having a virtual server for this purpose
> has minimal budget impact, if any at all, and you save your sanity in
> case of a disaster.

While performing a straight restore is definitely good, to deal with the
risk of in-place corruption in your backup repository (whatever it is),
you really need to do more than that. If the middle of some index gets
corrupted in the backup, you may not notice it on the restore, or even
with casual use of the restored server. That's why robust backup software
really should keep a manifest of all the files in the backup, along with
their checksums, and that manifest should be verified on every restore.

One alternative, if your backup solution doesn't handle this for you, and
if you have page-level checksums enabled for your PG cluster (which I
strongly recommend...), would be to perform the complete restore and then
run pg_verify_checksums (or pg_checksums, depending on version) on the
restored cluster. Note that you should first bring the cluster up and let
WAL replay run at least far enough to reach consistency. That will
hopefully pick up on and report any latent corruption.

Note that running a simple 'pg_dump' on the restored cluster won't check
the page-level checksums in indexes (or check indexes at all), though it
would give you a logical export of the data that should be importable into
a new cluster (assuming you keep the results of the pg_dump, that is..).

Thanks!

Stephen
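P.S. In case a concrete illustration of the manifest-and-checksum idea is
useful, here's a minimal sketch using plain sha256sum over a throwaway
directory. The paths and file names here are made up for the example; a
real backup tool would maintain the manifest for you, and the point is only
that verification compares every restored file against checksums recorded
at backup time:

```shell
set -eu

# Throwaway directory standing in for a backup repository (illustration only).
repo=$(mktemp -d)
echo 'some table data' > "$repo/16384"
echo 'some index data' > "$repo/16385"

# At backup time: record a checksum for every file in the repository.
( cd "$repo" && find . -type f -print0 | sort -z | xargs -0 sha256sum ) > "$repo.manifest"

# At restore time: re-checksum everything and compare against the manifest.
# A corrupted file makes this fail loudly instead of lurking in an index.
( cd "$repo" && sha256sum --check --quiet "$repo.manifest" ) && echo "manifest OK"
```

Flip a byte in one of the files and the --check run fails with a nonzero
exit status, which is exactly the behaviour you want from a restore test.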