Daniel,

I use a custom script that I developed, as we needed a reliable copy to a pair of standby servers. The script knows which WAL segment has been successfully copied to which server, and once it knows that both copies succeeded, it deletes the segment from the master. If one of the copies failed, the WAL segment is left on the primary and retried the next time the script is called. So far, it has been fairly reliable and consistent.

--Jay
Sent from my iPad

I’m curious how others are cleaning up their WAL archive files and other miscellany created by the process.
Here’s my setup:
Server A: archive_command executes a script that rsyncs the archive file to a local folder and to a matching folder on the standby server (it does not mirror the folder, it just pushes the file twice)
Server B: recovery.conf archive_cleanup_command uses pg_archivecleanup to clean the standby archive folder and sends log output to a cleanup.log file
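For reference, a setup like the one above corresponds roughly to the following configuration fragments; the script name, archive path, and log path are hypothetical placeholders, not the poster's actual values.

```
# postgresql.conf on Server A (primary)
archive_mode    = on
archive_command = '/usr/local/bin/push_wal.sh %p %f'

# recovery.conf on Server B (standby)
standby_mode            = 'on'
restore_command         = 'cp /pg_archive/%f %p'
archive_cleanup_command = 'pg_archivecleanup /pg_archive %r >> /var/log/pg/cleanup.log 2>&1'
```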
What I’m left with:
- WAL archive files on the Master server that only get cleaned up if I fail over and recover using a new pg_basebackup from Server B to Server A
- .history and .backup files on the standby server
- Entries in the cleanup.log file
Right now, I’m thinking my cleanup will involve (every 6 months):
- Failing over my cluster (kidding)
- Truncating the cleanup.log file
- ?? with .history and .backup files
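For the leftover .backup/.history files and the growing log, a small cron-able sketch could look like this. Paths and the retention window are hypothetical; note that pg_archivecleanup only removes plain WAL segments, which is why these files accumulate in the first place.

```shell
#!/bin/sh
# tidy_archive <archive_dir> <log_file> <days>
# Removes .backup and .history files older than <days> from the archive
# directory and truncates the cleanup log in place.
#
# Caution: .history files are needed to recover across timeline switches;
# keep (or copy elsewhere) any you might still need for PITR after a failover.
tidy_archive() {
    dir=$1; log=$2; days=$3
    find "$dir" -name '*.backup'  -mtime +"$days" -delete
    find "$dir" -name '*.history' -mtime +"$days" -delete
    : > "$log"   # truncate rather than delete, so an open writer keeps its handle
}
```

Run every six months this would cover the last two bullets above, e.g. `tidy_archive /pg_archive /var/log/pg/cleanup.log 180` (paths hypothetical).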
What are you doing?
You can also check out omnipitr-cleanup: https://github.com/omniti-labs/omnipitr/blob/master/doc/omnipitr-cleanup.pod
On Wed, Oct 15, 2014 at 12:27 PM, <jayknowsunix@xxxxxxxxx> wrote: