http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html
I would just make two copies of each WAL file, one per slave, in separate directories. That way each slave can clean up its own directory independently, and if one slave is offline for a period of time it can still catch up when it comes back online.
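As a rough sketch of that idea, the master's archive_command could call a small wrapper that copies each segment into one directory per slave (the function name, paths, and directory layout below are illustrative assumptions, not from the original posts; pg_archivecleanup on each slave would then target that slave's own directory):

```shell
# archive_wal: copy one WAL segment into a separate directory per slave.
# Intended to be invoked from postgresql.conf on the master, mirroring
# archive_command's %p (path) and %f (file name) arguments, e.g.:
#   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
archive_wal() {
    wal_path="$1"    # %p: source path of the WAL segment on the master
    wal_name="$2"    # %f: bare file name of the WAL segment
    shift 2
    for dir in "$@"; do    # remaining args: one archive directory per slave
        # Copy under a temporary name, then rename, so a slave's
        # restore_command never sees a half-written segment.
        cp "$wal_path" "$dir/$wal_name.tmp" &&
        mv "$dir/$wal_name.tmp" "$dir/$wal_name" || return 1
    done
}
```

With each slave's recovery.conf pointing restore_command and archive_cleanup_command at its own directory, one slave's cleanup can no longer remove segments the other slave still needs.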
On Fri, May 20, 2011 at 6:59 AM, Ben Lancaster <benlancaster@xxxxxxxxxxxx> wrote:
Hi,
First post, forgive me if this is better suited to pgsql-general.
I've got streaming replication set up with two slave servers (PostgreSQL 9.0 on Ubuntu 10.04 LTS). The master pushes the WAL to an NFS export, which is in turn mounted on and picked up by the two slaves.
The problem I have is that pg_archivecleanup (running on one of the slaves) was removing WAL logs before the other slave had picked up the changes, thus breaking replication for the second slave. As an interim fix, I simply disabled the automatic cleanup and figured I'd worry about it later.
Well, later is now and I'm running out of HDD space. So what's the best (or perhaps, correct) way to handle cleaning up WAL archives when there's more than one slave? My first thought was to prefix the pg_archivecleanup call in recovery.conf's archive_cleanup_command with a "sleep" of a few seconds, giving both slaves time to pick up changes before WAL files are removed. But I'm afraid I'd end up with some weird race conditions: loads of sleeping processes waiting to clean up WAL files that have already been removed by a recently awoken process.
Thanks in advance,
Ben
--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin