On Mon, Dec 15, 2014 at 9:12 AM, Joseph Kregloh <jkregloh@xxxxxxxxxxxxxx> wrote:
> Hello,
>
> I have a master/multi-slave streaming replication setup: one master and
> two slaves. I need to do some maintenance on one of the slaves because one
> of its drives died, and there are some other odd things going on in that
> array that I need to investigate, so I expect the machine to be down for
> at least two hours.
>
> I remember reading that if the master cannot connect to a slave, it will
> hold the log files instead of shipping them. Is there any other way to
> hold the files until the slave comes back online? Would both slaves stop
> getting their files shipped over, or just the one that is down?
>
> The good news is that the slave in question is not serving any
> connections.
>
> From what I remember, emptying out archive_command would pause log
> shipping. Can the same be done by issuing pg_stop_backup()?
>
> Thanks,
> -Joseph Kregloh

I think you will need to change your archive_command so it saves the WALs
to a location reachable by both slaves and the master, and have both slaves
pull from that same location. I don't think pg_stop_backup() is useful in
this situation.

The master will hold the logs for as long as archive_command fails [1]. To
the extent that your archive_command involves connecting to the slave,
then yes, Postgres will hold the WAL segments while that slave is down.

There are (at least) two reasons that saving the archives to some other
location is useful:

1) You don't risk running out of disk space on the master due to WALs
   piling up when a slave goes down.
2) The archived logs can be used for point-in-time recovery.

[1] http://www.postgresql.org/docs/9.1/static/continuous-archiving.html

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
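As an aside on the archive_command approach discussed above: the usual
shape is a command that refuses to overwrite an existing segment and
returns non-zero on failure, since a non-zero exit is what makes Postgres
keep the segment and retry later. Below is a minimal sketch of those
semantics; the paths (/tmp/wal_archive_demo, the fake WAL segment name)
are stand-ins I made up for the %f/%f placeholders and a shared mount, not
anything from the thread.

```shell
#!/bin/sh
# Simulate archive_command semantics: copy a WAL segment into a shared
# archive directory, refusing to overwrite an existing file. In a real
# setup ARCHIVE would be a mount reachable by the master and both slaves.
ARCHIVE=/tmp/wal_archive_demo      # stand-in for the shared location
mkdir -p "$ARCHIVE"

WAL=/tmp/000000010000000000000042  # stand-in for %p (full path to segment)
printf 'fake wal data' > "$WAL"
SEG=$(basename "$WAL")             # stand-in for %f (segment file name)

# First attempt: file not present yet, so the copy succeeds (exit 0).
test ! -f "$ARCHIVE/$SEG" && cp "$WAL" "$ARCHIVE/$SEG"
echo "first attempt exit: $?"

# Second attempt: the file exists, so the command fails (non-zero exit),
# which would tell Postgres to hold the segment and retry later.
test ! -f "$ARCHIVE/$SEG" && cp "$WAL" "$ARCHIVE/$SEG"
echo "second attempt exit: $?"
```

The matching restore side on each slave would then be something like
`restore_command = 'cp /mnt/wal_archive/%f %p'`, with the mount point again
being an assumption, not a value from the original message.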