Hi Adrian,
thank you again for your continued patience.
On 10 March 2016 00:20:13 +01:00, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
> On 03/09/2016 10:41 AM, fredrik@xxxxxxxxxxxxx wrote:
>> Hi Adrian,
>> thank you very much for your patience. I apologise for the missing
>> information.
>>
>> On 9 March 2016 16:13:00 +01:00, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
>>> On 03/09/2016 04:56 AM, fredrik@xxxxxxxxxxxxx wrote:
>>>> Hi Adrian,
>>>> thank you very much for your response.
>>>> I ran the "VACUUM ANALYZE" command on the master node.
>>>> Regarding log messages. Here is the contents of the log (excluding
>>>> connections/disconnections):
>>>
>>> Assuming the below is from the replica database.
>>
>> the "LOG: recovery was paused" message was indeed from the replica.
>>
>>>> 2016-02-22 02:30:08 GMT 24616 LOG:  recovery has paused
>>>
>>> So what happened to cause the above?
>>
>> we automatically pause recovery on the replica before running pg_dump.
>> This is in order to make certain that we get a consistent dump of the
>> database.
>
> Still muddling through this, but to recap and be clear in my mind:
>
> 1) This only started occurring with 9.1.15, but worked in previous
> versions of 9.1.
>
> So what was the last version of 9.1 that worked?
The previous version for us was 9.1.13
It *is* true that it has never happened before, and also that it has happened twice with 9.1.15.
It should be noted that we *are* executing this process in many automated environments. We have definitely run the full process you describe here more than a hundred times on 9.1.15, probably more than 200. The process has therefore worked for us more than 99% of the time, and the failure even slipped through an extended QA phase unnoticed.
To my mind this means we are most likely not looking at something that simply "does not work", but at some corner case.
> 2) You seed a replica with pg_basebackup.
yes.
> 3) You set up synchronous streaming replication to the replica.
yes
> 4) You pause the replication and use pg_dump to dump the replica.
yes
> 5) At this point the error in the subject has occurred twice since you switched to 9.1.15
precisely
> 6) Running full vacuum analyze on the master solves the problem.
>
> How is it solved? In other words, do you resume replication after the
> vacuum, or before? Then do you redo the pg_dump? Or do you start over
> with a new pg_basebackup?
I must admit that I am a bit confused here. My recollection is that the *only* thing I did was to run VACUUM ANALYZE on the master and immediately after I was able to successfully run pg_dump.
I realise that this does not correspond very well with PostgreSQL stating that replay had been paused.
I wonder if I might be mistaken and have run VACUUM ANALYZE on the slave node instead.
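Incidentally, there may be a way to rule that out after the fact: as far as I know, VACUUM cannot run on a hot standby at all, so if the command succeeded it must have been executed against the master. A sketch of the check (the exact error text is from memory, so treat it as an assumption):

```sql
-- On a hot standby, any VACUUM is rejected outright:
VACUUM ANALYZE;
-- ERROR:  cannot execute VACUUM during recovery

-- And to confirm which node a given session is connected to:
SELECT pg_is_in_recovery();  -- true on the replica, false on the master
```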
> I am not seeing anything below that indicates the recovery started again.

The reason why we do not see a matching "resume" is that the pg_dump failed and our error handling was insufficient.
Looking at this thread again, I wonder: if our error handling had been better and we had automatically resumed streaming even after the error in the pg_dump phase, would the problem have gone away?
Is it possible that we can be unlucky when pausing replication, in a way that would not permit us to run a pg_dump? (perhaps if we factor in the "stale collector" warning below?)
>>>> 2016-02-22 02:30:08 GMT 24616 HINT:  Execute pg_xlog_replay_resume() to continue.
>>>> 2016-02-22 02:37:19 GMT 23859 DBNAME ERROR:  missing chunk number 0 for toast value 2747579 in pg_toast_22066
>>>> 2016-02-22 02:37:19 GMT 23859 DBNAME STATEMENT:  COPY public.room_shape (room_uuid, data) TO stdout;
>>>> 2016-02-22 02:37:41 GMT 2648 DBNAME LOG:  could not receive data from client: Connection reset by peer
>>>> 2016-02-22 02:37:41 GMT 2648 DBNAME LOG:  unexpected EOF on client connection
>>>
>>> What does the log from the master show?
>>
>> It doesn't seem to show much. It does have these repeated messages, however:
>>
>> 2016-02-22 02:12:18 GMT 30908 LOG:  using stale statistics instead of current ones because stats collector is not responding
>> 2016-02-22 02:13:01 GMT 30908 LOG:  using stale statistics instead of current ones because stats collector is not responding
>> 2016-02-22 02:13:52 GMT 30908 LOG:  using stale statistics instead of current ones because stats collector is not responding
>>
>> There are lots of these messages within the timeframe. There seem to be a couple of them every 2-4 hours.
>
> This is usually a sign of resource starvation. I see this on an old machine, at night, when I run some intensive file system backups. I figured it out by looking at my crontab. The problem, such as it is, is that the messages fill up the logs and your statistics become, as the message says, stale for however long the collector does not respond.
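On the "missing chunk" error itself: a common way to narrow it down is to force detoasting one row at a time and note at which row it starts failing. A sketch, assuming the column names from the failing COPY statement and guessing that `data` is castable to text:

```sql
-- Run from a loop (e.g. a shell loop incrementing the OFFSET); the
-- offset at which this first raises "missing chunk number 0 for toast
-- value ..." identifies the damaged row:
SELECT room_uuid, length(data::text) AS detoasted_len
FROM public.room_shape
ORDER BY room_uuid
LIMIT 1 OFFSET 0;   -- increment the offset until the error appears
```

The `length(data::text)` call is only there to force the TOAST data to actually be read; a plain `SELECT room_uuid` would not touch it.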
Thank you very much, this is great feedback. I will try to determine what put the master under such load.
>>>> Best regards,
>>>> Fredrik Huitfeldt
>>>>
>>>> On 7 March 2016 16:35:29 +01:00, Adrian Klaver wrote:
>>>>> On 03/06/2016 10:18 PM, fredrik@xxxxxxxxxxxxx wrote:
>>>>>> Hi All,
>>>>>> I would really appreciate any help I can get on this issue.
>>>>>> Basically, a pg_basebackup + streaming attach led to a database
>>>>>> that we could not read from afterwards.
>>>>>
>>>>> From the original post:
>>>>> "The issue remained until we ran a full vacuum analyze on the cluster."
>>>>> Which cluster was that, the master or the slave?
>>>>> "I have logfiles from the incident, but I cannot see anything out of
>>>>> the ordinary (despite having a fair amount of experience investigating
>>>>> postgresql logs)."
>>>>> Can we see the section before and after ERROR?
>>>>>
>>>>>> Best regards,
>>>>>> Fredrik
>>>>>> PS please advise if this is better posted on another list.
>>>>>
>>>>> --
>>>>> Adrian Klaver
>>>
>>> --
>>> Adrian Klaver
>>
>> Best regards,
>> Fredrik
>
> --
> Adrian Klaver
Thank you for your help so far,
Fredrik