I'm not sure about the root cause, but I can make a few observations and raise some questions:
1) 9.6.6 is five bug-fix releases behind the latest 9.6 minor version
2) 300GB is quite a big table; wouldn't it make sense to partition it?
2a) or, if it is already partitioned, does the time when partitions are created or dropped match the time of the conflict?
3) can you track long-running transactions on the master? (rough query sketch after this list)
4) what are the isolation levels on master / replica?
5) check for active locks on the replica; you should see some blocked transactions during big delays. I've seen this in the past when hot_standby_feedback was turned off. (query sketch after this list)
6) any out-of-the-ordinary messages in the replica's logs? any evidence that it has been canceling statements? (query sketch after this list)
7) are master and replica exactly the same in terms of resources and main parameters?
8) how is performance on both nodes while the big delay is happening? I/O, CPU load, etc.
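
For 3), a rough sketch of what I would run on the master via psql, using pg_stat_activity (adjust the ordering and LIMIT to taste):

  -- oldest open transactions on the master, longest-running first
  SELECT pid, usename, state,
         now() - xact_start AS xact_age,
         query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
  ORDER BY xact_start
  LIMIT 20;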
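
For 5), something along these lines on the replica should show blocked queries and what is blocking them (pg_blocking_pids() is available from 9.6):

  -- queries on the replica that are currently blocked, and by whom
  SELECT pid, pg_blocking_pids(pid) AS blocked_by,
         wait_event_type, wait_event,
         now() - query_start AS query_age,
         query
  FROM pg_stat_activity
  WHERE cardinality(pg_blocking_pids(pid)) > 0;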
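
For 6), and since you only have psql on RDS, pg_stat_database_conflicts on the replica breaks down query cancellations by cause (snapshot, lock, buffer pin, etc.); comparing it before and after a lag episode should show which counter moves, which may also help with the "precise conflict" question quoted below:

  -- per-database conflict counters, populated only on standbys
  SELECT datname, confl_tablespace, confl_lock, confl_snapshot,
         confl_bufferpin, confl_deadlock
  FROM pg_stat_database_conflicts;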
Also, check this out:
On Mon, Nov 19, 2018 at 21:46, Wyatt Alt <wyatt.alt@xxxxxxxxx> wrote:
Sorry, I see now there was a similar question a few days ago:

Two ideas proposed (aside from disconnects):
* Autovacuum is truncating a page on the master and taking an AccessExclusiveLock on the table in use on the replica
* A "pin conflict", which I'm still unfamiliar with.

The user's response says they are in the first bucket, but the argument relies on max_standby_streaming_delay set to -1, while mine is 5 minutes. I need to understand pin conflicts better, but the likely scenario Andrew outlined doesn't apply to me. My offending queries were doing bitmap heap scans on a 300GB table.

Reading the thread I see Andres ask for the "precise conflict" the user gets -- is there a way I can get that without a datadir? And to re-frame the original question, are there causes of replication lag that max_standby_streaming_delay would not be expected to prevent, that would be resolved by killing long standby queries? If so, what's the best way to confirm?

Wyatt

On Mon, Nov 19, 2018 at 5:46 PM Wyatt Alt <wyatt.alt@xxxxxxxxx> wrote:

I've been struggling to eliminate replication lag on a Postgres 9.6.6 instance on Amazon RDS. I believe the lag is caused by early cleanup conflicts from vacuums on the master, because I can reliably resolve it by killing long-running queries on the standby. I most recently saw ten hours of lag on Saturday and addressed it this way.

The standby is running with
hot_standby_feedback = on
max_standby_streaming_delay = 5min
max_standby_archive_delay = 30s

I am not using replication slots on the primary due to reported negative interactions with pg_repack on large tables.

My rationale for the first two settings is that hot_standby_feedback should address my issues almost all the time, but that max_standby_streaming_delay would sometimes be necessary as a fallback, for instance in cases of a transient connection loss between the standby and primary. I believe these settings are mostly working, because lag is less frequent than it was when I configured them.

My questions are,
* Am I overlooking anything in my configuration?
* What would explain lag caused by query conflicts given the max_standby_streaming_delay setting? Shouldn't those queries be getting killed?
* Is there any particular diagnostic info I should be collecting on the next occurrence, to help me figure out the cause? Note that as I'm on RDS, I don't have direct access to the datadir -- just psql.

Thanks for any advice!
Wyatt