Re: PostgreSQL 10.5 : Logical replication timeout results in PANIC in pg_wal "No space left on device"


 




> On Nov 14, 2018, at 3:31 AM, Achilleas Mantzios <achill@xxxxxxxxxxxxxxxxxxxxx> wrote:
> 
> Our sysadms (seasoned linux/network guys : we have been working here for more than 10 yrs) were absolute in that we run no firewall or other traffic shaping system between the two hosts. (if we did the problem would manifest itself earlier).  Can you recommend what to look for exactly regarding both TCP stacks ? The subscriber node is a clone of the primary. We have :
> 
> # sysctl -a | grep -i keepaliv
> net.ipv4.tcp_keepalive_intvl = 75
> net.ipv4.tcp_keepalive_probes = 9
> net.ipv4.tcp_keepalive_time = 7200
> 

Those keepalive settings are Linux's defaults and work out to roughly 2 hours 11 minutes (7200 s idle + 9 probes × 75 s ≈ 7875 s) before the abandoned connection is dropped.  So, the WAL receiver should have corrected itself after that time.  For reference, I terminate abandoned sessions within 15 minutes, as they take up valuable database resources and can potentially hold on to locks, snapshots, etc.
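For reference, the worst-case dead-peer detection time implied by the sysctl values quoted above is just the idle timer plus the full run of probes (a quick sanity check of the arithmetic):

```shell
# Worst-case time before the kernel gives up on a dead idle peer:
# idle time before the first probe + (number of probes * probe interval)
idle=7200        # net.ipv4.tcp_keepalive_time   (seconds)
probes=9         # net.ipv4.tcp_keepalive_probes
interval=75      # net.ipv4.tcp_keepalive_intvl  (seconds)

total=$((idle + probes * interval))
echo "${total} seconds (~$((total / 60)) minutes)"
```

Note this only applies to an idle socket; a socket with unacknowledged data in its send queue is governed by the retransmission timers instead.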

I haven’t used Postgres’s keepalive settings, as I find the OS handles it just fine.
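If you do want PostgreSQL to manage this itself rather than relying on the kernel defaults, it exposes per-connection keepalive parameters in postgresql.conf that override the sysctls for backend sockets. The values below are purely illustrative, not recommendations:

```
# postgresql.conf -- per-backend TCP keepalive overrides (0 = use OS default)
tcp_keepalives_idle = 300       # seconds of idle before the first probe
tcp_keepalives_interval = 30    # seconds between unanswered probes
tcp_keepalives_count = 5        # unanswered probes before dropping the connection
```

With these illustrative values a dead peer would be detected in about 300 + 5 × 30 = 450 seconds instead of ~2 hours.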


> Also in addition to what you say (netstat, tcpdump) if I detect such a case (even with the primary panic'ed -- yeah this would take quite some nerves to do this) I will connect with gdb and take a stack trace to know what the worker is doing and why doesn't it restart.
> 
> 

Do a netstat -an; that will show you all the network connections and their current state.  If you run it on both systems you should find corresponding entries/states for the replication stream.

Sample output:

Active Internet connections (including servers)
Proto Recv-Q Send-Q Local Address          Foreign Address        (state)
tcp4       0      0 10.10.10.1.50604          10.10.10.2.5432       ESTABLISHED


In your case, what I think might have happened is that the upstream server either had no entry for the connection or was in one of the FIN states, while the downstream server still showed an ESTABLISHED connection with a Send-Q backlog.  If the servers had been communicating, the upstream server would have responded with a reset (RST) packet, forcing the downstream session to terminate.
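On Linux, ss can confirm that picture in one shot: it shows the Recv-Q/Send-Q sizes per socket and, with -o, any keepalive or retransmission timer currently running on it. This assumes the replication connection uses PostgreSQL's default port 5432, as in the example above:

```shell
# TCP sockets to or from port 5432, numeric, with queue sizes and timer info.
# -t TCP only, -a all states, -n numeric, -o show timers
ss -tano '( sport = :5432 or dport = :5432 )'
```

A connection stuck with a growing Send-Q and no running retransmission timer on one side, while the other side has no matching entry at all, is exactly the half-open situation described above.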

Using the root account you could have seen what was transpiring on the given connection, e.g.:

tcpdump -i eth0 port 50604






