Hi,

We're trying to figure out why a Docker NAT bridge occasionally sends out an undesired TCP RST packet, which aborts the TCP connection unexpectedly.

We checked the captured packets. Before the issue happens, all the packets look normal (correct TCP timestamps, correct Seqs/Acks, etc.). However, occasionally, after the Docker bridge receives a normal TCP packet with a payload of several hundred bytes, the bridge immediately sends out a TCP RST packet to abort the connection, while we expect the bridge to forward the packet to the internal Docker instance.

The typical interaction looks like this:

  A (the Docker instance): 172.17.0.2
  B (the bridge): 10.35.4.56
  C (the remote server): 40.121.XX.YY

1) A sends a TCP packet, through B, to C;
2) C's reply reaches B;
3) B immediately sends a TCP RST packet to C;
4) A thinks C didn't receive the packet, so A retransmits the packet 7 times, through B; B still does the normal NAT translation and forwards all 7 packets to C; there is no response from C (I suppose C ignores the packets);
5) A closes the connection by sending a TCP FIN packet; B still does the normal NAT translation and forwards the packet to C; there is no response from C.

We need to figure out what happens in step 3. It looks like the bridge thinks something bad happened, so it tries to abort the TCP connection? If so, why does it still forward the retransmitted packets from A to C?

There are not many concurrent TCP connections (usually only about 5), so I don't think the conntrack module runs out of tracking-table entries. We checked "conntrack -L" and there are only about 700 entries.

We found some similar issues (out-of-window packets are marked INVALID, causing a TCP RST):

https://github.com/docker/libnetwork/issues/1090
https://github.com/kubernetes/kubernetes/issues/74839

and we tried the workarounds mentioned there (though it looks like we're facing a different issue, since all the packets we checked are normal):

  echo 1 > /proc/sys/net/netfilter/nf_conntrack_tcp_be_liberal
  iptables -I INPUT -m conntrack --ctstate INVALID -j DROP

But they made no difference. BTW, we're running a 4.15-based kernel.

Can you please recommend some tools that can trace exactly how a TCP packet flow is processed by iptables/conntrack, especially in the case of NAT? I'm currently studying tools like ipset, nft, and ulogd2. It looks like we can log some iptables/conntrack events while tracing the packet flows, but I'm unsure whether we can log the event that generates the undesired TCP RST packet.
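In the meantime, here is roughly what we're planning to try first. This is only a sketch based on the conntrack-tools and iptables man pages (the -d value below is the elided address of C, to be filled in):

  # Watch conntrack events (NEW/UPDATE/DESTROY) for the flow in question,
  # to see whether the entry changes state or gets destroyed around step 3
  conntrack -E -p tcp -d 40.121.XX.YY

  # Log any packet that conntrack classifies as INVALID, on both the
  # forwarded path and the path to the bridge's own stack, before it is dropped
  iptables -I FORWARD -m conntrack --ctstate INVALID -j LOG --log-prefix "ct-inv-fwd: "
  iptables -I INPUT -m conntrack --ctstate INVALID -j LOG --log-prefix "ct-inv-in: "

If the reply in step 2 really is being marked INVALID, these LOG rules should fire even though the packet itself looks in-window in the capture.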
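To see every chain and rule a packet traverses, we're also looking at the TRACE target in the raw table. Again only a sketch; my understanding is that on a 4.15 kernel the trace output goes to the kernel log once the IPv4 packet logger is selected (ulogd2 can also pick it up via NFLOG):

  # Select the kernel logger that the TRACE target uses for IPv4
  modprobe nf_log_ipv4
  sysctl net.netfilter.nf_log.2=nf_log_ipv4

  # Trace the reply packets from C before conntrack/NAT processing
  # (again, fill in C's real address)
  iptables -t raw -I PREROUTING -p tcp -s 40.121.XX.YY -j TRACE

  # Follow the per-rule trace lines
  dmesg -w | grep TRACE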
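Finally, our working theory (not a confirmed fact) is that the RST originates from B's own TCP stack, e.g. because the reply isn't de-NATed and instead hits the host, which has no matching socket. If so, a kprobe on the kernel's RST path might catch it in the act. A sketch, assuming the tcp_v4_send_reset symbol is visible on our kernel:

  # Put a kprobe on the function that emits IPv4 TCP resets
  perf probe --add tcp_v4_send_reset

  # Record call stacks system-wide for 60 seconds while reproducing the issue
  perf record -e probe:tcp_v4_send_reset -ag -- sleep 60
  perf script

  # Clean up the probe afterwards
  perf probe --del tcp_v4_send_reset

If the recorded stack shows the reset coming out of the bridge's own TCP receive path, that would at least tell us step 3 is the host stack reacting, not netfilter sending the reset directly.

Thank you!

-- Dexuan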