On 2/28/23 08:35, Maciej Leks wrote:
Am I right in saying that sending RST is a design intent of Squid, to end
connections quickly? I've started digging into the Squid code and see
SO_LINGER with the timeout set to 0, which suggests it is done on purpose
so that connections do not hang in the TIME_WAIT state?
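For illustration, here is a minimal sketch of the pattern I mean at the
socket level (my own reconstruction, not the actual Squid code):

    /* Minimal sketch, not actual Squid code: with SO_LINGER enabled and
     * a zero linger timeout, close() aborts the connection with an RST
     * instead of a FIN, so the socket never enters TIME_WAIT. */
    #include <sys/socket.h>
    #include <unistd.h>

    static void abortive_close(int fd) {
        struct linger lin;
        lin.l_onoff = 1;   /* enable lingering on close ... */
        lin.l_linger = 0;  /* ... with a zero timeout: abortive close */
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof lin);
        close(fd);         /* kernel sends RST and discards unsent data */
    }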
Does the child Squid bump the TLS client connection, tunnel it, or
terminate it (i.e., does the child work as a reverse proxy)?
What are those TLS alerts?
If those TLS alerts are close_notify alerts, then the RST packets you
are seeing are probably triggered by the other side doing a clean TLS
shutdown (i.e. sending a TLS close_notify alert before closing the
connection), either after receiving a FIN packet (unlikely with
half_closed_clients off?) or while that FIN packet is being generated or
is in flight. Such RST packets would be sent by the TCP stack rather
than by Squid code itself. They may be an indication of a benign race
condition, a bug, or some other deficiency.
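For illustration, a minimal standalone sketch of that race (assuming
Linux and loopback; this is a hypothetical demo, not Squid code): once
one side has fully close()d its socket, data that arrives afterwards is
answered by the kernel with an RST, and the peer's next write fails with
ECONNRESET:

    /* Hypothetical demo, not Squid code: data arriving after a full
     * close() is answered with RST by the TCP stack itself. */
    #include <arpa/inet.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        signal(SIGPIPE, SIG_IGN);     /* report errors via errno, not SIGPIPE */

        struct sockaddr_in a;
        memset(&a, 0, sizeof a);
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        socklen_t alen = sizeof a;

        int ls = socket(AF_INET, SOCK_STREAM, 0);
        bind(ls, (struct sockaddr *)&a, sizeof a);
        getsockname(ls, (struct sockaddr *)&a, &alen); /* learn ephemeral port */
        listen(ls, 1);

        int c = socket(AF_INET, SOCK_STREAM, 0);       /* the "client" side */
        connect(c, (struct sockaddr *)&a, sizeof a);
        int s = accept(ls, NULL, NULL);                /* the "proxy" side  */

        close(s);                        /* proxy fully closes: FIN goes out  */
        write(c, "late TLS alert", 14);  /* client data sent after that close */
        sleep(1);                        /* ...hits a closed socket: kernel RST */
        if (write(c, "x", 1) < 0)        /* the next write observes the reset */
            perror("write");             /* typically: Connection reset by peer */
        return 0;
    }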
Higher-level information (who initiates closure and why) may be needed
to figure this out. I recommend sharing a link to a compressed ALL,9
cache.log reproducing the problem with a single transaction _combined_
with a matching packet trace file in libpcap format.
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction
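Roughly, that combination would look like this (adjust the interface and
filter to your environment; port 3128 is taken from your trace above):

    # squid.conf: maximum debugging for all sections (very verbose;
    # enable only long enough to capture one problematic transaction)
    debug_options ALL,9

    # on the child Squid node, capture the matching packets in libpcap format
    tcpdump -i any -s 0 -w squid-rst.pcap 'tcp port 3128'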
HTH,
Alex.
On Tue, 28 Feb 2023 at 08:12 Maciej Leks <maciej.leks@xxxxxxxxx> wrote:
For a couple of days we've been encountering many ECONNRESET error
messages in our nodejs client to Salesforce. Even though access.log
shows only /200 responses, tshark shows messy communication between the
client and the child Squid.
The architecture: nodejs client -> 3..* child squids -> 2 parent
squids -> cloud Salesforce
Some examples ([RST]):

46 1.015513576 100.121.10.169 → 100.113.27.73 TCP 66 46098 → 3128 [ACK] Seq=1721 Ack=140135 Win=214656 Len=0 TSval=2672547296 TSecr=1443424287
47 1.016152326 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 46098 [FIN, ACK] Seq=140135 Ack=1721 Win=42368 Len=0 TSval=1443424288 TSecr=2672547296
48 1.017856001 100.121.10.169 → 100.113.27.73 TLSv1.2 97 Encrypted Alert
49 1.017893411 100.121.10.169 → 100.113.27.73 TCP 66 46098 → 3128 [FIN, ACK] Seq=1752 Ack=140136 Win=214656 Len=0 TSval=2672547298 TSecr=1443424288
50 1.018002285 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 46098 [RST] Seq=140136 Win=0 Len=0
51 1.018019806 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 46098 [RST] Seq=140136 Win=0 Len=0
[RST, ACK]:

592 67.664585034 100.121.10.169 → 100.113.27.73 TLSv1.2 97 Encrypted Alert
593 67.664737552 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 52202 [ACK] Seq=7973 Ack=1129 Win=42752 Len=0 TSval=1443490937 TSecr=2672613945
594 67.664841613 100.121.10.169 → 100.113.27.73 TCP 66 52202 → 3128 [FIN, ACK] Seq=1129 Ack=7973 Win=42368 Len=0 TSval=2672613945 TSecr=1443490937
595 67.664895660 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 52202 [RST, ACK] Seq=7973 Ack=1129 Win=42752 Len=0 TSval=1443490937 TSecr=2672613945
596 67.664936264 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 52202 [RST] Seq=7973 Win=0 Len=0
I'm wondering how to debug this (what exactly to look at) and whether
the cause may be on the Squid side (a specific configuration?).
env: GKE/k8s
client container: alpine linux
child squid container: alpine linux
version: 5.7
Cheers,
Maciek Leks
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users