On 2/28/23 17:15, Maciej Leks wrote:
What are those TLS alerts?
Code 21 - decryption_failed_RESERVED(21).
Are you sure that 21 is actually the alert description ID and _not_ the
TLS message content type (all alert messages have TLS content type 21)?
TLSv1.2 Record Layer: Encrypted Alert
Content Type: Alert (21)
Version: TLS 1.2 (0x0303)
Length: 26
Alert Message: Encrypted Alert
This is _not_ a decryption_failed_RESERVED alert (ContentType=21
AlertDescription=21)!
This is an alert (ContentType=21) with an AlertDescription ID unknown to
us. Wireshark cannot report that AlertDescription ID to us because you
did not configure Wireshark to decrypt captured TLS traffic. For
example, it could be a benign close_notify alert (ContentType=21
AlertDescription=0).
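One way to see the actual AlertDescription ID is to make the Node.js
client log its TLS session secrets in NSS key log format and point
Wireshark/tshark at that file (the tls.keylog_file preference, or the
"(Pre)-Master-Secret log filename" field in the GUI). A rough sketch of
the client-side part, assuming a reasonably recent Node.js (which emits
a 'keylog' event on TLS sockets); the URL and file name below are
placeholders:

// Sketch: append NSS-format key log lines so a matching packet capture
// can be decrypted by Wireshark/tshark. Node.js >= 12.3 emits 'keylog'
// on TLS sockets; "tls-keys.log" and the URL are placeholders.
import * as fs from "fs";
import * as https from "https";

const keyLog = fs.createWriteStream("tls-keys.log", { flags: "a" });

const req = https.get("https://login.example.invalid/", (res) => {
  res.resume(); // the body does not matter; we only need the handshake keys
});

req.on("socket", (socket) => {
  // Each 'keylog' line is one CLIENT_RANDOM/secret entry, newline-terminated.
  socket.on("keylog", (line: Buffer) => keyLog.write(line));
});

Recent Node.js versions also accept a --tls-keylog=<file> command line
option, which avoids touching the client code at all.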
Hopefully, there will be fewer unknowns after you share debugging logs
from lab tests.
HTH,
Alex.
nodejs client->squid->server (the server's only role is to accept a
connection and then send an RST back).
And you know what I got on the client side? A 502 from the squid.
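For reference, such a lab server can be as small as a plain TCP listener
that resets every accepted connection. A sketch, assuming Node.js 18.3+
(or 16.17+), where socket.resetAndDestroy() closes with an RST; the port
number is arbitrary:

// Lab "RST server" sketch: accept a TCP connection and immediately reset it.
// Requires Node.js >= 18.3.0 (or >= 16.17.0) for socket.resetAndDestroy().
import * as net from "net";

const server = net.createServer((socket) => {
  socket.resetAndDestroy(); // close with RST; no FIN handshake, no data
});

server.listen(9000, () => {
  console.log("RST lab server listening on :9000");
});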
So, I'm asking just because it looks like a Salesforce alert plus RST,
but at the same time it does not fit my observations from the lab.
When I was watching some pcaps from the parent squid side, I saw only a
few RSTs from Salesforce and a few from the parent squid side, but I got
a lot of them on my child squid side.
You ask a very good question. The more I look, the less I understand :)
Tomorrow morning I'll turn debugging on.
If those TLS alerts are close_notify alerts, then the RST packets you
are seeing are probably triggered by the other side doing clean TLS
shutdown (i.e. sending a TLS close_notify alert before closing the
connection), either after receiving a FIN packet (unlikely with
half_closed_clients off?) or while that FIN packet is being generated or
is in-flight. Such RST packets would be sent by the TCP stack
rather than Squid code itself. They may be an indication of a benign
race condition, a bug, or some other deficiency.
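To make that concrete on the Node side: a client that ends the TLS
socket cleanly sends close_notify and then a FIN; if the peer's TCP
stack answers with an RST instead (for example because data was still in
flight), the client sees ECONNRESET. A minimal sketch, with placeholder
host and port:

// Sketch only: clean TLS shutdown vs. a reset, as seen from a Node client.
// Host and port are placeholders.
import * as tls from "tls";

const socket = tls.connect({ host: "tls.example.invalid", port: 443 }, () => {
  socket.end(); // clean shutdown: TLS close_notify, then TCP FIN
});

socket.on("error", (err) => {
  const code = (err as NodeJS.ErrnoException).code;
  if (code === "ECONNRESET") {
    console.log("connection was reset by the peer or its TCP stack");
  } else {
    console.log("TLS/TCP error:", err.message);
  }
});

socket.on("close", (hadError) => {
  console.log(hadError ? "closed after an error" : "clean close");
});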
half_closed_clients is off
Higher-level information (who initiates closure and why) may be needed
to figure this out. I recommend sharing a link to a compressed ALL,9
cache.log reproducing the problem with a single transaction _combined_
with a matching packet trace file in libpcap format.
There is no other way than to turn debugging on.
Maciek
Tue, 28 Feb 2023 at 15:15 Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
On 2/28/23 08:35, Maciej Leks wrote:
> Am I right in saying that the RST is a design intent of squid, to end
> connections quickly? I've started digging into the squid code and I see
> SO_LINGER with its timeout set to 0, which means it's done on purpose
> so as not to hang on connections in the TIME_WAIT state?
Does the child Squid bump the TLS client connection, tunnel it, or
terminate it (i.e., the child works as a reverse proxy)?
What are those TLS alerts?
If those TLS alerts are close_notify alerts, then the RST packets you
are seeing are probably triggered by the other side doing clean TLS
shutdown (i.e. sending a TLS close_notify alert before closing the
connection), either after receiving a FIN packet (unlikely with
half_closed_clients off?) or while that FIN packet is being
generated or
is in-flight. Such RST packets would be sent by the TCP stack
rather than Squid code itself. They may be an indication of a benign
race condition, a bug, or some other deficiency.
Higher-level information (who initiates closure and why) may be needed
to figure this out. I recommend sharing a link to a compressed ALL,9
cache.log reproducing the problem with a single transaction _combined_
with a matching packet trace file in libpcap format.
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction
HTH,
Alex.
> Tue, 28 Feb 2023 at 08:12 Maciej Leks <maciej.leks@xxxxxxxxx> wrote:
>>
>> For a couple of days we've been encountering many ECONNRESET error
>> messages in our nodejs client to Salesforce. Even if access.log shows
>> only /200 entries, tshark shows messy communication between
>> client->squid (child squid).
>>
>> The architecture: nodejs client -> 3 (or more) child squids -> 2 parent
>> squids -> cloud Salesforce
>>
>> Some examples ([RST]):
>> 46 1.015513576 100.121.10.169 → 100.113.27.73 TCP 66 46098 → 3128 [ACK] Seq=1721 Ack=140135 Win=214656 Len=0 TSval=2672547296 TSecr=1443424287
>> 47 1.016152326 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 46098 [FIN, ACK] Seq=140135 Ack=1721 Win=42368 Len=0 TSval=1443424288 TSecr=2672547296
>> 48 1.017856001 100.121.10.169 → 100.113.27.73 TLSv1.2 97 Encrypted Alert
>> 49 1.017893411 100.121.10.169 → 100.113.27.73 TCP 66 46098 → 3128 [FIN, ACK] Seq=1752 Ack=140136 Win=214656 Len=0 TSval=2672547298 TSecr=1443424288
>> 50 1.018002285 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 46098 [RST] Seq=140136 Win=0 Len=0
>> 51 1.018019806 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 46098 [RST] Seq=140136 Win=0 Len=0
>>
>> [RST, ACK]:
>> 592 67.664585034 100.121.10.169 → 100.113.27.73 TLSv1.2 97 Encrypted Alert
>> 593 67.664737552 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 52202 [ACK] Seq=7973 Ack=1129 Win=42752 Len=0 TSval=1443490937 TSecr=2672613945
>> 594 67.664841613 100.121.10.169 → 100.113.27.73 TCP 66 52202 → 3128 [FIN, ACK] Seq=1129 Ack=7973 Win=42368 Len=0 TSval=2672613945 TSecr=1443490937
>> 595 67.664895660 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 52202 [RST, ACK] Seq=7973 Ack=1129 Win=42752 Len=0 TSval=1443490937 TSecr=2672613945
>> 596 67.664936264 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 52202 [RST] Seq=7973 Win=0 Len=0
>>
>> I'm wondering how to debug this (what exactly to look at) and whether
>> the cause may be on the squid side (a specific configuration?).
>>
>> env: GKE/k8s
>> client container: alpine linux
>> child squid container: alpine linux
>> version: 5.7
>>
>> Cheers,
>> Maciek Leks
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users