On 2/28/23 15:34, Maciej Leks wrote:
Tue, 28 Feb 2023 at 20:06 Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
On 2/28/23 13:07, Maciej Leks wrote:
Does the child Squid bump the TLS client connection, tunnel it, or
terminate it (i.e., the child works as a reverse proxy)?
* client connects to child Squid's http_port without "ssl-bump".
* client sends a plain text CONNECT request to child Squid.
* child Squid connects to parent using cache_peer with mTLS.
I assume that, by mTLS, you mean a cache_peer directive with "tls"
option _and_ with "sslcert" (or equivalent) option to specify the TLS
client certificate to be sent to the cache peer.
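In squid.conf terms, I assume something roughly like this on the child (the
parent hostname, port, and certificate paths below are placeholders, not taken
from your setup):

    # plain forward-proxy port; CONNECT requests are tunneled, not bumped
    http_port 3128

    # parent peer reached over TLS, presenting a TLS client certificate ("mTLS")
    cache_peer parent.example.net parent 3128 0 no-query default tls sslcert=/etc/squid/client-cert.pem sslkey=/etc/squid/client-key.pem
    never_direct allow all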
I assume that the packets you pasted earlier were between a child Squid
and its parent.
Exactly.
What are those TLS alerts?
Code 21 - decryption_failed_RESERVED(21).
Are you sure that 21 is actually the alert description ID and _not_ the
TLS message content type (all alert messages have TLS content type 21)?
See "ContentType" and "AlertDescription" definitions in RFC 8446. Below,
I will assume that it is the former because you said
"decryption_failed_RESERVED" above, but this looks very suspicious!
Where did you get that "21" from? Perhaps you can show Wireshark
dissection of the corresponding alert message?
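For example, something like the following shows how the capture tool actually
dissected those records (trace.pcap is a placeholder; the tls.* field names
assume a reasonably recent Wireshark/tshark, older builds use the ssl.* prefix).
Keep in mind that for an "Encrypted Alert" the alert description cannot be read
from the trace at all; only the outer record's content type is visible, and that
is always 21 for alerts:

    # full dissection of every TLS record whose content type is 21 (alert)
    tshark -r trace.pcap -Y "tls.record.content_type == 21" -O tls

    # alert descriptions are only dissected where the alert was sent in plaintext
    tshark -r trace.pcap -Y "tls.alert_message" -T fields -e frame.number -e tls.alert_message.desc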
According to the (obsoleted) RFC 5246 (TLS v1.2):

   decryption_failed_RESERVED
      This alert was used in some earlier versions of TLS, and may have
      permitted certain attacks against the CBC mode [CBCATT]. It MUST
      NOT be sent by compliant implementations.
I would focus on this error and ignore RST packets for now. Most likely,
this error is what triggers those packets.
In your experience, who sends this TLS "Encrypted Alert" and the RST:
the server (Salesforce) or the parent Squid?
FWIW, I do not recall seeing these specific alerts. It might be
Salesforce greasing TLS messages, I guess, but I cannot quickly find
evidence of alerts being used for that purpose (RFC 8701 does not
mention this greasing vector AFAICT). Typical greased values do not
violate MUSTs, but perhaps they can violate MUSTs from obsoleted RFCs?
Or this is not a decryption_failed_RESERVED alert ;-).
However, I am surprised you are asking the list this question. Don't you
know (e.g., from packet captures) who is sending the alert in question?!
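If not, one quick way to check is to list just the alert, FIN, and RST frames
from the capture, in order (again a sketch, with trace.pcap standing for your
capture file):

    # TLS alert records plus FIN and RST segments, in capture order
    tshark -r trace.pcap -Y "tls.record.content_type == 21 or tcp.flags.fin == 1 or tcp.flags.reset == 1"

The source address of the alert frame tells you which side sent it.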
Cheers,
Alex.
If those TLS alerts are close_notify alerts, then the RST packets you
are seeing are probably triggered by the other side doing clean TLS
shutdown (i.e. sending a TLS close_notify alert before closing the
connection), either after receiving a FIN packet (unlikely with
half_closed_clients off?) or while that FIN packet is being generated or
is in-flight. Such RST packets would be sent by the TCP stack
rather than Squid code itself. They may be an indication of a benign
race condition, a bug, or some other deficiency.
half_closed_clients is off
Higher-level information (who initiates closure and why) may be needed
to figure this out. I recommend sharing a link to a compressed ALL,9
cache.log reproducing the problem with a single transaction _combined_
with a matching packet trace file in libpcap format.
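For example, something along these lines should produce both artifacts at once
(the interface choice and capture filter are placeholders; adjust them to the
child-to-parent leg you are debugging):

    # in squid.conf on the child, temporarily enable full debugging
    debug_options ALL,9

    # apply it and capture the proxy traffic while reproducing the problem once
    squid -k reconfigure
    tcpdump -i any -s 0 -w squid-parent.pcap port 3128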
There is no other way than turning debug on.
Maciek
Tue, 28 Feb 2023 at 15:15 Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
On 2/28/23 08:35, Maciej Leks wrote:
> Am I right in saying that the RST is a design intent of Squid, to end
> connections quickly? I've started digging into the Squid code and I see
> SO_LINGER with its timeout set to 0, which means it's done on purpose
> so as not to hang on to connections in the TIME_WAIT state?
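(For reference, the abort-on-close pattern described above is, generically,
something like the sketch below; this illustrates the socket option itself and
is not a quote of Squid's code:)

    #include <sys/socket.h>
    #include <unistd.h>

    /* With l_onoff=1 and l_linger=0, close() discards any unsent data and
     * sends an RST instead of the normal FIN handshake, so this side of the
     * connection never enters the TIME_WAIT state. */
    static void abort_on_close(int fd)
    {
        struct linger l;
        l.l_onoff = 1;
        l.l_linger = 0;
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
        close(fd);
    }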
Does the child Squid bump the TLS client connection, tunnel it, or
terminate it (i.e., the child works as a reverse proxy)?
What are those TLS alerts?
If those TLS alerts are close_notify alerts, then the RST packets you
are seeing are probably triggered by the other side doing clean TLS
shutdown (i.e. sending a TLS close_notify alert before closing the
connection), either after receiving a FIN packet (unlikely with
half_closed_clients off?) or while that FIN packet is being
generated or
is in-flight. Such RST packets would be sent by the TCP stack
rather than Squid code itself. They may be an indication of a benign
race condition, a bug, or some other deficiency.
Higher-level information (who initiates closure and why) may be needed
to figure this out. I recommend sharing a link to a compressed ALL,9
cache.log reproducing the problem with a single transaction _combined_
with a matching packet trace file in libpcap format.
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction
HTH,
Alex.
> Tue, 28 Feb 2023 at 08:12 Maciej Leks <maciej.leks@xxxxxxxxx> wrote:
>>
>> For a couple of days we've been encountering many ECONNRESET error
>> messages in our nodejs client to Salesforce. Even though access.log shows
>> only /200, tshark shows messy communication between the client and the
>> (child) squid.
>>
>> The architecture: nodejs client -> 3..* child squids -> 2 parent
>> squids -> cloud Salesforce
>>
>> Some examples ([RST]):
>>
>> 46 1.015513576 100.121.10.169 → 100.113.27.73 TCP 66 46098 → 3128 [ACK] Seq=1721 Ack=140135 Win=214656 Len=0 TSval=2672547296 TSecr=1443424287
>> 47 1.016152326 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 46098 [FIN, ACK] Seq=140135 Ack=1721 Win=42368 Len=0 TSval=1443424288 TSecr=2672547296
>> 48 1.017856001 100.121.10.169 → 100.113.27.73 TLSv1.2 97 Encrypted Alert
>> 49 1.017893411 100.121.10.169 → 100.113.27.73 TCP 66 46098 → 3128 [FIN, ACK] Seq=1752 Ack=140136 Win=214656 Len=0 TSval=2672547298 TSecr=1443424288
>> 50 1.018002285 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 46098 [RST] Seq=140136 Win=0 Len=0
>> 51 1.018019806 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 46098 [RST] Seq=140136 Win=0 Len=0
>>
>> [RST, ACK]:
>>
>> 592 67.664585034 100.121.10.169 → 100.113.27.73 TLSv1.2 97 Encrypted Alert
>> 593 67.664737552 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 52202 [ACK] Seq=7973 Ack=1129 Win=42752 Len=0 TSval=1443490937 TSecr=2672613945
>> 594 67.664841613 100.121.10.169 → 100.113.27.73 TCP 66 52202 → 3128 [FIN, ACK] Seq=1129 Ack=7973 Win=42368 Len=0 TSval=2672613945 TSecr=1443490937
>> 595 67.664895660 100.113.27.73 → 100.121.10.169 TCP 66 3128 → 52202 [RST, ACK] Seq=7973 Ack=1129 Win=42752 Len=0 TSval=1443490937 TSecr=2672613945
>> 596 67.664936264 100.113.27.73 → 100.121.10.169 TCP 54 3128 → 52202 [RST] Seq=7973 Win=0 Len=0
>>
>> I'm wondering how to debug it (what exactly) and whether the reason
>> may be on the squid side (specific configuration?).
>>
>> env: GKE/k8s
>> client container: alpine linux
>> child squid container: alpine linux
>> version: 5.7
>>
>> Cheers,
>> Maciek Leks
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users