
Re: Squid fails to bump where there are too many DNS names in SAN field

With debug_options ALL,9 set and retrieving just this page via an explicit CONNECT request, I found the relevant log lines below; how I reproduced this is sketched first, then the logs:
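A minimal sketch of the reproduction (the proxy address and port are placeholders for my setup, and curl is just one convenient way to issue an explicit CONNECT):

  # squid.conf
  debug_options ALL,9

  # explicit CONNECT through the proxy (address/port are placeholders)
  curl --proxy http://10.240.180.31:3128 https://www.extremetech.com/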

2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005084 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: TLSPlaintext.type=22 occupying 1 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: TLSPlaintext.version.major=3 occupying 1 bytes @92 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: TLSPlaintext.version.minor=3 occupying 1 bytes @93 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: TLSPlaintext.fragment.length=16384 occupying 2 bytes @94 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(38) SBuf: SBuf6005085 created from id SBuf6005054
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(74) got: TLSPlaintext.fragment.octets= <16384 OCTET Bytes fit here> 
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005085 destructed
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(57) got: TLSPlaintext occupying 16389 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 16384 for SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(886) cow: SBuf6005052 new size:16470
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6005052 new size: 16470
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, this=0x1dd2860 id=blob1555829 reserveSize=16470
2018/09/04 12:45:46.112 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555829 memAlloc: requested=16470, received=16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6005052 new store capacity: 16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(85) assign: assigning SBuf6005056 from SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(82) ~MemBlob: destructed, this=0x1dd27a0 id=blob1555826 capacity=65535 size=8208
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005086 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: Handshake.msg_type=11 occupying 1 bytes @86 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: Handshake.msg_body.length=16900 occupying 3 bytes @87 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,5| BinaryTokenizer.cc(47) want: 520 more bytes for Handshake.msg_body.octets occupying 16900 bytes @90 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005086 destructed
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005084 destructed
2018/09/04 12:45:46.112 kid1| 83,5| Handshake.cc(532) parseHello: need more data
2018/09/04 12:45:46.112 kid1| 83,7| bio.cc(168) stateChanged: FD 15 now: 0x1002 23RSHA (SSLv2/v3 read server hello A)
2018/09/04 12:45:46.112 kid1| 83,5| PeerConnector.cc(451) noteWantRead: local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1
2018/09/04 12:45:46.112 kid1| 5,3| comm.cc(559) commSetConnTimeout: local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1 timeout 60
2018/09/04 12:45:46.112 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 15, type=1, handler=1, client_data=0x2818f58, timeout=0
2018/09/04 12:45:46.112 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0x2818f58
2018/09/04 12:45:46.112 kid1| 83,7| AsyncJob.cc(154) callEnd: Ssl::PeekingPeerConnector status out: [ FD 15 job194701]
2018/09/04 12:45:46.112 kid1| 83,7| AsyncCallQueue.cc(57) fireNext: leaving Security::PeerConnector::negotiate()
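If I am reading the tokenizer output right, the interesting part is the size mismatch: the handshake-level buffer holds 16470 bytes (86 bytes carried over from the earlier messages plus this 16384-byte record fragment), while the Certificate message (msg_type=11) declares a 16900-byte body starting at offset 90. Only 16470 - 90 = 16380 bytes of that body are available, so the parser asks for the remaining 16900 - 16380 = 520 bytes; in other words, the certificate chain with its big SAN list does not fit into a single TLS record and parsing has to wait for the next one.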
Later on, after about 12 seconds:

2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart: Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0xf67698
2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate: SSL_connect session=0x122c430
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 65535 for SBuf6002798
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(886) cow: SBuf6002798 new size:82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6002798 new size: 82887
2018/09/04 12:45:58.124 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, this=0x1dd27a0 id=blob1555830 reserveSize=82887
2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830 memAlloc: requested=82887, received=82887
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new store capacity: 82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798 start appending up to 65535 bytes
2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535
2018/09/04 12:45:58.124 kid1| 83,5| NegotiationHistory.cc(83) retrieveNegotiatedInfo: SSL connection info on FD 12 SSL version NONE/0.0 negotiated cipher
2018/09/04 12:45:58.124 kid1| ERROR: negotiating TLS on FD 12: error:00000000:lib(0):func(0):reason(0) (5/0/0)
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(256) cbdataInternalAlloc: Allocating 0x110b508
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(351) cbdataInternalLock: 0x110b508=1
2018/09/04 12:45:58.125 kid1| 83,5| PeerConnector.cc(559) callBack: TLS setup ended for local=10.240.180.31:43674 remote=103.243.13.183:443 FD 12 flags=1
2018/09/04 12:45:58.125 kid1| 5,5| comm.cc(1030) comm_remove_close_handler: comm_remove_close_handler: FD 12, AsyncCall=0x1635fc0*2
2018/09/04 12:45:58.125 kid1| 9,5| AsyncCall.cc(56) cancel: will not call Security::PeerConnector::commCloseHandler [call2844544] because comm_remove_close_handler
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0x1f6b778
2018/09/04 12:45:58.125 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: PeerConnector.cc(572) will call FwdState::ConnectedToPeer(0x1f6b778, local=10.240.180.31:43674 remote=103.243.13.183:443 FD 12 flags=1, 0x110b508/0x110b508) [call2844542]
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 0xf67698
2018/09/04 12:45:58.125 kid1| 93,5| AsyncJob.cc(139) callEnd: Security::PeerConnector::negotiate() ends job [ FD 12 job194686]
2018/09/04 12:45:58.125 kid1| 83,5| PeerConnector.cc(48) ~PeerConnector: Security::PeerConnector destructed, this=0xf67698
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 0xf67698=2
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 0xf67698=1
2018/09/04 12:45:58.125 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob destructed, this=0xf67750 type=Ssl::PeekingPeerConnector [job194686]
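For what it's worth, the bio.cc(140) line shows the read on FD 12 returning 0 bytes, which I take to mean the remote end closed that connection roughly 12 seconds in, while Squid was still waiting for more handshake data; the (5/0/0) error then looks like OpenSSL reporting that unexpected EOF rather than a real TLS alert, though I may be misreading it.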

Again, as this is an explicit CONNECT request, I do get ERR_CANNOT_FORWARD, and that error page is served with a certificate for www.extremetech.com signed by my internal CA with nothing in the SAN field, so I am guessing ssl_crtd isn't crashing here, unlike in the previous bug report.
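For reference, I looked at the certificate on that error page with something along these lines (the proxy address and port are placeholders for my setup, and -proxy needs OpenSSL 1.1.0 or newer):

  openssl s_client -proxy 10.240.180.31:3128 -connect www.extremetech.com:443 \
      -servername www.extremetech.com </dev/null 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'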
Anything from these log lines?

Regards,
Sarfaraz


-----Original Message-----
From: squid-users <squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of Amos Jeffries
Sent: Tuesday, September 4, 2018 10:10 AM
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re:  Squid fails to bump where there are too many DNS names in SAN field

On 4/09/18 10:39 AM, Alex Rousskov wrote:
> On 09/03/2018 01:34 AM, Ahmad, Sarfaraz wrote:
> 
>> interception/MITM appears to fail where remote certificates from 
>> origin servers have way too many dnsnames in the SAN field.
>>
>> I have noticed this behavior with at least these 2 websites. In both 
>> the cases, my setup would be bumping the connections.
>>
>> https://www.pcmag.com/
>> https://www.extremetech.com/
> 
>> I will have to file a bug?
> 

Does it look like a recurrence of this bug?
 <https://bugs.squid-cache.org/show_bug.cgi?id=3665>

We never had concrete confirmation that the exact issue was permanently gone; it may have just shifted to larger, more obscure SAN field values.
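If you want to gauge that, something along these lines should give a rough count of the SAN entries on the origin certificate (untested, off the top of my head; the -ext option needs OpenSSL 1.1.1 or newer):

  openssl s_client -connect www.pcmag.com:443 -servername www.pcmag.com </dev/null 2>/dev/null \
    | openssl x509 -noout -ext subjectAltName | tr ',' '\n' | wc -l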


Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



