6.x gives frequent connection to peer failed - spurious?

For reasons I won't go into, we are running two copies of squid. One (the main squid) is client-facing and uses the other (the peer squid) as its upstream cache_peer, which acts as a non-caching fetcher.

Main squid is configured like this:

cache_peer 127.0.0.1 parent 8123 0 no-query no-digest no-netdb-exchange default name=127.0.0.1:8123
cache_peer_access 127.0.0.1:8123 allow all

Peer squid is configured like this:

unique_hostname webfilter.squidnc
pid_filename /var/run/squidnc.pid
http_port 127.0.0.1:8123
icp_port 0
snmp_port 0
no_cache deny all
cache_access_log none
cache_store_log none
cache_log /usr/local/squid/logs/nocache.log
cache_effective_user nobody
cache_effective_group wheel
logfile_rotate 0
http_access allow localhost
http_access deny all
cache_mgr nocache
hosts_file none
cache_mem 10 MB
cache_dir ufs /usr/local/squid/nocache 10 1 1 no-store
always_direct allow all

With 6.x (currently 6.5) there are very frequent (every 10 seconds or so) messages like:
2023/11/10 10:25:43 kid1| ERROR: Connection to 127.0.0.1:8123 failed
    current master transaction: master3692

With 4.x there were no such messages.

Comparing against the peer squid logs, these seem to tally with DNS failures:
peer_select.cc(479) resolveSelected: PeerSelector1688 found all 0 destinations for bugzilla.tucasi.com:443
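
If it would help to confirm that correlation, I assume I could raise just the DNS and peer-selection debug sections on the peer squid instead of running ALL,2 everywhere (section 44 is peer selection and section 78 is DNS lookups in Squid's debug-sections list; the levels here are only my guess at what would be useful):

debug_options ALL,1 44,3 78,3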

Full ALL,2 log at the time of the reported connection failure:

2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(214) doAccept: New connection on FD 17
2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(316) acceptNext: connection on conn3 local=127.0.0.1:8123 remote=[::] FD 17 flags=9
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1332) parseHttpRequest: HTTP Client conn13206 local=127.0.0.1:8123 remote=127.0.0.1:57843 FD 147 flags=1
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1336) parseHttpRequest: HTTP Client REQUEST:
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(683) clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 44,2| peer_select.cc(460) resolveSelected: Find IP destination for: bugzilla.tucasi.com:443' via bugzilla.tucasi.com
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(479) resolveSelected: PeerSelector1526 found all 0 destinations for bugzilla.tucasi.com:443
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(480) resolveSelected: always_direct = ALLOWED
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(481) resolveSelected: never_direct = DENIED
2023/11/10 10:25:43.163 kid1| 44,2| peer_select.cc(482) resolveSelected: timedout = 0
2023/11/10 10:25:43.163 kid1| 4,2| errorpage.cc(1397) buildBody: No existing error page language negotiated for ERR_DNS_FAIL. Using default error file.
2023/11/10 10:25:43.163 kid1| 33,2| client_side.cc(617) swanSong: conn13206 local=127.0.0.1:8123 remote=127.0.0.1:57843 flags=1

If my analysis is correct, why is this logged as a connection failure, and do I need to worry about it beyond it filling up the logs needlessly?

My concern is that this could lead to the parent being incorrectly declared DEAD, thus impacting other traffic:

2023/11/09 08:55:22 kid1| Detected DEAD Parent: 127.0.0.1:8123
    current master transaction: master4581234
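
If that risk is real, I assume the simplest mitigation on the main squid would be to raise connect-fail-limit on the cache_peer line (the default is 10, if I remember correctly), something like:

cache_peer 127.0.0.1 parent 8123 0 no-query no-digest no-netdb-exchange default connect-fail-limit=100 name=127.0.0.1:8123

but I would rather first understand why a DNS failure is being counted as a connection failure to the peer at all.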

--
Stephen


