Re: squid 5 and parent peers


On 10/9/21 1:46 PM, Markus Moeller wrote:
> I am trying to find a way for Squid to "route" all Internet
> domains to a default proxy and a subset of well-defined domains to the
> "special" proxy (while having "internal" traffic, based on IP ranges,
> go direct)

Assuming the latter conditions override the former ones, the part that
remains unclear is what you want Squid to do when the request does not
match any of the three conditions above. For example, consider a
request whose destination is an IP address that is not in the "go
direct" ranges and whose reverse DNS lookup fails, so there is no
"domain" for the proxy selection rules to match.

A similar question is what Squid should do with domain names that do
not resolve to an IP address. Since Squid is configured to use parent
proxies, it could let those proxies try to resolve the domain name,
blindly assuming that the resolution at a parent proxy will not match
one of the "go direct" IPs (a match would suggest that the decision to
go to a parent proxy was wrong in the first place!).
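
FWIW, the two ACL types you already use behave differently here (my
reading of the documented semantics, shown only as an illustration): a
"dst" ACL needs the destination resolved to an IP address before it
can match, while "dstdomain -n" matches on the host name from the
request alone, so a name that does not resolve locally can still be
routed by domain and handed to a parent that may resolve it itself.

  acl localdst dst 10.0.0.0/8           # needs the destination resolved to an IP
  acl google dstdomain -n .google.com   # matches the URL host name, no DNS lookup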

The final set of questions deals with HTTPS traffic. For example, if
clients send HTTPS requests, are you OK with Squid making routing
decisions based on the target of the initial CONNECT request?
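
In a plain forward-proxy setup (no SSL bumping), the CONNECT target
(typically host:port from the request line) is the only routing input
Squid has for HTTPS. A small sketch of what that implies for the ACLs
you already have (annotations are mine):

  # an HTTPS request arrives as e.g. "CONNECT www.google.com:443", so
  # the same dstdomain ACL drives peer selection; the inner URL stays
  # encrypted and is never visible to this Squid
  acl google dstdomain -n .google.com
  cache_peer_access authproxy.example.com allow google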


> Thank you for spotting the "!". I got confused with the combinations
> of the never/always direct statements.

Does your test case work after removing that "!"? If not, please share
the updated debugging snippets.
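
For reference, here is a minimal sketch of how I read the intended
direct/peer rules once the "!" is dropped (an assumption on my part,
adjust to your actual policy):

  always_direct allow localdst    # internal destinations go direct
  always_direct deny all
  never_direct deny localdst      # everything else must use a parent
  never_direct allow all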


Thank you,

Alex.


>>> ....
>>> # Example rule allowing access from your local networks.
>>> # Adapt to list your (internal) IP networks from where browsing
>>> # should be allowed
>>> #acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
>>> acl localnet src 10.0.0.0/8             # RFC 1918 local private network (LAN)
>>> acl localnet src 100.64.0.0/10          # RFC 6598 shared address space (CGN)
>>> acl localnet src 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
>>> acl localnet src 172.16.0.0/12          # RFC 1918 local private network (LAN)
>>> acl localnet src 192.168.0.0/16         # RFC 1918 local private network (LAN)
>>> acl localnet src fc00::/7               # RFC 4193 local private network range
>>> acl localnet src fe80::/10              # RFC 4291 link-local (directly plugged) machines
>>>
>>> #acl localdst dst 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
>>> acl localdst dst 10.0.0.0/8             # RFC 1918 local private network (LAN)
>>> acl localdst dst 100.64.0.0/10          # RFC 6598 shared address space (CGN)
>>> acl localdst dst 169.254.0.0/16         # RFC 3927 link-local (directly plugged) machines
>>> acl localdst dst 172.16.0.0/12          # RFC 1918 local private network (LAN)
>>> acl localdst dst 192.168.0.0/16         # RFC 1918 local private network (LAN)
>>> acl localdst dst fc00::/7               # RFC 4193 local private network range
>>> acl localdst dst fe80::/10              # RFC 4291 link-local (directly plugged) machines
>>>
>>> acl google dstdomain -n .google.com
>>>
>>> cache_peer internetproxy.example.com parent 8080 0 no-query no-digest no-netdb-exchange default
>>> cache_peer authproxy.example.com parent 8080 0 no-query no-digest no-netdb-exchange default login=NEGOTIATE auth-no-keytab
>>> # Only google to auth proxy
>>> cache_peer_access authproxy.example.com deny localdst
>>> cache_peer_access authproxy.example.com allow google
>>> cache_peer_access authproxy.example.com deny all
>>> # All other external domains
>>> cache_peer_access internetproxy.example.com deny localdst
>>> cache_peer_access internetproxy.example.com deny google
>>> cache_peer_access internetproxy.example.com allow all
>>> # Local goes direct
>>> always_direct allow localdst
>>> always_direct deny all
>>> never_direct deny !localdst
>>> never_direct allow all
>>>
>>> debug_options 44,10 11,20
>>>
>>> ....
>>>
>>> The first test looked fine:
>>>
>>> #curl -vvv -x http://localhost:3128 http://www.google.com
>>> * Uses proxy env variable no_proxy == 'localhost, 127.0.0.1'
>>> *   Trying 127.0.0.1:3128...
>>> * Connected to localhost (127.0.0.1) port 3128 (#0)
>>>> GET http://www.google.com/ HTTP/1.1
>>>> Host: www.google.com
>>>> User-Agent: curl/7.75.0
>>>> Accept: */*
>>>> Proxy-Connection: Keep-Alive
>>>>
>>> * Mark bundle as not supporting multiuse
>>> < HTTP/1.1 301 Moved Permanently
>>> < Location: https://www.google.com/
>>> < Content-Length: 0
>>> < Date: Sat, 09 Oct 2021 12:29:23 GMT
>>> < X-Cache: MISS from clientproxy
>>> < X-Cache-Lookup: MISS from clientproxy:3128
>>> < Connection: keep-alive
>>> <
>>> * Connection #0 to host localhost left intact
>>>
>>>
>>> Second request failed with a cache error:
>>>
>>>
>>> #curl -vvv -x http://localhost:3128 http://www.google.com
>>> * Uses proxy env variable no_proxy == 'localhost, 127.0.0.1'
>>> *   Trying 127.0.0.1:3128...
>>> * Connected to localhost (127.0.0.1) port 3128 (#0)
>>>> GET http://www.google.com/ HTTP/1.1
>>>> Host: www.google.com
>>>> User-Agent: curl/7.75.0
>>>> Accept: */*
>>>> Proxy-Connection: Keep-Alive
>>>>
>>> * Mark bundle as not supporting multiuse
>>> < HTTP/1.1 503 Service Unavailable
>>> < Server: squid/5.1-VCS
>>> < Mime-Version: 1.0
>>> < Date: Sat, 09 Oct 2021 12:30:27 GMT
>>> < Content-Type: text/html;charset=utf-8
>>> < Content-Length: 3573
>>> < X-Squid-Error: ERR_CONNECT_FAIL 110
>>> < Vary: Accept-Language
>>> < Content-Language: en
>>> < X-Cache: MISS from clientproxy
>>> < X-Cache-Lookup: MISS from clientproxy:3128
>>> < Connection: keep-alive
>>> <
>>> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
>>> http://www.w3.org/TR/html4/strict.dtd>
>>> <html><head>
>>> <meta type="copyright" content="Copyright (C) 1996-2021 The Squid
>>> Software Foundation and contributors">
>>> <meta http-equiv="Content-Type" CONTENT="text/html; charset=utf-8">
>>> <title>ERROR: The requested URL could not be retrieved</title>
>>> .....
>>>
>>>
>>> The cache log says:
>>>
>>> 2021/10/09 13:29:23.520 kid1| 11,2| client_side.cc(1353)
>>> parseHttpRequest: HTTP Client conn10 local=127.0.0.1:3128
>>> remote=127.0.0.1:45192 FD 12 flags=1
>>> 2021/10/09 13:29:23.520 kid1| 11,2| client_side.cc(1354)
>>> parseHttpRequest: HTTP Client REQUEST:
>>> ---------
>>> GET http://www.google.com/ HTTP/1.1
>>> Host: www.google.com
>>> User-Agent: curl/7.75.0
>>> Accept: */*
>>> Proxy-Connection: Keep-Alive
>>>
>>>
>>> ----------
>>> 2021/10/09 13:29:23.520 kid1| 44,3| peer_select.cc(309) peerSelect:
>>> e:=IV/0x12e63f0*2 http://www.google.com/
>>> 2021/10/09 13:29:23.520 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.520 kid1| 44,3| peer_select.cc(612) selectMore: GET
>>> www.google.com
>>> 2021/10/09 13:29:23.520 kid1| 44,3| peer_select.cc(617) selectMore:
>>> direct = DIRECT_UNKNOWN (always_direct to be checked)
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(373)
>>> checkAlwaysDirectDone: DENIED
>>> 2021/10/09 13:29:23.523 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(612) selectMore: GET
>>> www.google.com
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(626) selectMore:
>>> direct = DIRECT_UNKNOWN (never_direct to be checked)
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(345)
>>> checkNeverDirectDone: DENIED
>>> 2021/10/09 13:29:23.523 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(612) selectMore: GET
>>> www.google.com
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(577)
>>> checkNetdbDirect: MY RTT = 0 msec
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(578)
>>> checkNetdbDirect: minimum_direct_rtt = 400 msec
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(585)
>>> checkNetdbDirect: MY hops = 0
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(586)
>>> checkNetdbDirect: minimum_direct_hops = 4
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(647) selectMore:
>>> direct = DIRECT_MAYBE (default)
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(650) selectMore:
>>> direct = DIRECT_MAYBE
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(286)
>>> peerSelectIcpPing: http://www.google.com/
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(297)
>>> peerSelectIcpPing: counted 0 neighbors
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(833)
>>> selectSomeParent: GET www.google.com
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(1098) addSelection:
>>> adding FIRSTUP_PARENT/authproxy.example.com
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(1091) addSelection:
>>> skipping ANY_OLD_PARENT/authproxy.example.com; have
>>> FIRSTUP_PARENT/authproxy.example.com
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(1091) addSelection:
>>> skipping DEFAULT_PARENT/authproxy.example.com; have
>>> FIRSTUP_PARENT/authproxy.example.com
>>> 2021/10/09 13:29:23.523 kid1| 44,3| peer_select.cc(1098) addSelection:
>>> adding HIER_DIRECT#www.google.com
>>> 2021/10/09 13:29:23.523 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.523 kid1| 44,2| peer_select.cc(460) resolveSelected:
>>> Find IP destination for: http://www.google.com/' via
>>> authproxy.example.com
>>> 2021/10/09 13:29:23.523 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.523 kid1| 44,2| peer_select.cc(1171) handlePath:
>>> PeerSelector1 found conn11 local=0.0.0.0 remote=10.20.1.1:8080
>>> FIRSTUP_PARENT flags=1, destination #1 for http://www.google.com/
>>> 2021/10/09 13:29:23.523 kid1| 44,2| peer_select.cc(1177) handlePath:
>>> always_direct = DENIED
>>> 2021/10/09 13:29:23.523 kid1| 44,2| peer_select.cc(1178) handlePath:
>>> never_direct = DENIED
>>> 2021/10/09 13:29:23.523 kid1| 44,2| peer_select.cc(1179) handlePath:
>>> timedout = 0
>>> 2021/10/09 13:29:23.523 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.523 kid1| 11,7| HttpRequest.cc(468) clearError: old:
>>> ERR_NONE
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(460) resolveSelected:
>>> Find IP destination for: http://www.google.com/' via www.google.com
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(1171) handlePath:
>>> PeerSelector1 found conn12 local=0.0.0.0 remote=172.217.23.100:80
>>> HIER_DIRECT flags=1, destination #2 for http://www.google.com/
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(1177) handlePath:
>>> always_direct = DENIED
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(1178) handlePath:
>>> never_direct = DENIED
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(1179) handlePath:
>>> timedout = 0
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(479) resolveSelected:
>>> PeerSelector1 found all 2 destinations for http://www.google.com/
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(480) resolveSelected:
>>> always_direct = DENIED
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(481) resolveSelected:
>>> never_direct = DENIED
>>> 2021/10/09 13:29:23.524 kid1| 44,2| peer_select.cc(482) resolveSelected:
>>> timedout = 0
>>> 2021/10/09 13:29:23.524 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector1
>>> 2021/10/09 13:29:23.524 kid1| 44,3| peer_select.cc(241) ~PeerSelector:
>>> http://www.google.com/
>>> 2021/10/09 13:29:23.526 kid1| 11,4| HttpRequest.cc(453) prepForPeering:
>>> 0x1154cf0 to authproxy.example.com proxy
>>> 2021/10/09 13:29:23.526 kid1| 11,3| http.cc(2486) httpStart: GET
>>> http://www.google.com/
>>> 2021/10/09 13:29:23.527 kid1| 11,5| http.cc(87) HttpStateData:
>>> HttpStateData 0x12e9988 created
>>> 2021/10/09 13:29:23.527 kid1| 11,5| http.cc(2367) sendRequest: conn13
>>> local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13
>>> flags=1, request 0x1154cf0*6, this 0x12e9988.
>>> 2021/10/09 13:29:23.527 kid1| 11,5| AsyncCall.cc(29) AsyncCall: The
>>> AsyncCall HttpStateData::httpTimeout constructed, this=0x12e8920
>>> [call65]
>>> 2021/10/09 13:29:23.527 kid1| 11,8| http.cc(1656)
>>> maybeMakeSpaceAvailable: may read up to 65536 bytes info buf(0/65536)
>>> from conn13 local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT
>>> FD 13 flags=1
>>> 2021/10/09 13:29:23.527 kid1| 11,5| AsyncCall.cc(29) AsyncCall: The
>>> AsyncCall HttpStateData::readReply constructed, this=0x12f9c10 [call66]
>>> 2021/10/09 13:29:23.527 kid1| 11,5| AsyncCall.cc(29) AsyncCall: The
>>> AsyncCall HttpStateData::wroteLast constructed, this=0x12f9cc0 [call67]
>>> 2021/10/09 13:29:23.527 kid1| 11,8| http.cc(2309) decideIfWeDoRanges:
>>> decideIfWeDoRanges: range specs: 0, cachable: 1; we_do_ranges: 0
>>> 2021/10/09 13:29:23.527 kid1| 11,5| http.cc(2113)
>>> copyOneHeaderFromClientsideRequestToUpstreamRequest:
>>> httpBuildRequestHeader: User-Agent: curl/7.75.0
>>> 2021/10/09 13:29:23.527 kid1| 11,5| http.cc(2113)
>>> copyOneHeaderFromClientsideRequestToUpstreamRequest:
>>> httpBuildRequestHeader: Accept: */*
>>> 2021/10/09 13:29:23.527 kid1| 11,5| http.cc(2113)
>>> copyOneHeaderFromClientsideRequestToUpstreamRequest:
>>> httpBuildRequestHeader: Proxy-Connection: Keep-Alive
>>> 2021/10/09 13:29:23.527 kid1| 11,5| http.cc(2113)
>>> copyOneHeaderFromClientsideRequestToUpstreamRequest:
>>> httpBuildRequestHeader: Host: www.google.com
>>> 2021/10/09 13:29:23.527 kid1| 11,5| peer_proxy_negotiate_auth.cc(539)
>>> peer_proxy_negotiate_auth: Import gss name
>>> 2021/10/09 13:29:23.527 kid1| 11,5| peer_proxy_negotiate_auth.cc(546)
>>> peer_proxy_negotiate_auth: Initialize gss security context
>>> 2021/10/09 13:29:23.531 kid1| 11,5| peer_proxy_negotiate_auth.cc(560)
>>> peer_proxy_negotiate_auth: Got token with length 2568
>>> 2021/10/09 13:29:23.531 kid1| 11,2| http.cc(2442) sendRequest: HTTP
>>> Server conn13 local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT
>>> FD 13 flags=1
>>> 2021/10/09 13:29:23.531 kid1| 11,2| http.cc(2443) sendRequest: HTTP
>>> Server REQUEST:
>>> ---------
>>> GET http://www.google.com/ HTTP/1.1
>>> User-Agent: curl/7.75.0
>>> Accept: */*
>>> Host: www.google.com
>>> Proxy-Authorization: Negotiate YIIK....
>>> Cache-Control: max-age=259200
>>> Connection: keep-alive
>>>
>>>
>>> ----------
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncCall.cc(96) ScheduleCall:
>>> IoCallback.cc(131) will call HttpStateData::wroteLast(conn13
>>> local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13
>>> flags=1, data=0x12e9988) [call67]
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncCallQueue.cc(59) fireNext:
>>> entering HttpStateData::wroteLast(conn13 local=10.10.1.1:36928
>>> remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13 flags=1, data=0x12e9988)
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncCall.cc(41) make: make call
>>> HttpStateData::wroteLast [call67]
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncJob.cc(122) callStart:
>>> HttpStateData status in: [ job8]
>>> 2021/10/09 13:29:23.531 kid1| 11,5| http.cc(1667) wroteLast: conn13
>>> local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13
>>> flags=1: size 3611: errflag 0.
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncCall.cc(29) AsyncCall: The
>>> AsyncCall HttpStateData::httpTimeout constructed, this=0xe34fa0 [call69]
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncJob.cc(153) callEnd:
>>> HttpStateData status out: [ job8]
>>> 2021/10/09 13:29:23.531 kid1| 11,5| AsyncCallQueue.cc(61) fireNext:
>>> leaving HttpStateData::wroteLast(conn13 local=10.10.1.1:36928
>>> remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13 flags=1, data=0x12e9988)
>>> 2021/10/09 13:29:23.615 kid1| 11,5| AsyncCall.cc(96) ScheduleCall:
>>> IoCallback.cc(131) will call HttpStateData::readReply(conn13
>>> local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13
>>> flags=1, data=0x12e9988) [call66]
>>> 2021/10/09 13:29:23.615 kid1| 11,5| AsyncCallQueue.cc(59) fireNext:
>>> entering HttpStateData::readReply(conn13 local=10.10.1.1:36928
>>> remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13 flags=1, data=0x12e9988)
>>> 2021/10/09 13:29:23.615 kid1| 11,5| AsyncCall.cc(41) make: make call
>>> HttpStateData::readReply [call66]
>>> 2021/10/09 13:29:23.615 kid1| 11,5| AsyncJob.cc(122) callStart:
>>> HttpStateData status in: [ job8]
>>> 2021/10/09 13:29:23.615 kid1| 11,5| http.cc(1215) readReply: conn13
>>> local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13 flags=1
>>> 2021/10/09 13:29:23.615 kid1| ctx: enter level  0:
>>> 'http://www.google.com/'
>>> 2021/10/09 13:29:23.615 kid1| 11,3| http.cc(666) processReplyHeader:
>>> processReplyHeader: key '0200000000000000843D000001000000'
>>> 2021/10/09 13:29:23.615 kid1| 11,2| http.cc(720) processReplyHeader:
>>> HTTP Server conn13 local=10.10.1.1:36928 remote=10.20.1.1:8080
>>> FIRSTUP_PARENT FD 13 flags=1
>>> 2021/10/09 13:29:23.615 kid1| 11,2| http.cc(721) processReplyHeader:
>>> HTTP Server RESPONSE:
>>> ---------
>>> HTTP/1.1 301 Moved Permanently
>>> Location: https://www.google.com/
>>> Content-Length: 0
>>> Proxy-Connection: Keep-Alive
>>>
>>> ----------
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(119) setVirginReply:
>>> 0x12e9988 setting virgin reply to 0x12fa850
>>> 2021/10/09 13:29:23.616 kid1| ctx: exit level  0
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(973) adaptOrFinalizeReply:
>>> adaptationAccessCheckPending=0
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(139) setFinalReply:
>>> 0x12e9988 setting final reply to 0x12fa850
>>> 2021/10/09 13:29:23.616 kid1| ctx: enter level  0:
>>> 'http://www.google.com/'
>>> 2021/10/09 13:29:23.616 kid1| 11,3| http.cc(979) haveParsedReplyHeaders:
>>> HTTP CODE: 301
>>> 2021/10/09 13:29:23.616 kid1| 11,3| http.cc(1054)
>>> haveParsedReplyHeaders: decided: do not cache but share because refresh
>>> check returned non-cacheable; HTTP status 301 e:=p2XIV/0x12e63f0*3
>>> 2021/10/09 13:29:23.616 kid1| ctx: exit level  0
>>> 2021/10/09 13:29:23.616 kid1| 11,2| Stream.cc(279) sendStartOfMessage:
>>> HTTP Client conn10 local=127.0.0.1:3128 remote=127.0.0.1:45192 FD 12
>>> flags=1
>>> 2021/10/09 13:29:23.616 kid1| 11,2| Stream.cc(280) sendStartOfMessage:
>>> HTTP Client REPLY:
>>> ---------
>>> HTTP/1.1 301 Moved Permanently
>>> Location: https://www.google.com/
>>> Content-Length: 0
>>> Date: Sat, 09 Oct 2021 12:29:23 GMT
>>> X-Cache: MISS from clientproxy
>>> X-Cache-Lookup: MISS from clientproxy:3128
>>> Connection: keep-alive
>>>
>>>
>>> ----------
>>> 2021/10/09 13:29:23.616 kid1| 11,5| http.cc(1491) processReplyBody:
>>> adaptationAccessCheckPending=0
>>> 2021/10/09 13:29:23.616 kid1| 11,3| http.cc(1154) persistentConnStatus:
>>> conn13 local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13
>>> flags=1 eof=0
>>> 2021/10/09 13:29:23.616 kid1| 11,5| http.cc(1174) persistentConnStatus:
>>> persistentConnStatus: content_length=0
>>> 2021/10/09 13:29:23.616 kid1| 11,5| http.cc(1178) persistentConnStatus:
>>> persistentConnStatus: clen=0
>>> 2021/10/09 13:29:23.616 kid1| 11,5| http.cc(1537) processReplyBody:
>>> processReplyBody: COMPLETE_PERSISTENT_MSG from conn13
>>> local=10.10.1.1:36928 remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13 flags=1
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(162) serverComplete:
>>> serverComplete 0x12e9988
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(184) serverComplete2:
>>> serverComplete2 0x12e9988
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(212) completeForwarding:
>>> completing forwarding for 0x12e6e28*2
>>> 2021/10/09 13:29:23.616 kid1| 11,5| Client.cc(586) cleanAdaptation:
>>> cleaning ICAP; ACL: 0
>>> 2021/10/09 13:29:23.616 kid1| 11,5| http.cc(134) ~HttpStateData:
>>> HttpStateData 0x12e9988 destroyed;
>>> 2021/10/09 13:29:23.616 kid1| 11,5| AsyncCallQueue.cc(61) fireNext:
>>> leaving HttpStateData::readReply(conn13 local=10.10.1.1:36928
>>> remote=10.20.1.1:8080 FIRSTUP_PARENT FD 13 flags=1, data=0x12e9988)
>>> 2021/10/09 13:29:27.287 kid1| 11,2| client_side.cc(1353)
>>> parseHttpRequest: HTTP Client conn15 local=127.0.0.1:3128
>>> remote=127.0.0.1:45219 FD 12 flags=1
>>> 2021/10/09 13:29:27.287 kid1| 11,2| client_side.cc(1354)
>>> parseHttpRequest: HTTP Client REQUEST:
>>> ---------
>>> GET http://www.google.com/ HTTP/1.1
>>> Host: www.google.com
>>> User-Agent: curl/7.75.0
>>> Accept: */*
>>> Proxy-Connection: Keep-Alive
>>>
>>>
>>> ----------
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(309) peerSelect:
>>> e:=IV/0x12e63f0*2 http://www.google.com/
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(612) selectMore: GET
>>> www.google.com
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(617) selectMore:
>>> direct = DIRECT_UNKNOWN (always_direct to be checked)
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(373)
>>> checkAlwaysDirectDone: DENIED
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(612) selectMore: GET
>>> www.google.com
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(626) selectMore:
>>> direct = DIRECT_UNKNOWN (never_direct to be checked)
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(345)
>>> checkNeverDirectDone: DENIED
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(612) selectMore: GET
>>> www.google.com
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(577)
>>> checkNetdbDirect: MY RTT = 1 msec
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(578)
>>> checkNetdbDirect: minimum_direct_rtt = 400 msec
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(644) selectMore:
>>> direct = DIRECT_YES (checkNetdbDirect)
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(650) selectMore:
>>> direct = DIRECT_YES
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(1098) addSelection:
>>> adding HIER_DIRECT#www.google.com
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(460) resolveSelected:
>>> Find IP destination for: http://www.google.com/' via www.google.com
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(1171) handlePath:
>>> PeerSelector2 found conn16 local=0.0.0.0 remote=172.217.23.100:80
>>> HIER_DIRECT flags=1, destination #1 for http://www.google.com/
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(1177) handlePath:
>>> always_direct = DENIED
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(1178) handlePath:
>>> never_direct = DENIED
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(1179) handlePath:
>>> timedout = 0
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 11,7| HttpRequest.cc(468) clearError: old:
>>> ERR_NONE
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(479) resolveSelected:
>>> PeerSelector2 found all 1 destinations for http://www.google.com/
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(480) resolveSelected:
>>> always_direct = DENIED
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(481) resolveSelected:
>>> never_direct = DENIED
>>> 2021/10/09 13:29:27.287 kid1| 44,2| peer_select.cc(482) resolveSelected:
>>> timedout = 0
>>> 2021/10/09 13:29:27.287 kid1| 44,7| peer_select.cc(1149)
>>> interestedInitiator: PeerSelector2
>>> 2021/10/09 13:29:27.287 kid1| 44,3| peer_select.cc(241) ~PeerSelector:
>>> http://www.google.com/
>>> 2021/10/09 13:30:27.421 kid1| 11,2| Stream.cc(279) sendStartOfMessage:
>>> HTTP Client conn15 local=127.0.0.1:3128 remote=127.0.0.1:45219 FD 12
>>> flags=1
>>> 2021/10/09 13:30:27.421 kid1| 11,2| Stream.cc(280) sendStartOfMessage:
>>> HTTP Client REPLY:
>>> ---------
>>> HTTP/1.1 503 Service Unavailable
>>> Server: squid/5.1-VCS
>>> Mime-Version: 1.0
>>> Date: Sat, 09 Oct 2021 12:30:27 GMT
>>> Content-Type: text/html;charset=utf-8
>>> Content-Length: 3573
>>> X-Squid-Error: ERR_CONNECT_FAIL 110
>>> Vary: Accept-Language
>>> Content-Language: en
>>> X-Cache: MISS from clientproxy
>>> X-Cache-Lookup: MISS from clientproxy:3128
>>> Connection: keep-alive
>>>
>>>
>>> ----------
>>>
>>>
>>>
>>>
>>>
>>>
>>> Thank you
>>> Markus
>>>
>>>
>>>
>>>
>>>
>>> "Markus Moeller"  wrote in message news:sjrrhc$lat$1@xxxxxxxxxxxxx...
>>>
>>> I understand now better the concept.
>>>
>>> Thank you
>>> Markus
>>>
>>>
>>
> 
> Markus
> 

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



