Re: IP detection when using SSL/HTTPS

On 06/05/11 22:55, Stefan Baur wrote:
Hi list,

I have been using the following squid.conf snippet for a while:

#----------------------------
acl thisisanip url_regex ^[^:]*://([^/@]*@)?[0-9\.]*(:|/|$|\?) ^[0-9\.]*$

acl whitelist dstdomain "/etc/squid/whitelist.txt"
acl whitelist_ip dst "/etc/squid/whitelist_ip.txt"

#Check IP Whitelist
http_access allow thisisanip whitelist_ip
http_access deny thisisanip

#Check Domain Whitelist
http_access allow whitelist

# And finally deny all other access to this proxy
http_access deny all
#----------------------------

I believe the url_regex snippet was even provided by Henrik in
<http://www.mail-archive.com/squid-users@xxxxxxxxxxxxxxx/msg26777.html>

The reason for adding the thisisanip acl was that squid took a loooooong
time accessing IPs.
I'm not *exactly* sure why, but I believe squid tries a reverse DNS
lookup for each IP and tries to compare the result with the names listed
in the domain-name-based whitelist, which is time-consuming, especially
if there is no name associated with the IP in question.
With the above setup, squid will check:
1) it is an IP and it is in the IP whitelist ==> Allow, no need for
DNS lookups
2) it is an IP ==> since it wasn't in the allowed list above, deny it,
no need for DNS lookups
3) it is a domain listed in the whitelist ==> Allow
4) catch-all ==> Deny
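
To give a concrete (made-up) example of what the two list files might
contain:

/etc/squid/whitelist_ip.txt (the "dst" list, one IP or subnet per line):
192.0.2.10
203.0.113.0/24

/etc/squid/whitelist.txt (the "dstdomain" list, one domain per line):
.example.com
.example.org

So http://192.0.2.10/ is allowed by 1), http://198.51.100.5/ is denied
by 2), and http://www.example.com/ is allowed by 3).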

This has worked like a charm so far, but now I am running into the issue
that I need SSL/HTTPS connects to IPs.
When using SSL/HTTPS, url_regex doesn't work.

Any suggestions on how I can emulate that behavior?

The ^[0-9\.]*$ is not quite correct for HTTPS/CONNECT. It needs to account for :port as well as IP.
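For example, an HTTPS request to an IP comes in as a CONNECT request,
and the "URL" squid matches the regex against is just host:port
(made-up address here):

  CONNECT 192.0.2.10:443 HTTP/1.1

so the regex sees "192.0.2.10:443", and ^[0-9\.]*$ cannot match that
because of the ":".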

Already or soon you will also be seeing IPv6 requests.

This is the current-day version of what Henrik posted:

acl thisisanip url_regex -i
  ^[^:]*://([^/@]*@)?\[?[0-9\.:a-f]*(\]|/|$|\?)
  ^[0-9\.:a-f]*$

(the line insisted on wrapping, so I've manually wrapped it at the only actual whitespace).
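
Dropped into your original config that would look something like this
(keep it on one actual line in squid.conf; the wrapping above is only
because of mail):

#----------------------------
acl thisisanip url_regex -i ^[^:]*://([^/@]*@)?\[?[0-9\.:a-f]*(\]|/|$|\?) ^[0-9\.:a-f]*$
#----------------------------

The rest of your whitelist rules can stay as they are.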

I understand that url_regex'ing is not supported because the URL may
contain sensitive information and/or is encrypted, and that's a Good
Thing [TM] - but I wouldn't need the entire URL anyway, just the host part.

- "sensitive information" claims are FUD. It is only (partially) relevant when logging the URL. And then only if the login password is sent in the clear.

- URL being encrypted inside HTTPS is only a problem if you are matching the path section (urlpath_regex). domain:port are still there.
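
For example, a CONNECT to a (made-up) domain still presents

  www.example.com:443

to the acl layer, so dstdomain and url_regex can match on it even
though the path and query inside the tunnel are encrypted.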


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1

