On 2016-07-06 10:48, Moataz Elmasry wrote:
Hi all,
I'm trying to create a kind of captive portal where only my domain and
Google Play are whitelisted and all other addresses (http/https) are
forwarded to my domain.
All http requests land fine in the url_rewrite program, while the
https requests appear with only the IP address, not the DNS name.
I'm aware of http://wiki.squid-cache.org/Features/SslPeekAndSplice and
especially the note that during ssl_bump no DNS name is available yet
and that one should use the acl ssl::server_name directive instead,
but for some reason no https address is being sent to my url_rewrite
program.
The same SSL certificate used on my domain is also being used with
squid at https_port.
Here's my squid.conf
"
pinger_enable off
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl Safe_ports port 1025-65535 # unregistered ports
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
acl http dstdomain play.google.com mydomain.com
acl https ssl::server_name play.google.com mydomain.com
This is ... weird. There is nothing in the ACL matching which would
indicate it was HTTP vs HTTPS.
* dstdomain can match CONNECT tunnels transferring non-HTTP traffic
when the URI contains the specified domain. It only indicates that HTTP
was used by the client ... except for intercepted HTTPS traffic, where
it merely indicates that Squid itself is wrapping the inbound traffic
into an HTTP-compatible format before interpreting it. Squid sometimes
uses the TLS SNI value as the URI dstdomain.
-> unreliable.
* TLS SNI can contain the listed server name for non-HTTPS protocols.
-> unreliable.
http_access allow http
http_access allow https
* "http_access" means Squid is testing whether an HTTP protocol client
is allowed to use the proxy. The "http" URL contains HTTP protocol
matching. Which is okay, but see above about what the "dstdomain "value
could be.
* The "https" ACL contains TLS details matching - so is usually not
possible to even test like this.
* localnet and localhost are already allowed to do anything safe by the
earlier http_access rules. I doubt these confused matches are even
getting used.
url_rewrite_program /bin/bash -c -l /etc/squid/redirect.bash
url_rewrite_access allow all !http
url_rewrite_access allow all !https
Several problems here:
* "all" is only a meaningless waste of CPU time and memory in this
usage.
* "https" ACL probably is not possible to match. Rewriting of the *HTTP*
URL is a HTTP decision. Not TLS.
* The use of negation (!) means you have expicitly configured Squid
*not* to send any lookups to the helper when the ACL listed domain
name(s) are present in the HTTP request.
So you were asking why no requests with the domain name show up in the
helper?
Squid is obeying your explicit instructions not to send them.
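If the intent was the opposite - send everything *except* the
whitelisted domains to the helper - the rules would look more like
this (a sketch only, reusing the "http" ACL name from the config
above; first matching rule wins):

  url_rewrite_access deny http
  url_rewrite_access allow all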
sslcrtd_program /lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
http_access allow all
Not safe.
localnet and localhost are already allowed to do anything safe by the
earlier http_access rules. So you should not see a change if you set
this back to the "deny all" which it should be.
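i.e. the usual closing rule (a sketch):

  # last http_access line: deny anything not explicitly allowed above
  http_access deny all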
http_port 3127
http_port 3128 intercept
Not safe practice. Port 3128 is the officially registered Squid proxy
port and quite well known. There are several attacks that can be
performed if an attacker manages to identify which port number the
intercept port uses and connect to it directly. Use a randomly selected
other port number.
Same for 3129 below. It is used in our documentation as an example
only.
https_port 3129 intercept cert=mycert.cert key=mykey.key ssl-bump
intercept generate-host-certificates=on version=1
options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE cafile=Intermediate.crt
always_direct allow all
always_direct is not needed for SSL-Bump. It was a bug workaround needed
only for a very few releases many years ago now.
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump splice localhost
ssl_bump splice https
You are splicing traffic. This means there are no HTTPS messages
interpreted by Squid. Thus no possibility of your URL-rewrite helper
ever being even considered for use on them.
At best it might be considered for the CONNECT tunnel used by splice,
but that means the CONNECT URI has its domain set, the dstdomain would
match, and "!http" comes into effect to prevent the helper being asked.
ssl_bump peek step1
ssl_bump peek all
coredump_dir /var/cache/squid
"
So any idea why no https urls are being redirected to the url_rewrite
program?
Any alternative solution is also very much welcome.
1) If you really meant to detect HTTP vs HTTPS traffic. Use the proper
ACL definitions:
acl HTTP proto HTTP
acl HTTPS proto HTTPS
2) Most rewriters cannot correctly handle the URI type used on CONNECT
tunnels, and more importantly are not able to safely decide where to
redirect to even if they could produce the right URI output.
So, normal installations should block requests to your re-writer by
using the available "CONNECT" ACL like so:
url_rewrite_access deny CONNECT
However, if your rewriter is an exception and can actually divert whole
tunnels correctly (or knows to correctly return "ERR" and skip
re-writing), then use the method field it receives from Squid to have
it decide what to do.
3) If you want to rewrite or redirect https:// URLs ... in other words
modifying the HTTPS messages inside the crypto.
That requires "ssl_bump bump" action to be configured and the traffic
decrypted.
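A minimal sketch of what that could look like (reusing the step1 ACL
and whitelist domains from the config above; untested, and note that
bumping only works for clients which trust your CA certificate):

  acl step1 at_step SslBump1
  acl whitelist ssl::server_name play.google.com mydomain.com
  ssl_bump peek step1
  ssl_bump splice whitelist
  ssl_bump bump all

With the remaining traffic decrypted, the https:// URLs become visible
to the http_access and url_rewrite_access processing.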
HTH
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users