On 09/11/17 11:34, A. Benz wrote:
Hi Amos,
Many thanks for your detailed reply.
I have modified the config following your comments. Before you see the
new config (attached below), please let me know your thoughts on the
following:
1.
> The workarounds and gotchas listed at
> <https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery> are the
> best you can hope for there. The most successful all-round solution is
> to increase EDNS0 capabilities.
My particular case involves a single server only, a corporate email
server. This server is publicly accessible from the internet (and has a
valid signed SSL cert). Now, at the remote location, there's a VPN
setup that redirects access to the mail server to a private IP, e.g.
10.x.x.x (and this differs depending on the load-balancing decision).
Do you mean the VPN exit point has that 10/8 IP address? Or that the
traffic from the client is altered to be going to that IP before it
reaches Squid?
The latter is broken because it destroys the original dst-IP values on
the TCP connection, which Squid needs in order to set up the server
connection.
Without Squid, I can connect to webmail, but with Squid I get the
forgery error. Does EDNS0 fix this? You see, it's almost working
exactly as I need now, except for access to this single domain. So if
there's a workaround (even if it requires a recompile) to ignore this
single domain, do let me know.
EDNS0 fixes problems with services that load balance by rotating the IP
addresses delivered in response to A/AAAA queries, possibly omitting
some records if the final few don't fit. That results in Squid
occasionally getting different IPs than the one the client is using.
EDNS0 extends the available DNS response packet size to fit all the
records, so Squid can see them all even when a large set is rotating.
There are a few major hosting providers that have that behaviour in
their DNS. If you have not hit it yet, you are very lucky.
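If your Squid build supports it, enabling EDNS0 is a one-line change in
squid.conf; 4096 bytes is a commonly used value (a minimal sketch, not
a tuned recommendation):
# advertise EDNS0 so DNS servers can send replies up to 4 KB
dns_packet_max 4096 bytes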
2.
> NAT of the dst-IP:port *MUST NOT* happen on any device between the
> client machine and the proxy machine. Squid needs access directly to the
> kernel NAT records of the device doing that NAT operation. So it can
> only happen on the Squid device.
> You must *route* the packets unchanged to the Squid device (possibly
> over a tunnel if necessary).
It happens on the same device (the LEDE/OpenWrt router where Squid is
running), so the router is configured to intercept HTTP (80) and HTTPS
(443) traffic and redirect it to Squid's ports:
80 --> 3129
443 --> 3130
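For reference, the usual LEDE/OpenWrt rules for that look something
like the following (a sketch; "br-lan" is assumed to be your LAN bridge
interface):
# intercept LAN web traffic on the router itself, so Squid can read
# the kernel NAT records holding the original destination
iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 80 -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 443 -j REDIRECT --to-ports 3130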
3.
> Rather than allowing unlimited access to anyone on the Internet to use
> your limited bandwidth outbound connection for access to port 443 you
> should be using the localnet ACL that restricts use of the proxy to
> people on your LAN - those 14 clients you mentioned sharing the line.
>
> [NP: It is not possible in this setup to determine what remote users are
> abusing your proxy. All traffic logs from your firewall etc will show
> Squid as the client, not the remote [ab]user. Squid access.log records
> you are sending to /dev/null is the *only* record of such activities.]
I think I didn't word my earlier email properly; apologies for not
being clear. No one from the internet has access to Squid. The
listening ports are not open to the public, only accessible from the
LAN.
If for any reason those firewall rules change in unexpected ways, or
don't block something you expect to be blocked, this may leave a
security hole open. It does not really seem to be necessary, so it is
best to close it.
By abuse I meant the 14 users; you know, nowadays, with mobiles/tablets
and all the apps and syncing. I only allow ports 443 and 80 (and those
are intercepted and forwarded to Squid). All other ports are blocked.
The bandwidth available is extremely scarce, hence why I'm setting this
up.
The point I was trying to emphasize is that your Squid is accepting
*anything* in those port 443 connections.
4.
> To make your whitelists have any effect replace the above "allow
> ssl_ports" line with a "deny !localnet" line.
When I do this, it doesn't work anymore. I get "Your connection is not
secure" from Firefox, and since Google has HSTS, I can't "ignore and
proceed". The Squid access log shows (note: .google.com is in
whitelist.txt):
HSTS requires a header "Strict-Transport-Security" to be delivered from
servers before it takes effect. You can erase that header from replies
going through Squid with the reply_header_access directive. Current
Squid should be doing it automatically.
There may be issues with HSTS if the header is received before the
device connects to your network, or if it arrives over an uncontrolled
CONNECT tunnel. But there is not much to be done about those cases.
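If you do need to erase it explicitly, it is a one-line rule (a sketch;
note it can only affect replies Squid actually sees decrypted or as
plain HTTP, not spliced tunnels):
# strip the HSTS header from all replies relayed by Squid
reply_header_access Strict-Transport-Security deny all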
1510180110.096 0 192.168.1.178 TCP_DENIED/200 0 CONNECT
108.177.14.103:443 - HIER_NONE/- -
Once I switch back to "allow SSL_ports" I can connect (Squid splices
the connection, no complaints from Firefox).
That means the server that the HTTPS connection is attempting to reach
is not on your whitelist. It is therefore one of the things you wanted
to be blocked according to your stated policy.
As I mentioned earlier, if that causes any issues, your whitelist is
incomplete.
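For reference, the suggested http_access ordering would look something
like this, using the ACL names from the config below (a sketch of the
intent, not a drop-in replacement):
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny !localnet
http_access allow http_whitelist
http_access allow ips_whitelist
http_access deny all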
5.
I followed your comments about the config changes: changed the ACLs to
match the original config's upper case, and swapped the port numbers.
But about having my ssl-bump match on 3129, please check my new one and
see if I did it right.
## begin squid.conf
acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT
acl http_whitelist dstdomain "/etc/squid/whitelist.txt"
acl https_whitelist ssl::server_name "/etc/squid/whitelist.txt"
acl ips_whitelist dst "/etc/squid/ips.txt"
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow SSL_ports
# http_access deny !localnet
http_access allow http_whitelist
http_access allow ips_whitelist
http_access deny all
http_port 3128 ssl-bump \
cert=/etc/squid/myCA.pem \
generate-host-certificates=off dynamic_cert_mem_cache_size=4MB
https_port 3130 intercept ssl-bump \
cert=/etc/squid/myCA.pem \
generate-host-certificates=off dynamic_cert_mem_cache_size=4MB
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
ssl_bump splice https_whitelist
ssl_bump splice ips_whitelist
ssl_bump terminate all
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
store_miss deny all
cache_log /tmp/squid/squid.log
access_log /tmp/squid/access.log
logfile_rotate 0
logfile_daemon /usr/bin/logger
http_port 3129 intercept
coredump_dir /tmp/squid
visible_hostname LEDE.lan
pinger_enable off
mime_table /tmp/squid/mime.conf
sslcrtd_program /usr/lib/squid/ssl_crtd -s /tmp/squid/ssldb -M 4MB
## end config
Many thanks!
Regards,
A. Benz
On 11/08/17 12:23, Amos Jeffries wrote:
On 08/11/17 12:18, A. Benz wrote:
Hi all,
## Intro
I read many blogs and emails on this list related to what I'm trying
to do, but most go into bumping or do things that are not as simple as
what I'm trying to achieve.
I have an extremely slow line, with very high latency, in a remote
location. About 14 people are sharing this line. Nowadays, with all
the mobile apps trying to sync and such, the line stalls to unusable
all the time.
I tried filtering at the firewall or DNS level, but that was not
effective. In the end I figured Squid might be my best option.
## End intro
I have Squid 3.5.27 running under LEDE (an OpenWrt fork), i.e. it is
cross-compiled for a MIPS-based SoC (MediaTek MT7621). I mention this
because you will see some options in the config file that won't make
sense otherwise.
NP: That should not be making much difference to the squid.conf
settings. The worst limitations such devices impose are things that
should be solved by OS settings outside of squid.conf, e.g. the
cache.log going to a pipe for remote logging instead of a filename,
and system-level FD limits.
It works great. Here's what I'm trying to achieve: allow access only
to a pre-defined list of websites (whitelist). HTTP is
straightforward, but if the connection is HTTPS all I need to know is
the domain; if it's allowed, let it pass, otherwise terminate.
This setup is working as intended with the config attached below;
however, the issue I'm facing is that some servers are "load-balanced",
which gives me the forgery error, e.g.:
"SECURITY ALERT: Host header forgery detected on...."
The workarounds and gotchas listed at
<https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery> are the
best you can hope for there. The most successful all-round solution is
to increase EDNS0 capabilities.
Here's a specific example: there's a corporate domain for webmail
access, and some load-balancing config makes use of different IPs; I
think this is what triggers the error. My question is, can I just
ignore this error somehow and allow the connection? From what I
gather, this connection is cut by Squid before it reaches the client.
Squid's default behaviour is to allow the connection only to the same
IP:port the client was connecting to. If that is not working, your
network configuration is screwed up. Specifically your routing or NAT.
NAT of the dst-IP:port *MUST NOT* happen on any device between the
client machine and the proxy machine. Squid needs access directly to
the kernel NAT records of the device doing that NAT operation. So it
can only happen on the Squid device.
You must *route* the packets unchanged to the Squid device (possibly
over a tunnel if necessary).
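Where the client traffic does not already pass through the Squid box,
one common way to route it there unchanged is policy routing on the
gateway (a sketch; 192.168.1.2 is a hypothetical address for the Squid
device and br-lan a hypothetical LAN bridge):
# on the gateway: mark LAN web traffic and route it, un-NATed, to Squid
iptables -t mangle -A PREROUTING -i br-lan -p tcp -m multiport --dports 80,443 -j MARK --set-mark 0x1
ip rule add fwmark 0x1 lookup 100
ip route add default via 192.168.1.2 table 100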
Also, if there's anything else obviously wrong with my setup, please
let me know.
Many thanks.
Here's my config:
### squid.conf begin
acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl ssl_ports port 443
acl safe_ports port 80
acl safe_ports port 443
acl connect method connect
NP: the above default ACL names are case-sensitive, and some of them
involve built-in default values which you are preventing from having
any effect by using custom lower-case ACL names.
acl http_whitelist dstdomain "/etc/squid/whitelist.txt"
acl https_whitelist ssl::server_name "/etc/squid/whitelist.txt"
acl ips_whitelist dst "/etc/squid/ips.txt"
http_port 3128 intercept
http_port 3129
Port 3128 is registered for forward-proxy traffic. Ideally you would
have those lines reversed like so:
http_port 3128
http_port 3129 intercept
... with the corresponding NAT change for the intercept port.
Also, to have your SSL-Bump whitelists applied to forward-proxy
CONNECT traffic you should have ssl-bump settings on that 3128
forward-proxy port matching those on the port 3130 line.
http_access deny !safe_ports
http_access deny connect !ssl_ports
http_access allow ssl_ports
Rather than allowing unlimited access to anyone on the Internet to use
your limited bandwidth outbound connection for access to port 443 you
should be using the localnet ACL that restricts use of the proxy to
people on your LAN - those 14 clients you mentioned sharing the line.
[NP: It is not possible in this setup to determine what remote users
are abusing your proxy. All traffic logs from your firewall etc will
show Squid as the client, not the remote [ab]user. Squid access.log
records you are sending to /dev/null is the *only* record of such
activities.]
To make your whitelists have any effect replace the above "allow
ssl_ports" line with a "deny !localnet" line.
If that change causes issues then your whitelists are incorrect /
incomplete. You then need the (currently discarded) access.log and/or
cache.log data to solve the issue properly.
http_access allow http_whitelist
http_access allow ips_whitelist
http_access deny all
https_port 3130 intercept ssl-bump \
cert=/etc/squid/myCA.pem \
generate-host-certificates=off dynamic_cert_mem_cache_size=4MB
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
ssl_bump splice https_whitelist
ssl_bump splice ips_whitelist
ssl_bump terminate all
That seems fine. The problem is not part of this _config_. If you are
having any SSL-Bump issues please try a build of the latest Squid-4.
It may be related to bugs in Squid-3 SSL-Bump or modern TLS things
Squid-3 cannot cope with - there is a growing list of those.
cache deny all
In the latest Squid-3 use "store_miss deny all" instead of the above.
access_log none
The above is fine if you are certain of the squid.conf working 100%
properly. But since you are debugging issues you may need those
transaction details.
NP: access.log can be logged to syslog or a TCP pipe by Squid, to
deliver the log content externally for normal audit purposes instead
of using space on the device.
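For example, either of these would ship the records off the device
(sketches; the collector address is hypothetical):
# send access records to the local syslog daemon...
access_log syslog:daemon.info squid
# ...or stream them to a remote collector over TCP
access_log tcp://192.168.1.10:5140 squid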
cache_log /dev/null
You *need* the information logged here. By default only the most
operationally critical errors are recorded.
NP: the cache.log can usually be a Unix-pipe delivering data to a
remote server if the local machine is constrained.
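One way that can look in practice (a sketch; assumes netcat is present
on the router and a hypothetical log host at 192.168.1.10):
# create a FIFO and a reader that forwards everything off-device
mkfifo /tmp/squid/cache.fifo
nc 192.168.1.10 5140 < /tmp/squid/cache.fifo &
# then point squid.conf at it: cache_log /tmp/squid/cache.fifo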
cache_store_log stdio:/dev/null
The above line is *actively* harmful. The Squid-3 default is not to
waste cycles logging *unless* you enter something like the above in
squid.conf. The above makes Squid allocate device resources to logging
that data to /dev/null.
logfile_rotate 0
logfile_daemon /dev/null
/dev/null is not a valid application filename.
Build your Squid with --disable-logfile-daemon.
coredump_dir /tmp/squid
visible_hostname main_Firewall
The *visible* hostname is the domain delivered to clients and denied
parties in the URLs used to fetch error message data and FTP icons
from Squid. It needs to be a valid FQDN.
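For example (the name itself is a placeholder for whatever resolves in
your network):
visible_hostname squid.example.lan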
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users