The above rules allow abuse of sites matching allowed_sites (by
proxying CONNECT traffic to any port on those sites).
OK, maybe I'm lost. All the material I've read about writing ACLs to
allow access in Squid, including the Squid website, follows the basic
structure:
acl rule_name dstdomain "/path/to/file"
http_access allow rule_name
which is exactly what I posted. Is there something wrong with that?
Also, I forgot to state my objective in the first post: allow access to
a few selected websites and deny everything else. Simple as that.
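For reference, a minimal sketch of that whitelist pattern for a plain
(explicit, non-bumped) proxy, assuming the same /etc/squid/allowed-sites.txt
file with one domain per line (e.g. .example.com):

acl allowed_sites dstdomain "/etc/squid/allowed-sites.txt"
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
# only the listed domains, and only from the internal networks
http_access allow localnet allowed_sites
# everything else is refused
http_access deny all

With interception and SslBump involved the same rules are evaluated
differently, as discussed below.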
They also allow any traffic to SSL_ports of any site. In summary, they
are not much better than allowing all traffic, creating an open proxy
ripe for abuse.
I did mention I was unsure about allowing free access to SSL_ports,
which in my case is only 443 itself.
My reasoning (and I'm probably wrong) was that only the allowed sites
would be permitted, and that "ssl_bump terminate all" after splicing at
step2 would block any other CONNECT attempts.
In local tests it *apparently* worked as expected, but I really am
uncertain whether that is simply wrong for what I'm trying to achieve.
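One detail that may matter here: "ssl_bump terminate all" only acts on
connections that actually go through SslBump, i.e. the intercepted 3129
port. The blanket "http_access allow SSL_ports" also applies to the
explicit 8080 port, where no SslBump step ever runs, so any client could
still CONNECT to any host on port 443 there. If the rule is only needed
so the intercepted handshake can reach step2, a tighter sketch (see also
the fuller sketch further down) might be:

# instead of: http_access allow SSL_ports
# allow only the fake CONNECTs generated at SslBump step1, and only from the LAN
http_access allow localnet CONNECT SSL_ports step1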
Assuming 'http_access allow SSL_ports' is wrong, removing it takes me
back to the original problem:
The following error was encountered while trying to retrieve the
URL: https://<ip_address>/*
Access Denied.
even though the 'dstdomain' hosted on that IP is allowed by the ACL.
Based on the provided documentation, I understand that it is at step2
that the Client Hello exposes more information about the server name. I
also tried peeking and then splicing at step3, but that didn't help
either.
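A sketch of one way to write step-aware rules, assuming http_access is
re-evaluated at each SslBump step (as Alex explains below) and keeping
the existing peek/splice/terminate ladder and ACLs unchanged:

# step1: only the intercepted IP:443 is known, so let the handshake reach step2
http_access allow localnet CONNECT SSL_ports step1
# step2 onward: the SNI is known, so only the listed server names get through
http_access allow localnet CONNECT SSL_ports spliced_sites
# plain-HTTP requests are still matched by destination domain
http_access allow localnet allowed_sites
http_access deny all

This is only a sketch; the placement relative to the default deny rules
still matters, and the names in allowed-sites.txt must be in a form that
matches the SNI values.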
I'll put the full config here so anyone willing to help can better
understand it.
Thanks!
### SQUID.CONF
#
# Recommended minimum configuration:
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
debug_options ALL,0
access_log /var/log/squid/access.log
acl allowed_sites dstdomain "/etc/squid/allowed-sites.txt"
acl spliced_sites ssl::server_name "/etc/squid/allowed-sites.txt"
http_access allow allowed_sites
# This probably should be removed, but it's making it work for this test.
http_access allow SSL_ports
acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump peek step1 all
ssl_bump splice step2 spliced_sites
ssl_bump terminate all
tls_outgoing_options capath=/etc/pki/tls/certs options=ALL
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 8MB
sslcrtd_children 3
#
#=============
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 192.168.10.10:8080
http_port 192.168.10.10:3128 intercept
https_port 192.168.10.10:3129 tls-cert=/etc/squid/ssl/squidCA.pem tls-key=/etc/squid/ssl/squidCA.key ssl-bump intercept generate-host-certificates=on dynamic_cert_mem_cache_size=8MB
# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
On 28/06/2022 17:05, Alex Rousskov wrote:
On 6/28/22 14:32, Bruno de Paula Larini wrote:
http_access allow allowed_sites
http_access allow SSL_ports
The above rules allow abuse of sites matching allowed_sites (by
proxying CONNECT traffic to any port on those sites). They also allow
any traffic to SSL_ports of any site. In summary, they are not much
better than allowing all traffic, creating an open proxy ripe for abuse.
Most likely, Squid's interpretation of http_access rules significantly
differs from yours -- you probably thought the above rules achieve
some other (desirable) effect. You may need to start from the
squid.conf.default rules and study how http_access rules work in
Squid. Once your interpretation matches Squid's, you can advance to
dealing with SslBump complexities; the above problems are not even
related to SslBump.
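In squid.conf.default the intended ordering looks roughly like this,
with local rules going into the marked slot before the final deny (a
sketch of the shipped skeleton, not a complete config):

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
http_access allow localnet
http_access allow localhost
http_access deny all

A whitelist setup would typically replace the broad "allow localnet"
with rules combining localnet and the allowed-sites ACLs.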
You may find the following page useful, but I realize that it has a
lot of information irrelevant to your specific use case:
https://wiki.squid-cache.org/SquidFaq/SquidAcl
HTH,
Alex.
On 6/28/22 14:32, Bruno de Paula Larini wrote:
I was already following the provided link as a reference.
It seems that splicing at step2 was correct, but there were other
things that I had missed.
acl allowed_sites dstdomain "/etc/squid/allowed-sites.txt"
# Creates acl containing domain names for splice.
acl spliced_sites ssl::server_name "/etc/squid/allowed-sites.txt"
http_access allow allowed_sites
# This eliminates the browser error that shows the website's IP address.
# >> I don't know if there are caveats for allowing free access to SSL_ports. <<
http_access allow SSL_ports
acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump peek step1
ssl_bump splice step2 spliced_sites
# Same effect as 'deny all' for HTTPS websites.
ssl_bump terminate all
...
*Apparently* that does it.
If I stated anything wrong, please correct me.
Cheers.
On 28/06/2022 10:52, Alex Rousskov wrote:
On 6/28/22 08:08, Bruno de Paula Larini wrote:
I have a pretty simple configuration for website filtering
(intercepted) and ssl_bump, which follows below.
However, for some reason, it seems Squid resolves the website
domain address, then uses the IP to compare with the ACLs.
Most likely, what is actually happening is that Squid does not have
domain information during SslBump step1, and then gets that
information during step2. Squid http_access rules apply to each
SslBump step, so you have to write them accordingly.
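Concretely, for the intercepted example in this thread, what http_access
sees at each step is roughly this (a sketch; allowed_names is just an
illustrative ACL name):

acl step1 at_step SslBump1
acl allowed_names ssl::server_name "/etc/squid/allowed-sites.txt"
# step1: the fake request is effectively "CONNECT 199.232.192.215:443";
#        there is no host name yet, so a dstdomain ACL will not normally
#        match, but the step1 ACL does.
# step2: the peeked ClientHello provides the SNI "repo.maven.apache.org",
#        so the ssl::server_name ACL can match it.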
The information available to Squid and the expected Squid behavior are
documented for each step on the following wiki page. There are bugs
in that algorithm's _implementation_, but they are being fixed, and I
am not aware of better docs:
https://wiki.squid-cache.org/Features/SslPeekAndSplice#Processing_steps
HTH,
Alex.
As the IP is not included in the ACL, access to the website is
denied.
Before that, it had already checked the domain name; I can tell
based on the error from the browser.
I'm using Squid version 5.5.
For example, while trying to open https://repo.maven.apache.org/
(included in the allowed sites), the browser shows the error:
The following error was encountered while trying to retrieve
the URL: https://199.232.192.215/*
Access Denied.
If I replace 'deny all' with 'allow all', the website will open as
expected.
Is there something wrong with my config? I have something similar
running and working on version 4.4 (unless I'm missing something).
I'm still only splicing for now.
Thanks for the help!
### SQUID.CONF
...
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
acl allowed_sites dstdomain "/etc/squid/allowed-sites.txt"
http_access allow allowed_sites
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all
tls_outgoing_options capath=/etc/pki/tls/certs options=ALL
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 8MB
sslcrtd_children 3
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 192.168.10.10:8080
http_port 192.168.10.10:3128 intercept
https_port 192.168.10.10:3129 tls-cert=/etc/squid/ssl/squidCA.pem tls-key=/etc/squid/ssl/squidCA.key ssl-bump intercept generate-host-certificates=on dynamic_cert_mem_cache_size=8MB
...
### IPTABLES
...
iptables -t nat -A PREROUTING -i eth0 -s 192.168.10.0/24 -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i eth0 -s 192.168.10.0/24 -p tcp --dport 443 -j REDIRECT --to-port 3129
...