
Re: Questions about Squid configuration


 



On 2024-09-05 04:58, にば wrote:

I took the advice I received, reviewed the verification details, and
ran the verification again using the two recommended steps.
The new verification covers the following four patterns:
1. successful communication of a valid request to an allowed site
[command]
curl https://pypi.org/ -v --cacert squid.crt -k

2. communication of invalid requests to allowed sites is denied
[command] (Note: invalid Host header)
curl https://pypi.org/ -H "Host: www.yahoo.co.jp" -v --cacert squid.crt -k

3. communication of valid requests to prohibited sites is denied
[command]
curl https://www.yahoo.co.jp/ -v --cacert squid.crt -k

4. communication of invalid requests to prohibited sites is denied
[command]
curl https://www.yahoo.co.jp/ -H "Host:" -v --cacert squid.crt -k


Thank you for following my recommendations, documenting your tests, and sharing configurations!

Do you run these "curl" commands on the same box that runs Squid?


STEP 1
>1. Remove all of your current http_access rules. Keep ACLs. Perform
>host_verify_strict and access tests to confirm that all valid requests
>are denied and all invalid requests are rejected. If necessary, ask
>questions, file bug reports, patch Squid, and/or adjust your
>configuration to pass this test.

・Results for validation patterns
1. 403 Forbidden, X-Squid-Error: ERR_ACCESS_DENIED 0
2. 403 Forbidden, X-Squid-Error: ERR_ACCESS_DENIED 0
3. 403 Forbidden, X-Squid-Error: ERR_ACCESS_DENIED 0
4. 403 Forbidden, X-Squid-Error: ERR_ACCESS_DENIED 0

I expected to get a different response for a valid request than for an invalid request. Is this result correct?

AFAICT, the above results are correct.

Your expectations are reasonable, but you are thinking in terms of plain HTTP while your Squid is configured to bump intercepted HTTPS connections. In your "STEP 1" configuration, Squid does not see the HTTP request at all! Squid generates a fake CONNECT request using TCP/IP-level information from the intercepted TCP connection (and denies that generated request because there are no http_access rules to allow it).
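
For illustration (an example, not taken from your logs): a TCP connection intercepted on its way to 151.101.128.223:443 would be represented inside Squid roughly as a generated

    CONNECT 151.101.128.223:443 HTTP/1.1

request, and it is that generated request that your STEP 1 rules denied.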

When I was formulating my expectations, I was thinking in terms of HTTP requests as well. That is why I expected invalid requests to be rejected (by host_verify_strict) rather than denied (by http_access). Interception with SslBump makes everything more complex by adding another layer of fake CONNECT requests...

Let's consider this step 1 validation successfully completed.


STEP 2
>2. Copy http_access rules, with comments, from generated
>squid.conf.default. Insert your own access rules in the location marked
>by "INSERT YOUR OWN RULE(S) HERE" comment. Perform host_verify_strict
>and access tests to confirm that all valid requests to banned sites are
>denied, all other valid requests are allowed, and all invalid requests
>are rejected. If necessary, ask questions, file bug reports, patch
>Squid, and/or adjust your configuration to pass this test.

・Results for validation patterns
1. 200 OK
2. 409 Conflict, X-Squid-Error: ERR_CONFLICT_HOST 0
3. 409 Conflict, X-Squid-Error: ERR_CONFLICT_HOST 0
4. 200 OK


・Test 4 still returns 200 OK. Is this due to an error in the existing
configuration, or do I need to add a new setting?

I believe Test 4 does not result in ERR_CONFLICT_HOST because Squid does not consider empty Host headers invalid from a host-header-validation point of view: As we discussed earlier, "valueless or missing Host header disables all checks".

If you do consider requests with valueless or missing Host header invalid, then you need to add a custom "http_access deny" rule that would ban them. Look for "req_header" discussion in my earlier answer for (untested) hints about detecting requests with a valueless Host header.
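
For example, here is a minimal untested sketch (the "emptyHost" ACL name is arbitrary; place the deny rule before your allow rules):

    # Untested: treat requests whose Host header value is empty or
    # whitespace-only as invalid. req_header matches a regular
    # expression against the named request header.
    acl emptyHost req_header Host ^[[:space:]]*$
    http_access deny emptyHost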

However, you may want to double check whether rejecting requests with an empty Host header is actually necessary in your environment. Perhaps they can be considered valid (which is what Squid does by default)?


My primary concern here is that the test 4 request was not blocked by an "http_access deny" rule. I suspect that happens because the following allow rule matched:

    acl https_port port 443
    http_access allow https_port

I recommend deleting the above http_access rule. AFAICT, you only want to allow valid requests targeting specific/allowed sites. You already have other rules for that. The above "all HTTPS" rule is too broad and is seemingly unnecessary.

I also recommend deleting a similar rule that allows all port-80 requests, for similar reasons:

    acl http_port port 80
    http_access allow http_port


If you think you do need those two broad rules, please clarify what you think you need them for. In other words, what tests would break if you remove them?
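
For reference, with both broad rules removed, the site-specific rules from your originally shared configuration would reduce to roughly:

    http_access allow localnet whitelist
    http_access deny localnet whitelist_https !https_port
    http_access deny localnet whitelist_transparent_https !https_port

with a final "http_access deny all" (or the template equivalent from STEP 2) denying everything else.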


HTH,

Alex.




On Fri, Aug 30, 2024 at 22:27, Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:

On 2024-08-29 22:28, にば wrote:

With the newly reviewed configuration in the attachment

OT: Please note that your configuration does not follow the recommended
http_access rules order template in squid.conf.default and might,
depending on your deployment environment, allow Squid to be used for
attacks on 3rd party resources (e.g., ssh services). This observation is
not related to your primary question and your "ban certain sites" goal.
Following suggestions at the end of this email should fix this problem.


I found the following statement in the official documentation:
https://www.squid-cache.org/Doc/config/host_verify_strict/

> * The host names (domain or IP) must be identical,
> but valueless or missing Host header disables all checks.

So I ran an additional validation with an empty Host value, and the
request succeeded for a domain that was not in the whitelist.
The curl command for verification is below, and as before, only
.pypi.org is allowed in the whitelist.

date;curl https://www.yahoo.co.jp/ -H "Host:" -v --cacert squid.crt -k

Is it possible for Squid to prevent such requests as well?

Yes, a req_header ACL should be able to detect such requests (i.e.
requests without a Host header or with an empty Host header value).
However, I suspect that "missing Host" is _not_ the problem you should
be solving (as detailed below).


I was able to confirm that when any one of the SNI, IP, or Host in the
request is incorrect (i.e., not allowed by the whitelist),
Squid correctly detects the mismatch and returns a 409 Conflict.

IMHO, you should target/validate a different set of goals/conditions:

* A valid request targeting a banned site should be denied (HTTP 403
response, %Ss=TCP_DENIED, %err_code=ERR_ACCESS_DENIED). This denial
should be triggered by an "http_access deny" rule, preferably an
explicit one. This denial will _not_ happen (and the request will
instead be forwarded to the banned site it targets) if you replace all
your http_access rules with a single "http_access allow all" line. This
denial does not depend on host_verify_strict and underlying code.

* An invalid request should be rejected (HTTP 4xx response). This
includes, but is not limited to, host_verify_strict-driven rejections.
This rejection should happen even if you replace all your http_access
rules with a single "http_access allow all" line.

AFAICT, your current configuration does not reach "deny valid requests
targeting banned sites" goal while your question implies that you are
incorrectly relying on host_verify_strict to perform that denial.
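
To tell these outcomes apart while testing, it may help to log the %Ss and %err_code values mentioned above. A minimal untested sketch (the format name and log path are arbitrary):

    # Log request status (e.g., TCP_DENIED), HTTP status code, and
    # error page ID (e.g., ERR_ACCESS_DENIED) for each transaction:
    logformat verify %ts.%03tu %>a %Ss/%03>Hs %err_code %rm %ru
    access_log /var/log/squid/verify.log verify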


I recommend the following:

1. Remove all of your current http_access rules. Keep ACLs. Perform
host_verify_strict and access tests to confirm that all valid requests
are denied and all invalid requests are rejected. If necessary, ask
questions, file bug reports, patch Squid, and/or adjust your
configuration to pass this test.

2. Copy http_access rules, with comments, from generated
squid.conf.default. Insert your own access rules in the location marked
by "INSERT YOUR OWN RULE(S) HERE" comment. Perform host_verify_strict
and access tests to confirm that all valid requests to banned sites are
denied, all other valid requests are allowed, and all invalid requests
are rejected. If necessary, ask questions, file bug reports, patch
Squid, and/or adjust your configuration to pass this test.
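
For reference, the http_access portion of a generated squid.conf.default looks roughly like this (trimmed; details vary across Squid versions):

    # Recommended minimum Access Permission configuration:
    #
    # Deny requests to certain unsafe ports
    http_access deny !Safe_ports
    # Deny CONNECT to other than secure SSL ports
    http_access deny CONNECT !SSL_ports
    # Only allow cachemgr access from localhost
    http_access allow localhost manager
    http_access deny manager
    #
    # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
    #
    http_access allow localnet
    http_access allow localhost
    # And finally deny all other access to this proxy
    http_access deny all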


HTH,

Alex.


On Thu, Aug 8, 2024 at 21:33, Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:

On 2024-08-06 20:59, にば wrote:

When using Squid transparently, is it possible to both enforce a
whitelist of destination domains and inspect the Host field in the
request header?

Short answer: Yes.


According to the verification results, the Host field can be inspected
with "host_verify_strict on" in squid-transparent.conf, but it seems
that the whitelist is not enforced.

AFAICT, the configuration you have shared allows all banned[1] traffic
to/through https_port. For the problematic test case #5:

All these http_access rules do _not_ match:

http_access allow localnet whitelist
http_access deny localnet whitelist_https !https_port
http_access deny localnet whitelist_transparent_https !https_port


And then this next rule matches and allows traffic through:

http_access allow https_port


This last http_access rule is not reached:

http_access deny all


N.B. The above analysis assumes that your https_port ACL is explicitly
defined in your squid.conf to match all traffic received at https_port.
If you do not have such an ACL defined, then you need to fix that
problem as well. I recommend naming ACLs differently from directive
names (e.g., "toHttpsPort" rather than "https_port").
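
For example, an untested sketch using the localport ACL type and the intercepting port number from your configuration (3130):

    # Match traffic received on the intercepting https_port (3130),
    # using a name that does not collide with the https_port directive:
    acl toHttpsPort localport 3130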


Please note that Squid v4 is not supported by the Squid Project and is
very buggy. I recommend using Squid v6 or later.


HTH,

Alex.
[1] Here, "banned" means "_not_ matching whitelist ACL".


■Configuration Details
〇squid-transparent.conf (excerpts)
#Whitelist
acl whitelist dstdomain "/etc/squid/whitelist"
acl whitelist dstdomain "/etc/squid/whitelist_transparent"
acl whitelist_https dstdomain "/etc/squid/whitelist_https"
acl whitelist_transparent_https dstdomain "/etc/squid/whitelist_transparent_https"

proxy_protocol_access allow localnet
proxy_protocol_access deny all
http_access allow localnet whitelist
http_access deny localnet whitelist_https !https_port
http_access deny localnet whitelist_transparent_https !https_port

# Handling HTTP requests
http_port 3129 intercept
# Handling HTTPS requests
https_port 3130 intercept tcpkeepalive=60,30,3 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=20MB tls-cert=/etc/squid/ssl/squid.crt tls-key=/etc/squid/ssl/squid.key cipher=HIGH:MEDIUM:!LOW:!RC4:!SEED:!IDEA:!3DES:!MD5:!EXP:!PSK:!DSS options=NO_TLSv1,NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE tls-dh=prime256v1:/etc/squid/ssl/bump_dhparam.pem
# Start up for squid process
http_port 3131
http_access allow https_port
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_transparent"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_https"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_transparent_https"

http_access deny all

# strict setting
host_verify_strict on

# SSL_BUMP
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 20MB
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump bump all


■Verification of Settings
I ran the following curl commands from each of the client environments that use Squid.
1. if SNI, destination IP, and Host header are correct, the user should
be able to connect to pypi.org
Command:
date;curl https://pypi.org/ -v --cacert squid_2.crt -k
Result: OK

2. rejection of communication to pypi.org if SNI is correct but
destination IP and Host header are incorrect
Command:
date;curl https://pypi.org/ --resolve pypi.org:443:182.22.24.252 -H "Host: www.yahoo.co.jp" -v --cacert squid_2.crt -k
Result: OK (409 Conflict is returned)

3. rejection of communication to pypi.org if SNI and destination IP
are correct but the Host header is incorrect
Command:
date;curl https://pypi.org/ -H "Host: www.yahoo.co.jp" -v --cacert squid_2.crt -k
Result: OK (409 Conflict is returned)

4. rejection of communication to pypi.org if SNI and Host header are
correct but the destination IP is incorrect
Command:
date;curl https://pypi.org/ --resolve pypi.org:443:182.22.24.252 -v --cacert squid_2.crt -k
Result: OK (409 Conflict is returned)

5. if SNI, destination IP, and Host header are all invalid (yahoo.co.jp
is not registered in the whitelist), communication should be rejected
Command:
date;curl https://yahoo.co.jp/ -v --cacert squid_2.crt -k
Result: NG (301 Moved Permanently is returned, but it appears that the
communication is reaching yahoo.co.jp)




_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
https://lists.squid-cache.org/listinfo/squid-users





