Re: Regarding url_regex acl

On 5/07/2013 11:24 a.m., Alan wrote:
This looks wrong:
http_access deny !allowdomain

It is a strong whitelist. It means deny everything that is not explicitly whitelisted by that ACL test.

Try:
http_access deny allowdomain

That means deny the whitelisted sites, which is the opposite of what is wanted. Wrong.

On Fri, Jul 5, 2013 at 5:16 AM, kannan rbk wrote:
Dear Team,

I am using Squid proxy 3.1 on a CentOS machine. I want to restrict
client requests from a particular domain and web context.

Okay...


#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 ::1

acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$

".*$" is equivalent to ".*" which is equivalent to "" (not having an ending on the pattern at all). All it does is force a lot of extra work on the regex pattern matcher and CPU.

"(s)" in rejex is equivalent to "s". It looks like it may have been intended to be "[s]" or maybe "(s|)" but got broken.

Meaning the pattern above is actually doing:
  acl urlwhitelist url_regex -i ^https://([a-zA-Z]+).zmedia.com/blog/
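
Something closer to what was presumably intended would be (a sketch; the "(s|)" alternation and the escaped dots are guesses at the intent, and as explained below even this will not match the HTTPS traffic):

  acl urlwhitelist url_regex -i ^http(s|)://[a-zA-Z]+\.zmedia\.com/blog/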

Now ... most clients will not send https:// URLs to an HTTP proxy directly (some do, but not many). When you are passing HTTPS traffic through an HTTP proxy the common way for clients to ensure security is to use the CONNECT method request to setup a tunnel to the SSL host. In those requests the URL consists of *only* a host name or IP and a port number (HTTPS being port 443) - there is no scheme: or /path/ visible to the HTTP proxy.
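
For example, the request for https://shop.zmedia.com/blog/aboutusers.html arrives at the proxy as nothing more than:

  CONNECT shop.zmedia.com:443 HTTP/1.1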

In short, the above ACL is an overly complex way to test for something which will never be possible to match.

acl allowdomain dstdomain .zmedia.com

acl Safe_ports port 80 8080 8500 7272

What is the above line for? Port 80 is listed below for HTTP traffic, and the others are contained in the 1024-65535 unregistered ports range.

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl SSL_ports port 443
acl Safe_ports port 80      # http
acl Safe_ports port 21      # ftp
acl Safe_ports port 443     # https
acl Safe_ports port 70      # gopher
acl Safe_ports port 210     # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280     # http-mgmt
acl Safe_ports port 488     # gss-http
acl Safe_ports port 591     # filemaker
acl Safe_ports port 777     # multiling http
acl SSL_ports  port 7272        # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#

The rules below are supposed to be the default security rules protecting your proxy from some very nasty abuses. Please touch them only with great care and when you understand what *all* parts of the block do.
 ** when operating a reverse-proxy, the Internet-to-servers traffic rules go above this section.
 ** when operating an ISP forwarding proxy, the LAN-to-Internet traffic rules generally go below this section.

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

The above two rules control access to your proxy management functionality. This may be useful to you (or maybe not). If you do not need access to the cache mgr reports you can remove both of the above lines.

... you have then placed local rules amongst the defaults. Right here should be the rules rejecting "!Safe_ports" and "CONNECT !SSL_ports", to quickly protect against unsafe port usage on any request and against CONNECT requests to non-HTTPS traffic ports.
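
That is, these two default rules belong here, ahead of any local policy:

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports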


http_access deny !allowdomain

*That* above is the reason your url_regex ACL has the illusion of working. It will reject CONNECT requests (tunnels of HTTPS data) for domains outside the one you allow.

http_access allow urlwhitelist

Squid will test that URL regex. It will not match anything, due firstly to the broken "https://" part of the pattern and secondly to the traffic you want it to match being hidden inside the CONNECT tunnel encryption layer. So Squid will continue on down the ACLs ...

http_access allow CONNECT SSL_ports

This is the other reason your rules have the illusion of working: *any* HTTPS request is accepted.

The above is *BAD* practice. By itself it allows unrestricted tunnels from any source to any destination on port 443. When following the "deny !allowdomain" rule above it is a lot safer, but still allows unlimited access to the whole of the .zmedia.com HTTPS domain and sub-domains.

CONNECT is a terribly nasty feature of HTTP as the payload on the request and response is just an unending stream of binary data in both directions. There is very limited control possible. The default security protection (which you shuffle all the way down below) is the correct way to permit CONNECT requests (any CONNECT not denied is potentially allowed).
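
In other words, the "allow CONNECT SSL_ports" line can simply be dropped; the default "deny CONNECT !SSL_ports" rule plus your "deny !allowdomain" whitelist already limit CONNECT tunnels to port 443 on the domain you intend (a sketch, not a drop-in config):

  http_access deny CONNECT !SSL_ports
  http_access deny !allowdomain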


http_access deny CONNECT !SSL_ports
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

Uhm.  The above line is in there twice now.



# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

... NOTE: please read the above one-line message. Your local policy rules should be *here*, not embedded above half of the default security rules.
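
Putting that together, the corrected ordering would look roughly like this (a sketch assembled only from the rules already shown above):

  # default protections, in their default order
  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  #http_access deny to_localhost

  # local policy goes here
  http_access deny !allowdomain
  http_access allow localhost
  http_access deny all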



# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

Recommendation has been re-evaluated since 3.1 was released.
You can safely drop that hierarchy_stoplist line from your config file.

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
append_domain .zmedia.com

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:       1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .       0   20% 4320

What am I trying to do here?

Restrict requests only from (.zmedia.com), and it's working fine.

No. You are restricting only ... the destination (.zmedia.com).
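
dstdomain tests the server being requested. Restricting by *client* would need a src ACL instead, for example (the 192.168.0.0/24 range here is only illustrative):

  acl allowdomain dstdomain .zmedia.com   # matches the destination domain
  acl clients src 192.168.0.0/24          # matches the source (client) addresses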

But I am able to access requests from any context (url_regex not working).

URLs that should be allowed:

https://accounts.zmedia.com/blog/aboutusers.html
https://contacts.zmedia.com/blog/aboutusers.html
https://shop.zmedia.com/blog/aboutusers.html
...

URLs that should not be allowed:

https://shop.zmedia.com/admin/aboutusers.html

But now I am able to access https://shop.zmedia.com/admin/aboutusers.html.

I am getting the strong suspicion that you cut-n-pasted config from somewhere else without understanding it. Most of these rules, in both style and positioning, only make sense in a reverse-proxy configuration; your policy description even sounds a lot like a reverse-proxy policy. But you have configured a forward-proxy, where enforcing that particular type of policy requires a large amount of dangerous and complex configuration.

If you *are* setting up a reverse-proxy, that policy could be configured easily. But you need to do the setup properly first.

Please outline why you are enacting this policy, and what relation the proxy will have to the zmedia.com web servers. We can point you in the right direction for a fix only with that knowledge.

Amos



