Re: acl for redirect

Rafael, we're trying to keep the setups lean and primarily just deal with Google and YouTube, not all websites. ICAP processes add a whole new layer of complexity and usually cover all websites, not just the few.

On 6/30/2015 16:17 PM, Rafael Akchurin wrote:
Hello Mike,

May be it is time to take a look at ICAP/eCAP protocol implementations which target specifically this problem - filtering within the *contents* of the stream on Squid?

Best regards,
Rafael

Marcus,

This is multiple servers used for thousands of customers across North America, not an office, so changing from a proxy to a DNS server is not an option, since we would also be required to change all several thousand of our customers' DNS settings.

On 6/30/2015 17:30 PM, Marcus Kool wrote:
I suggest to read this:
https://support.google.com/websearch/answer/186669

and look at option 3 of section 'Keep SafeSearch turned on for your network'

Marcus
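
For readers following that link, option 3 forces SafeSearch at the DNS level by answering Google lookups with Google's SafeSearch address. A minimal BIND-style sketch, assuming you control the clients' resolver (forcesafesearch.google.com is the hostname Google documents for this; the zone layout here is illustrative only):

   ; force SafeSearch by aliasing Google hostnames to the SafeSearch VIP
   www.google.com.   IN CNAME forcesafesearch.google.com.
   google.com.       IN CNAME forcesafesearch.google.com.

As noted elsewhere in this thread, this only helps where you control the clients' DNS settings.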

Such a pain; there is no reason our everyday searches should be encrypted.


Mike

-----Original Message-----
From: squid-users [mailto:squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of Mike
Sent: Tuesday, June 30, 2015 10:49 PM
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re:  acl for redirect

Scratch that (my previous email to this list): Google disabled their insecure sites when used as part of a redirect. We as individual users can use that URL directly in the browser (
http://www.google.com/webhp?nord=1 ), but any Google page load starts with a secure page, causing that redirect to fail... The newer 3.1.2 e2guardian SSL MITM requires options (like a DER certificate file) that cannot be used with the thousands of existing users on our system, so squid may be our only option.

Another issue right now is that Google is using a "VPN-style" internal redirect on their server, so e2guardian (shown in its log) sees the
https://www.google.com:443  CONNECT, passes along TCP_TUNNEL/200
www.google.com:443 to squid (shown in the squid log), and after that the traffic is directly between Google and the browser, not allowing e2guardian or squid to see further URLs from Google (such as search terms) for the rest of that specific session. The user can click news, maps, images, videos, and NONE of these are passed along to the proxy.

So my original question still stands: how do I set an acl for Google URLs that references a file of blocked terms/words/phrases, and denies the request if those terms are found (like a blacklist)?
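
A sketch of what such a rule could look like in squid.conf, assuming a one-regex-per-line file at the illustrative path /etc/squid/blockedterms.txt (url_regex matches against the whole request URL, so it can only see search terms on plain-HTTP or ssl-bumped requests):

   # hypothetical blacklist file, one regex pattern per line
   acl google dstdomain .google.com
   acl badterms url_regex -i "/etc/squid/blockedterms.txt"
   http_access deny google badterms

Note this cannot match terms inside a CONNECT tunnel that is not bumped, which is exactly the limitation described above.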

Another option I thought of: since the meta content in the page source, including the title, is passed along, is there a way to have squid scan the header or title content as part of the acl "content scan" process?


Thanks
Mike


On 6/26/2015 13:29 PM, Mike wrote:
Nevermind... I found another fix within e2guardian:

etc/e2guardian/lists/urlregexplist

Added this entry:
# Disable Google SSL Search
# allows e2g to filter searches properly
"^https://www.google.[a-z]{2,6}(.*)"->"http://www.google.com/webhp?nord=1";


This means whenever google.com or www.google.com is typed in the
address bar, it loads the insecure page and allows e2guardian to
properly filter whatever search terms they type in. This does break
other aspects such as google toolbars, using the search bar at upper
right of many browsers with google as the set search engine, and other
ways, but that is an issue we can live with.

On 26/06/2015 2:36 a.m., Mike wrote:
Amos, thanks for info.

The primary settings being used in squid.conf:

http_port 8080
# this port is what will be used for SSL Proxy on client browser
http_port 8081 intercept

https_port 8082 intercept ssl-bump connection-auth=off generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost


Then e2guardian uses 10101 for the browsers, and uses 8080 for
connecting to squid on the same server.
Doesn't matter. Due to TLS security requirements Squid ensures the TLS
connection is re-encrypted on the outgoing side.


I am doubtful the nord trick works anymore, since Google's own documentation
for schools states that one must install a MITM proxy that does the
traffic filtering - e2guardian is not one of those. IMO you should
convert your e2guardian config into Squid ACL rules that can be
applied to the bumped traffic without forcing http://

But if nord does work, so should the deny_info in Squid. Something
like this probably:

   acl google dstdomain .google.com
   deny_info 301:http://%H%R?nord=1  google

   acl GwithQuery urlpath_regex \?
   deny_info 301:http://%H%R&nord=1  GwithQuery

   http_access deny google GwithQuery
   http_access deny google


Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



