
[squid-users] Banning all other destinations

Interleaving the acls and http_access lines should work just fine.  I'd 
change the dstdom_regex to dstdomain, because as it stands now, anything 
with ".gov" anywhere in the domain will be allowed through (in a regex the 
dot matches any character, so e.g. thegovenator.com slips past).  The same 
goes for the .edu acl.  Neither of which explains how you managed to surf 
to www.elephants.com.  The "http_access allow www" line would let you, but 
it should never be reached, since all traffic should already be blocked by 
the "deny ip" line.  Replacing the three deny lines (and their associated 
acls) with a single "http_access deny all" would also make things clearer. 
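
For illustration only (the acl name "okdst" is mine, and I'm assuming a 
localnet src acl is already defined), the dstdomain version might look like:

```
# squid.conf sketch -- dstdomain matches literal domain suffixes,
# so the leading dot means "this domain and its subdomains" and
# thegovenator.com cannot slip through.
acl okdst dstdomain .gov .edu
http_access allow localnet okdst
http_access deny all
```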
 
Two choices here.  Post your whole squid.conf file (preferably minus 
comments and blank lines), or use Squid's native debugging capabilities. 
Adding "debug_options ALL,1 33,2" will give a pretty good step-by-step of 
how Squid acts on the ACLs for each web request (output goes to cache.log). 
Be aware, the output is quite verbose, so it's not something you want to 
leave running on a production server.  At least not for long. 
 
One last question...  Are you telling Squid to reconfigure (or restart) 
after each change to the config file?  It may be obvious, but it never hurts 
to ask. 
 
Chris 
------------------------------------------------------------ 
 
Looks like it was my syntax. 
 
I always stop Squid before changing the .conf. 
 
I enabled debug, deleted all the recommended rules that I don't understand, 
and added only the rules that interest me.  (I can add other rules one at a 
time after I get it working.) 
 
The dstdomain .gov denied .gov.au, so I reverted to dstdom_regex although, 
like the California Democrats, I don't want the Governator. 
 
Interleaving works, and ANDing the ACLs in the rules makes the intent even clearer. 
 
The ACLs are checked before content is served from the cache. 
 
Squid goes out to the internet before getting cached pages, after a period 
of idleness.  I don't have a good handle on this. 
 
The last rule does what it says, not the inverse. 
 
Changing the rules had some side effects: 
1) The 30 sec delay on shutdown started working and, after some more rule 
changes, stopped working.  It does not matter. 
2) I now have access denied error messages, in Hebrew.  Perhaps it is 
better that users who try naughty things are baffled, rather than taunted 
by a comprehensible message. 
 
Here are my rules: 
 
#  TAG: acl 
 
#  TAG: http_access 
acl all src 0.0.0.0/0.0.0.0 
acl localnet src 192.168.100.0/24 
acl OKdomains dstdom_regex -i .gov. .edu. .google.com.au 
http_access allow localnet OKdomains 
acl every dst 0.0.0.0/0.0.0.0 
http_access deny every 
 
#  TAG: http_reply_access 
http_reply_access allow localnet 
http_access deny all 
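
If it helps, here is a commented sketch of what I take the intent to be, 
with the regex dots escaped and the patterns anchored so that .gov, 
.gov.au and .edu hosts still pass but lookalikes such as thegovenator.com 
do not (my reading only, not tested against real traffic):

```
acl all src 0.0.0.0/0.0.0.0
acl localnet src 192.168.100.0/24
# \. matches a literal dot; $ anchors the pattern at the end of the hostname
acl OKdomains dstdom_regex -i \.gov(\.au)?$ \.edu$ \.google\.com\.au$
http_access allow localnet OKdomains
http_access deny all
# guessing the final deny was meant to be reply-side, i.e.
# "http_reply_access deny all" rather than "http_access deny all"
http_reply_access allow localnet
http_reply_access deny all
```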
 
I am inestimably grateful for your patience, which has saved my life, 
well, at least my sanity. 
 
Thanks. 
 



