Re: Connection error

On 18/01/11 16:36, Senthilkumar wrote:
Hi,

I have increased the ntlm scheme children, and even so I am still getting the
error message in cache.log:
"All ntlmauthenticator processes are busy"; the "wbinfo.pl is busy" messages
have stopped now.

I have attached my squid.conf; please check it and share your views if
anything is configured wrongly.


#Authentication
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 100
auth_param basic credentialsttl 8 hours

Notice how the credentialsttl directive there applies to the Basic auth protocol.

NTLM credentials are held in the TCP connection details themselves, and stay in place until the TCP link they apply to is closed. There is no relevant TTL.
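
A minimal sketch of what the NTLM part of that block then needs, with the stray TTL line dropped (the Basic section further down keeps its own credentialsttl):

  # sketch: NTLM needs no TTL directive; credentialsttl stays with Basic only
  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 100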


authenticate_ttl 4 hours

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm PrimalHealth care services
auth_param basic credentialsttl 8 hours

#group Authentication
external_acl_type groupauth children=50 %LOGIN /usr/local/squid31/libexec/wbinfo_group.pl

#Acl for checking group
acl senior1 external groupauth senior
acl dept1 external groupauth dept
acl human1 external groupauth human
acl srgp1 external groupauth group
acl gl1 external groupauth leader
acl nm1 external groupauth normal
acl mancom1 external groupauth man

#Acl to allow and block websites
####
acl senior2 url_regex -i "/usr/local/squid31/policy/allow.txt"

acl senior3 dstdomain -i "/usr/local/squid31/policy/allow1.txt"
acl senior4 dstdomain -i "/usr/local/squid31/policy/allow3.txt"
####

acl dept2 url_regex -i "/usr/local/squid31/policy/allow4.txt"
acl dept3 dstdomain -i "/usr/local/squid31/policy/allow5.txt"
####

acl gl2 url_regex -i "/usr/local/squid31/policy/allowleader"
acl gl3 url_regex -i "/usr/local/squid31/policy/denyleader"
####
acl srgp2 url_regex -i "/usr/local/squid31/policy/allow6"
acl srgp3 dstdomain -i "/usr/local/squid31/policy/allow7"
####
acl nm2 url_regex -i "/usr/local/squid31/policy/allow8"
acl nm3 url_regex -i "/usr/local/squid31/policy/deny9"
acl nm4 dstdomain -i "/usr/local/squid31/policy/deny9"
###
acl mancom2 url_regex -i "/usr/local/squid31/policy/allowgl2"
acl global url_regex -i "/usr/local/squid31/policy/allowgl1"
###
acl noblock src "/usr/local/squid31/policy/allowdirect"

#Http_access

http_access allow manager localhost
http_access deny manager

http_access allow noblock
http_access allow global

The rest of your rules look like classic mistakes people are always making. Sorry if this gets too close to a lecture; I'm taking the opportunity to demonstrate some simple but effective optimizations for all the list readers.

Before we start: the rules as posted require a total of 5 helper lookups *minimum* for each and every request, and up to 14 helper lookups for a single request from some users.



First thing to notice: almost all of the following ACL security tests rely on auth having been performed for at least one of their criteria.

We could take advantage of this and force auth to be used by all visitors.

  acl authed proxy_auth REQUIRED
  http_access deny !authed

At this point we don't care who they are or whether they are allowed to use the proxy, only that they have presented credentials and the credentials are valid. The final "allow all" rule currently undermines this, since clients with invalid or missing credentials can fall through to it and be permitted anyway.


Now to the first actual permission rule:

http_access allow senior1 senior3

 Start by looking at the types of these ACLs.
 * senior3 is a dstdomain, one of the fastest ACL types available.
 * senior1 is an external ACL using auth details. This requires not one but possibly two stop-and-wait actions while both the auth and external-acl helpers produce results.

Ordering these ACLs so the fastest one runs first avoids the long waits and extra helper work whenever the fast test alone decides the outcome.

The fast result:
 http_access allow senior3 senior1



The next permission rule:

http_access deny senior1 senior4 all

This rule suffers from the same problem as the earlier one:
 * senior4 is again a fast dstdomain
 * senior1 is again a slow helper lookup

One extra thing to notice is the "all" at the end.

There is only one reason for using "all" in this way: to suppress Squid from requesting auth credentials when none have been given or the ones given are invalid. Note that it ONLY works when the ACL on the line right before the "all" is one which challenges for auth, namely one of type proxy_auth, or one of type external where the external-acl helper uses %LOGIN.

As the rule is currently written, the "all" is suppressing a dstdomain from challenging for auth. Since dstdomain never challenges, the "all" is completely useless.

After re-ordering the ACLs for speed and reduced auth workload this changes. The "all" *might* be useful there if the user had not provided any auth credentials at all. Note this is one reason why we suggest "http_access deny !authed" above: that line would ensure the credentials already exist at this point, and the "all" would not be needed.

The optimization result:
  http_access deny senior4 senior1 all
or
  http_access deny senior4 senior1



Repeating these two simple steps of looking at the ACL types and then sorting by speed, you can optimize these rules so that at most one or only a few helper lookups are done per request, sometimes none at all. A possible re-ordering is sketched after the quoted rules below.

http_access allow dept1 dept3
http_access allow gl1 gl2
http_access deny gl1 gl3 all
http_access allow srgp1 srgp3
http_access deny nm1 nm4 all
http_access allow nm1 nm2
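
For illustration, one possible result of applying the same re-ordering to those rules. This is only a sketch: it assumes the *1 ACLs keep their slow external group lookups, the others are the fast dstdomain/url_regex lists defined above, and the "http_access deny !authed" rule suggested earlier is in place so the trailing "all" entries are no longer needed.

  http_access allow dept3 dept1
  http_access allow gl2 gl1
  http_access deny gl3 gl1
  http_access allow srgp3 srgp1
  http_access deny nm4 nm1
  http_access allow nm2 nm1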

The final rule (quoted below) is scary. Right now the external ACL and/or auth helper is being called on every request due to the inefficient testing order.

When the request testing is optimized to operate fast we have the potential for many requests to reach here without being authenticated at all.

Rather than a universal "allow all", a choice should be made about whether auth is required or optional to use the proxy.

http_access allow all
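
For example, if auth is to be required for all proxy use, the end of the http_access list might look like this instead (a sketch building on the "authed" ACL suggested earlier, not a drop-in replacement):

  # near the top, before the group rules:
  http_access deny !authed
  # ... the group allow/deny rules go here ...
  # and at the very end, a default deny instead of allow all:
  http_access deny all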

#squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf
url_rewrite_children 50
url_rewrite_access allow !noblock !senior2 !dept2 !gl2 !gl3 !srgp2 !nm2 !nm3 !mancom2 !global
url_rewrite_access deny all


A side issue: instead of performing this whole list of tests, note that most of these are external-ACL-based lookups. Using the tag=X return result from external-ACL helpers you could set a tag to indicate that *any* group has been tested and matched. Checking that one ACL here would eliminate several tests in the re-writer condition in a single step.
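
A rough sketch of that tag mechanism, with caveats: it assumes the group helper is modified (hypothetically, the stock wbinfo_group.pl does not do this) to reply "OK tag=grouped" whenever the user matches any group of interest, and that your Squid honours the tag= helper reply and provides the "tag" ACL type. Which of the list tests the tag can really stand in for depends on your policy.

  # hypothetical: the helper tags any successful group match with "grouped"
  acl ingroup tag grouped

  # one tag test then replaces the run of per-group negations, e.g.:
  url_rewrite_access deny noblock
  url_rewrite_access deny ingroup
  url_rewrite_access allow all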


Thanks
Senthil

Amos Jeffries wrote:
On 15/01/11 07:35, Senthilkumar wrote:
Hi All,

I am using Squid Cache: Version 3.1.8, with the NTLM scheme configured using
Samba, ClamAV + ICAP, and squidGuard.
All of the clients are Windows machines joined to the domain. The browsers
authenticate using the ntlm scheme without a password pop-up and
everything is working fine.

We have two issues:
1. We are using many ACLs to allow and deny websites on the basis of the
ADS groups, using wbinfo.pl. From time to time the users report that
the authentication pop-up occurs.
In cache.log we can find the following:

2011/01/14 12:27:50| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:27:50| WARNING: 25 pending requests queued
2011/01/14 12:56:48| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:56:48| WARNING: 25 pending requests queued
2011/01/14 12:57:36| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:57:36| WARNING: 25 pending requests queued
2011/01/14 14:00:03| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 14:00:03| WARNING: 25 pending requests queued
2011/01/14 14:00:06| WARNING: Closing open FD 229
2011/01/14 14:01:09| WARNING: All ntlmauthenticator processes are busy.

We just increased it to 30 for ntlm and 30 for wbinfo (external), but it still
occurs. Does the ntlm scheme have any new behaviour?


Also, wbinfo has a maximum capacity limit of only ~256 lookups, shared
across all helpers AFAIK. When this limit is exceeded the lookups get
queued, and when the queue fills, clients are rejected.

2. When we browse a website and leave the browser idle for 30-60 minutes,
a "cannot display page" error occurs.

strange.

In squid.conf we have used the following values:
half_closed_clients off
client_persistent_connections off
server_persistent_connections off
Is this squid's default behaviour? Please suggest suitable options
in squid.conf to overcome it.

Eek!

Firstly, the NTLM scheme authenticates a TCP connection, *not* a user.

Secondly, the NTLM scheme requires *three* full HTTP requests to be
performed to authenticate and fetch an object.

So... without persistent connections your Squid and its client
browsers are consuming up to 3x the traffic (and bandwidth)
they normally would.
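
For reference, a minimal sketch of the connection settings implied above (these "on" values are also the squid.conf defaults); the NTLM handshake needs the client connection to stay open between its steps:

  client_persistent_connections on
  server_persistent_connections on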


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4

