On 28/06/11 23:23, E.S. Rosenberg wrote:
> Hi,
> We recently switched to an NTLM-based setup and there is one quite
> annoying fluke for users that are not in the domain: when they open
> their browser they get multiple auth requests.
Well. Yes. That is how NTLM is supposed to provide better security than Digest. Users whose credentials cannot be checked by the DC can't authenticate.

Or are you using the words "in the domain" with a different meaning from what NTLM uses for in and out of domain? (registered with the DC "in", not registered "out" / general public machines)
> This is probably because the browser issues multiple requests and
> therefore gets multiple 407s back from Squid. Is there any way to
> avoid this? To make sure that the user only needs to type his/her
> password once (if they don't make a mistake)?
Maybe yes, maybe no. It depends on your version of Squid. We have had people do a lot of deep analysis of NTLM behaviour and fix many problems throughout the 3.1 series. Some were only fixable in the 3.2 betas due to the nature of the changes.

The big thing to be aware of is that persistent connections are not optional. They are REQUIRED.
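As a rough sketch of what that means in squid.conf (the helper path and child count here are only example values; ntlm_auth comes from Samba and your path may differ), the NTLM setup relies on the persistent-connection directives being left on:

```
# squid.conf - NTLM handshake state is tied to the TCP connection,
# so persistent connections must stay enabled (they default to "on").
client_persistent_connections on
server_persistent_connections on

# Example NTLM helper setup via Samba's ntlm_auth
# (path and children count are illustrative, adjust for your system):
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 20
```

Turning either persistent_connections directive off breaks the multi-step NTLM handshake, which is one common cause of repeated popups.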
Also depends on the user's system. The browser is what makes the choice to (a) open that many connections at once, and (b) show the popup. NTLM credentials are single sign-on, supposedly provided to the browser by the operating system. The user should never actually see even one popup from the browser. The popup is a last-resort effort by browsers.
> For a user like me, who on opening the browser is restoring tens if
> not hundreds of tabs, the amount of auth requests can be quite
> frustrating.
The browser can't find your credentials from your machine login, OR the proxy cannot verify them once they are handed over.
> A different question: I shortened shutdown_lifetime to 5 seconds (from
> the default 30 seconds) so that downtime is shorter when I change a
> setting that requires a restart instead of a reload. Is there any
> reason not to shorten this (possibly even to 1 or 0)?
shutdown_lifetime is the amount of time Squid is allowed to spend on a full save of the cache index and on finishing clients' current requests. The smaller it is, the more clients see failures, and the longer Squid's startup may take while it rebuilds a broken index from scratch before it can operate at full speed.
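For reference, the directive takes a time value with units, so the change described above would look something like this in squid.conf:

```
# squid.conf - grace period for a clean shutdown
# (default is 30 seconds; shorter values risk an unclean index save)
shutdown_lifetime 5 seconds
```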
> I can live with a download having to be done again, but half the
> campus not browsing is much less ideal...
> Thanks and regards,
> Eli
Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3