Re: External ACL Auth & Session DB for 100+ clients behind NAT

Hi Amos,

Thanks for your detailed response.

On Tue, May 22, 2012 at 4:56 AM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
>> external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
>> acl loggedin external hosted_auth
>> deny_info https://hostedserver/auth.html loggedin
>> http_access deny !loggedin
>> http_access allow all
>>
> Please be aware there is no authentication in this setup, despite the login
> on your portal page.
> What you have is session-based *authorization*.
> It is a razor-thin line, but critical to be aware of, since NAT erases and
> plays with the %SRC key which you are using to identify clients. 1) NAT
> hides unwanted visitors on the POP networks. 2) The XFF workaround to undo
> the NAT is header based with risks of header forgery. So NAT introduces
> multiple edge cases where attacks can leak through and hijack sessions.

I understand the difference between Authentication and Authorization,
but here the prime motive is to enforce user-based access rules and to
perform AuthN / AuthZ over a secure channel against IMAP.

The idea is to segregate the zones into "Trusted" and "Non-Trusted". The
trusted zone is our HO, where a proxy forwards requests to our publicly
hosted Squid with the XFF header, while the "Non-Trusted" zones are our
spokes and roadwarrior users who are behind a simple NAT. Trusted-zone
users are allowed to access the proxy with just authorization (session /
form based), while Non-Trusted-zone users must authenticate explicitly
(proxy-auth). This way, we can enforce the policies based on users
instead of IPs.
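
A rough squid.conf sketch of the split I have in mind (the acl names,
addresses and subnets below are placeholders, and the auth_param setup
for the non-trusted side is deliberately left out):

# Trusted zone: HO clients arriving via our XFF-trusted edge proxy
acl ho_edge src 192.0.2.10              # placeholder: HO edge proxy address (trusted for XFF)
follow_x_forwarded_for allow ho_edge
acl ho_clients src 10.0.0.0/8           # placeholder: HO internal subnets (the indirect client)
acl loggedin external hosted_auth       # the session helper from my original config
deny_info https://hostedserver/auth.html loggedin
http_access deny ho_clients !loggedin   # HO users without a session get sent to the portal
http_access allow ho_clients

# Non-Trusted zones: spokes / roadwarriors behind NAT must do explicit proxy-auth
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all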

Again, the problem is secure authentication against IMAPS. Mail is
hosted on Google and we can't use the DIGEST credentials that we receive
from browsers. BASIC auth is ruled out as well for security reasons.
VPN / Stunnel is not considered because of the user credential / machine
management overhead.

>>  While the HTML file displays a login
>> form over HTTPS and sends a request to a CGI script which authenticates
>> against IMAPS and populates the DB with session information. I
>> understand that I cannot use cookies for authentication as the browser
>> will not include a cookie set by our authentication page in requests to
>> other domains.
>
> Correct.

On some more googling, I found something called "Surrogate Cookies" here:
https://kb.bluecoat.com/index?page=content&id=KB3407
https://kb.bluecoat.com/index?page=content&id=KB2877

From what I could understand, their primary usage is with a reverse
proxy in front of webservers with a limited set of domains behind them,
but it seems they are also used for surrogate authentication in normal
proxy deployments by forcing the proxy to accept cookies for any domain.
Even the commercial proxies advise against using surrogate credentials
wherever possible. The major disadvantage I can see is that they can't
be used with wget, lynx, elinks, Java applets etc., which expect the
usual proxy authentication.

> bit lacking in how to merge the format "%SRC %<{X-Forwarded-For}" into one
> UUID token. There is the space between the two tokens and the XFF header is
> likely to contain spaces internally which the script as published can't
> handle.
> HINT: If anyone has a fix for that *please* let me know. I know it's
> possible, I stumbled on a perl trick ages back that would do it then lost
> the script it was in :(

The following substitutions should help if you just want to strip spaces
from the $token string (use whichever fits):

my $token = "%SRC %<{X-Forwarded-For}";
$token =~ s/ //;    # this removes only the first space
$token =~ s/ //g;   # this removes all the spaces in the string

If you could send sample input strings and the final expected result,
I can help with hacking the Perl code.
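
And in case it is of use for the helper itself, here is a rough sketch of
how a single key could be rebuilt from "%SRC %<{X-Forwarded-For}" by
splitting on the first space only, so a multi-hop XFF value with embedded
spaces stays in one piece (the DB lookup and the replies are placeholders):

#!/usr/bin/perl
use strict;
use warnings;

$| = 1;                                # helpers must not buffer their output

while (my $line = <STDIN>) {
    chomp $line;
    # split on the first space only: $src gets %SRC, $xff gets the whole XFF value
    my ($src, $xff) = split / /, $line, 2;
    $xff = '' unless defined $xff;
    $xff =~ s/\s+//g;                  # collapse the spaces inside the XFF list
    my $token = $src . '-' . $xff;     # one session key for the DB lookup
    # ... look $token up in the session DB here ...
    print "OK\n";                      # or "ERR\n" when no session is found
}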

I have also written an auth helper based on the existing POP3 auth
helper. It authenticates against IMAP or IMAPS depending on the
arguments provided, e.g.:

## IMAPS against Google, but return ERR if the user tries to
## authenticate with @gmail.com
imap_auth imaps://imap.google.com mygooglehostedmail.com

## IMAP auth against my own IMAP server
imap_auth imap://imap.mydomain.com mydomain.com
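
For reference, a simplified sketch of how such a helper can work -- the
Mail::IMAPClient usage and the details below are illustrative, not the
exact code:

#!/usr/bin/perl
# Squid basic auth helper: validate credentials against IMAP or IMAPS.
use strict;
use warnings;
use Mail::IMAPClient;

$| = 1;
my ($uri, $domain) = @ARGV;            # e.g. imaps://imap.google.com mygooglehostedmail.com
my ($scheme, $server) = $uri =~ m{^(imaps?)://(.+)$}
    or die "usage: imap_auth imap[s]://server allowed-domain\n";

while (my $line = <STDIN>) {
    chomp $line;
    my ($user, $pass) = split / /, $line, 2;

    # Only accept users of the hosted domain, e.g. reject plain @gmail.com logins
    unless (defined $pass && $user =~ /\@\Q$domain\E$/i) {
        print "ERR\n";
        next;
    }

    # Connect and LOGIN; use IMAPS when the scheme asks for it
    my $imap = Mail::IMAPClient->new(
        Server   => $server,
        User     => $user,
        Password => $pass,
        Ssl      => ($scheme eq 'imaps' ? 1 : 0),
    );
    print(($imap && $imap->IsAuthenticated) ? "OK\n" : "ERR\n");
    $imap->logout if $imap;
}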

Where should I submit that as a contribution to Squid?

> Having edge proxies in the POP also enables you to set up a workaround for
> NAT which XFF was designed for....
> * The edge proxies add client (pre-NAT) IP address to XFF header, and
> forward to the central proxy.
> * The central proxy only trusts traffic from the edge proxies (eliminating
> WAN attacks).
> * The central proxy trusts *only* the edge proxies in an ACL used by
> follow_x_forwarded_for allow directive. Doing so alters Squid %SRC parameter
> to be the client the POP edge proxy received.
> This setup also allows you to encrypt the TCP links between POP edge proxies
> and central if you want, or to bypass the central proxy for specific
> requests if you need to, and/or to offload some of the access control
> into site-specific controls in the POP edge proxies.

Thanks for the detailed setup guidance. I have actually already put
the edge proxy in place as you suggested, and follow_x_forwarded_for
is working as expected for the HO traffic.
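
For anyone reading this in the archives later, the trust configuration on
the central proxy is roughly the following (the acl name and addresses
are placeholders):

acl pop_edges src 203.0.113.0/24       # placeholder: HO / POP edge proxy addresses
follow_x_forwarded_for allow pop_edges
follow_x_forwarded_for deny all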

> Depending on how complex and specific your access control is it may be worth
> pushing much of it into the POPs and having database links back to HQ for
> the smaller traffic load of details checking, rather than the full HTTP
> workload all going through HQ.

We want to keep it simple: one main proxy with all the rules
configured on it. This saves the trouble of managing 60 sets of
site-local rules. Maybe we can configure AV and Windows updates to be
fetched directly from the POPs without loading the central hosted proxy.
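
Something along these lines on the POP edge proxies should do it (the
parent name and domains are placeholders; the AV vendor sites would be
added to the same acl):

cache_peer central-proxy.example.com parent 3128 0 default no-query
acl direct_ok dstdomain .windowsupdate.com .update.microsoft.com
always_direct allow direct_ok          # updates fetched directly from the POP
never_direct allow all                 # everything else goes via the central proxy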

Thanks a lot again.

regards,
Nishant


