
Re: External ACL Auth & Session DB for 100+ clients behind NAT


 



On 22.05.2012 00:58, Nishant Sharma wrote:
Hi,

Greetings to all from a new user to the list.

A little background on my implementation scenario:

* There are around 60 site offices
* Each site has around 5-6 users
* Head Office has 100+ users
* Currently we are back-hauling all the traffic to HO and using squid
for access control

The obvious drawback is that site offices are not able to utilise
their full bandwidth (DSL, 512kbps - 1Mbps) as HO is the bottleneck
with a 4Mbps 1:1 line. The alternative solution that we are working
on is to:

1. Configure squid on a hosted server
2. Ask all the users to configure the hosted proxy
3. Squid will be configured for Authentication
4. Authentication has to be done against IMAPS server

Now, the problem is, we cannot use Basic auth over the public Internet,
and if we use Digest auth, we cannot authenticate against IMAP. I had
a look at the external_acl_type authentication mechanism discussed on
the list and have configured something like:

external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
acl loggedin external hosted_auth
deny_info https://hostedserver/auth.html loggedin
http_access deny !loggedin
http_access allow all

This auth.pl will check against a session DB (probably MySQL) whether
the user is already authenticated or not.

Please be aware that there is no authentication in this setup, despite the login on your portal page.

What you have is session-based *authorization*.

The difference is that with real authentication the client has to prove they are who they claim to be on every request. With sessions, any attacker who can copy or forge a client's session details can gain access through the proxy: the client details are checked, but never validated beyond the request in which the session was created.

It is a razor-thin line, but critical to be aware of, because NAT erases and rewrites the %SRC key which you are using to identify clients: 1) NAT hides unwanted visitors inside the POP networks, and 2) the XFF workaround to undo the NAT is header-based, with the attendant risk of header forgery. So NAT introduces multiple edge cases where attackers can leak through and hijack sessions.
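To make the session-authorization flow concrete, here is a minimal sketch of what an auth.pl-style helper does. This is an illustration, not the actual script: it is in Python rather than Perl, uses SQLite instead of MySQL so it is self-contained, and the `sessions` table name and schema are assumptions. It follows the Squid external ACL helper protocol in its simplest form (one key per line in, "OK" or "ERR" out, no concurrency channel-IDs).

```python
#!/usr/bin/env python3
# Hypothetical stand-in for auth.pl: grant access only when the lookup
# key (here, the %SRC value) has a fresh row in the session database.
import sqlite3
import sys
import time

def check_session(db, key, max_age=3600):
    # True if `key` has a session row created within the last `max_age`
    # seconds; a missing or stale row means the client must (re)login.
    row = db.execute(
        "SELECT created FROM sessions WHERE key = ?", (key,)
    ).fetchone()
    return row is not None and (time.time() - row[0]) < max_age

def serve(db, lines=sys.stdin):
    # Squid writes one lookup key per line and expects "OK" or "ERR"
    # back, flushed immediately so the lookup does not stall.
    for line in lines:
        print("OK" if check_session(db, line.strip()) else "ERR",
              flush=True)
```

The login CGI on the portal page would be the only writer to this table, inserting a row after a successful IMAPS bind.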


 While the HTML file displays a login
form over HTTPS and sends a request to a CGI script which authenticates
against IMAPS and populates the DB with session information. I
understand that I cannot use cookies for authentication, as the browser
will not include a cookie set by our authentication page in requests to
other domains.

Correct.


I went through Amos' ext_sql_session_acl.pl which I am planning to use
in place of auth.pl. But here's another catch - since there is more
than one user behind the NAT, what parameter like %SRC could be used
to identify a user uniquely in the session database, one that is
persistently present in every request to Squid?

I suggest adding %>{X-Forwarded-For} as well. In its entirety the XFF header *should* contain the whole path from the client to your proxy. It is unsafe to trust any entry individually, but the whole thing can be hashed to a value unique to each path to an end-client.
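As a sketch of that idea (the function name is illustrative, not part of the published script), the %SRC value and the complete XFF chain can be hashed together so each distinct client-to-proxy path maps to one stable key:

```python
import hashlib

def path_token(src, xff):
    # Combine the %SRC value with the *entire* X-Forwarded-For chain.
    # No individual XFF entry is trusted on its own; only the complete
    # path is hashed, yielding a fixed-width key per end-client path.
    return hashlib.sha256(f"{src}|{xff}".encode()).hexdigest()
```

The same client behind the same chain of proxies always produces the same token, while any change in the path yields a different one.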


I see a mention of the UUID tokens in the script as well, but was not
able to understand how to use them.

The UUID is the %SRC parameter passed in.

As I noted when publishing it, the script is not perfect. My Perl skills are a bit lacking in how to merge the format "%SRC %>{X-Forwarded-For}" into one UUID token. There is the space between the two tokens, and the XFF header is likely to contain spaces internally, which the script as published can't handle. HINT: If anyone has a fix for that, *please* let me know. I know it's possible; I stumbled on a Perl trick ages back that would do it, then lost the script it was in :(
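One way around the space problem, sketched here in Python rather than Perl (the function name is mine, not from the script): split the helper's input line on the first space only, so the %SRC field comes off cleanly, then normalise whatever whitespace remains inside the XFF value before using the pair as a single key.

```python
def parse_helper_line(line):
    # Input is "%SRC %>{X-Forwarded-For}". Splitting on the FIRST space
    # only keeps any internal spaces with the XFF field instead of
    # breaking the token apart.
    src, _, xff = line.rstrip("\n").partition(" ")
    # Collapse whitespace inside the XFF chain so the pair becomes one
    # unambiguous session-database key.
    return src + "|" + ",".join(hop.strip() for hop in xff.split(","))
```

The Perl equivalent would be a split with a field limit of 2 followed by stripping whitespace from the second field.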


The script is designed for Captive Portal use, where the clients connect directly to the proxy. To use it in a hierarchy I recommend having a local proxy at each POP which forwards to your central proxy. The edge proxies set the XFF header for your central proxy to use.


Having edge proxies in the POPs also enables you to set up the workaround for NAT which XFF was designed for....

* The edge proxies add the client's (pre-NAT) IP address to the XFF header and forward to the central proxy.
* The central proxy only accepts traffic from the edge proxies (eliminating WAN attacks).
* The central proxy trusts *only* the edge proxies in an ACL used by the follow_x_forwarded_for allow directive. Doing so alters Squid's %SRC parameter to be the client which the POP edge proxy received the request from.
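On the central proxy, that trust arrangement might look something like the following squid.conf fragment (the addresses are placeholders, and follow_x_forwarded_for support may require Squid to be built with --enable-follow-x-forwarded-for):

```
# ACL matching only our POP edge proxies (placeholder addresses).
acl edge_proxies src 192.0.2.10 192.0.2.11

# Believe the XFF header only when the request comes from an edge
# proxy; %SRC then becomes the pre-NAT client IP the edge proxy saw.
follow_x_forwarded_for allow edge_proxies
follow_x_forwarded_for deny all

# Refuse everything that does not arrive via an edge proxy,
# eliminating direct attacks from the WAN.
http_access deny !edge_proxies
```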

This setup also allows you to encrypt the TCP links between the POP edge proxies and the central proxy if you want, to bypass the central proxy for specific requests if you need to, and/or to offload some site-specific access control into the POP edge proxies.


Depending on how complex and specific your access control is, it may be worth pushing much of it into the POPs and having database links back to HQ for the smaller traffic load of detail checking, rather than sending the full HTTP workload through HQ.


Amos

