RE: Authentication to Sharepoint not happening

Thanks Amos. Yeah, they were cut-and-paste errors. Other than that, I have tried using http11 with http_port and ignore_expect_100, and it still doesn't work.

I think this is by design in Squid. The following code in "client_side.c" suggests that it will always filter the "WWW-Authenticate" header out of the HTTP headers by treating it as an unproxyable auth type.

   /* Filter unproxyable authentication types */
   if (http->log_type != LOG_TCP_DENIED &&
       (httpHeaderHas(hdr, HDR_WWW_AUTHENTICATE))) {
       HttpHeaderPos pos = HttpHeaderInitPos;
       ....
       ....
       ...code here removes the "WWW-Authenticate" entries from the HTTP headers.

Also the following link suggests that proxy auth can't work in transparent mode: http://www.visolve.com/squid/Squid_tutorial.php#Authentication_

Can you please comment on this?

Regards,
Saurabh

-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx] 
Sent: Tuesday, February 01, 2011 3:34 PM
To: squid-users@xxxxxxxxxxxxxxx
Subject: Re:  Authentication to Sharepoint not happening

On 01/02/11 21:29, Saurabh Agarwal wrote:
> Hi Amos
>
> I am using Squid 2.7.STABLE7. Following is my configuration. I want to allow everything.
>
> http_port 192.168.11.35:3128 transparent
> acl from_localhost src 192.168.11.35

> http_port 10.102.79.82:3128 transparent
> acl from_localhost src 10.102.79.82
> http_port 10.102.79.82:3128 transparent
> acl from_localhost src 10.102.79.82

cut-n-paste error? http_port and ACL are defined twice.
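Presumably the intent was one listener per interface, something like this sketch (addresses taken from the config quoted above):

   http_port 192.168.11.35:3128 transparent
   http_port 10.102.79.82:3128 transparent
   acl from_localhost src 192.168.11.35 10.102.79.82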

> visible_hostname hostname
> acl foreign_networksAux1 dst
> acl foreign_networksapA dst 0.0.0.0/0

above ACL collapses to "acl foreign_networksapA dst all"

> tcp_outgoing_address 192.168.11.35 foreign_networksAux1
> tcp_outgoing_address 10.102.79.82 foreign_networksapA

May as well drop "foreign_networksapA" off that tcp_outgoing_address line. Since that ACL matches everything, it adds no meaning.
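That is, with no ACL the line applies unconditionally, so the simplified version would just be:

   tcp_outgoing_address 10.102.79.82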

> access_log none
> cache_log /dev/null

cache_log is not optional, for very good reasons. If you are that worried about stuff being logged, set "debug_options ALL,0" to receive only the critical failure events.
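For example, a minimal sketch (the log path is illustrative; use whatever fits your layout):

   cache_log /var/log/squid/cache.log
   debug_options ALL,0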

<snip>
> debug_options ALL,1
>
> acl manager proto cache_object
> acl all src 0.0.0.0/0.0.0.0
> acl all_dst dst 0.0.0.0/0.0.0.0

Easier to read and forward-portable:
   acl all src all
   acl all_dst dst all

Note that "dst all" matches any destination that resolves in DNS, while "src all" matches anything arriving from a machine via the IP protocol.

> http_access allow manager from_localhost
> http_access deny manager
> http_access allow all all_dst

Translation:
   allow a request if it arrives from a machine with an IP address and is destined for a machine that has an IP address.

Nice...  Open proxy with no logging and transparent hijacking on a 
standard port 3128 :).
Good thing your public IP is a little bit obscured.


This looks like a slightly confused configuration based on a loose 
explanation of the tcp_outgoing_address "dst" hack.

The real hack is to place this above any "http_access allow" lines:
   http_access deny all_dst !all

meaning: perform the DNS lookup on the destination (thus caching the result for tcp_outgoing_address to use), then fall through to the next http_access line, because the "!all" part makes the test impossible to match.

After doing that hack you *still* have to set up permissions for who is allowed to access the proxy.
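A sketch of how the pieces could fit together (the "localnet" ACL and its subnet are hypothetical; substitute your real client networks):

   acl localnet src 192.168.11.0/24
   http_access deny all_dst !all
   http_access allow localnet
   http_access deny all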

<snip>
>
> hierarchy_stoplist cgi-bin ?
> acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|videoplayback.*id)
> acl store_rewrite_list1 dstdomain .youtube.com .video.google.com \/(get_video\?|videodownload\?|videoplayback.*id)

cut-n-paste error? " \/(get_video\?|vi..." is not a valid domain name.
<snip>
>
> client_persistent_connections on
> server_persistent_connections on

Good.

<snip>
>
> # Shorten timeouts
> negative_ttl 5 minutes

Bad. This means: whenever a 4xx or 5xx happens on a URL, the error response is cached and served to all clients of that URL for the next 5 minutes, effectively a DoS on them.

This may be related to the 401 follow-up not working well.

Recommended value:
   negative_ttl 0 seconds


> connect_timeout 1 minute
> peer_connect_timeout 30 seconds
> read_timeout 15 minutes
> request_timeout 5 minutes
> half_closed_clients off
> pconn_timeout 1 minute

NTLM and Negotiate require two persistent connections (client-side and server-side) pinned together to operate. This timeout directly affects how often those paired TCP links are discarded, forcing new auth handshakes.
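A sketch of the relevant knobs together (the longer timeout value here is only illustrative):

   client_persistent_connections on
   server_persistent_connections on
   pconn_timeout 2 minutes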


So in summary: other than negative_ttl, and a small pconn_timeout possibly affecting things, this config looks like it should pass the auth headers just fine.


One other possibility you could try, since this is 2.7, is the HTTP/1.1 options.
   http_port ... http11

and these two:
http://www.squid-cache.org/Versions/v2/2.7/cfgman/server_http11.html
http://www.squid-cache.org/Versions/v2/2.7/cfgman/ignore_expect_100.html

server_http11 is the safest, with no known problem side effects. The http_port change may require ignore_expect_100 to cope with broken clients, though such broken client apps are slowly disappearing now.
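Putting those together, a sketch using the listening address from the config quoted above:

   http_port 192.168.11.35:3128 transparent http11
   server_http11 on
   ignore_expect_100 on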

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.10
   Beta testers wanted for 3.2.0.4


