Re: squid 3.2 and POST

On 16/07/2013 2:10 a.m., Eugene M. Zheganin wrote:
Hi.

I use caches in a corporate environment and their main purpose is
authorization and accounting, so I use various AD authorization schemes.
Recently I switched most of my proxies to squid 3.2.x and ran into a
problem. The problem appears on various upload sites across the internet
(you know, like depositfiles and so on - the sites that host users'
data). When a user tries to upload a file, such a site and the user's
browser exchange a series of requests and replies, for example
GET/GET/OPTIONS/POST, and squid serves each request after it issues a 407
response to the client browser, and the browser, in its turn, resends the
request with a proxy authentication token. Everything is fine when the
file is relatively small, but when the user tries to send a large file (I
don't know where the threshold is; for example 700 Kbytes is okay, but
17 megabytes is not) squid, for some reason, doesn't send the 407
after the first POST from the browser which starts the upload of the actual
file (in short: the first "large" POST isn't answered by squid and
isn't served). I captured the whole sequence with tcpdump and examined
it with wireshark.

Is this "first POST" using a brand-new connection, or a connection which has already been opened and previously authenticated?

What can the problem be here? I tried to switch off the keep-alives on the
SPNEGO/NTLM schemes I'm using, but this didn't help.

The auth_param "keep-alive" option is the only thing you could possibly have been able to turn off. That one "just" causes connection closure after the first 407 sent by Squid in NTLM, resulting in the stage-1 handshake request aborting early and a second connection being opened, with client credentials sent on the client's next "first" request.
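
For reference, that option sits in squid.conf alongside the scheme definition. A minimal sketch, assuming the Samba ntlm_auth helper (helper path, options and children count are illustrative only, not a recommendation):

  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 20
  # when off, Squid closes the client connection after sending its first
  # NTLM 407 challenge instead of holding it open for the handshake
  auth_param ntlm keep-alive off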


Assuming the client does that GET-1/GET-2/OPTIONS/POST sequence with two different GETs, the behaviour I would expect to see with NTLM in Squid-3.2, using the default HTTP/1.1 and squid.conf persistence settings, is:

(a)
 * the client opens a new connection
 * client sends GET (#1)
 * Squid 407 challenges for credentials offering NTLM as one of the proxy-auth methods
(b)
 * Client sends GET (#1) with a hash token (type-1)
 * Squid 407 challenges with a server token
 * client sends GET (#1) with shared token (type-3)
 * Squid delivers the response to GET #1.
(c)
 * client sends GET (#2) with shared token (type-3)
 * Squid delivers the response to GET #2.
(d)
 * client sends OPTIONS with shared token (type-3)
 * Squid delivers the response to OPTIONS.
(e)
 * client sends POST with shared token (type-3)
 * Squid delivers the response to POST.

All of this is done over one HTTP/1.1 connection. 407s are sent only twice by Squid for the whole sequence.
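
On the wire, steps (a) and (b) look roughly like this (a sketch only; the URL is made up and the tokens are shortened, but Proxy-Authenticate / Proxy-Authorization are the standard headers):

  GET http://upload.example.com/ HTTP/1.1        <- client, new connection

  HTTP/1.1 407 Proxy Authentication Required     <- Squid, initial challenge
  Proxy-Authenticate: NTLM

  GET http://upload.example.com/ HTTP/1.1        <- client, same connection
  Proxy-Authorization: NTLM <type-1 hash token>

  HTTP/1.1 407 Proxy Authentication Required     <- Squid, server token
  Proxy-Authenticate: NTLM <type-2 server token>

  GET http://upload.example.com/ HTTP/1.1        <- client, same connection
  Proxy-Authorization: NTLM <type-3 shared token>

  HTTP/1.1 200 OK                                <- Squid delivers GET #1

Steps (c) through (e) are then just the last request/response pair repeated, each request carrying the same type-3 token.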

** If you turn off the "auth_param ntlm keep-alive" setting, Squid will close the connection at the point marked (b); the client will open a new one and continue the sequence using that.

** If you turn off HTTP's normal server_persistent_connections or client_persistent_connections settings, Squid will close the connection at points (b), (c), (d), (e), but keep it persisting "pinned" between points (b) and (c). You end up with each request requiring 1 or 2 TCP connections to be set up and authenticated before the response gets through, but otherwise *appearing* to work fine (but slow) from the user's perspective. Points (c), (d), and (e) all become a repeat of the setup sequence (a)->(b). When you do that at (e), or send a POST on a new connection for any other reason, the client ends up re-sending the entire POST body object x3, just like GET #1 was re-sent x3. Large POST bodies start to show how broken NTLM is at that point.
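
For clarity, the persistence settings being referred to are these squid.conf directives, shown here at their defaults (just a sketch to make the knobs explicit):

  client_persistent_connections on
  server_persistent_connections on
  # setting either to off forces the reconnect-and-reauthenticate behaviour
  # described above, with the POST body re-sent on each new connection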


Nasty as they are, the above are the perfectly normal "working" NTLM behaviours. If your traces are showing something else going on *by Squid* then you have a bug. We have indeed found a few such bugs in Squid's NTLM and persistence handling. Once you have confirmed that it is a bug and not just one of the above "working" NTLM problems, please repeat the test using the latest 3.3 release and, if possible, the latest 3.HEAD daily bundle to see if it is one we have found and fixed already. http://wiki.squid-cache.org/SquidFaq/BugReporting has more on the process of reporting.

NP: There was one bug fixed in 3.3.1 related to HTTP/1.1 keep-alive which was showing up with some NTLM clients and is possibly still present in 3.2. The Squid-3.3 patch can be found at http://www.squid-cache.org/Versions/v3/3.3/changesets/squid-3-10728.patch although the preferred action is of course to upgrade to the latest 3.3. Squid is on a release-often cycle now, so the 3.2->3.3 changes are quite small compared to previous version differences (much safer to do on production servers than ever before).


PS. I expect to have some time in the next few weeks and will be looking into similar issues for another client. If you need a developer to take a closer look at fixing this and can pay for development support time, please contact me privately about support contracts. Of course, with any luck it will be bug 2936 or another unidentified issue already fixed in 3.3.

HTH
Amos




