Thanks Amos, perfectly targeted. Now I guess I'll be taking my troubles
off the Squid list. The output clearly shows that it is IIS that is
returning the 413. The HLB is not IIS, so obviously it is coming from the
Exchange/CAS level. But if anyone can hazard a guess as to why the CAS
behaves itself when not proxied yet rejects these requests when they come
through Squid, I'm all ears...
--bill
2013/08/18 09:15:01.193| processReplyHeader: key
'167D5E46E0E618965373B336E14716E9'
2013/08/18 09:15:01.193| GOT HTTP REPLY HDR:
---------
HTTP/1.1 413 Request Entity Too Large^M
Content-Type: text/html^M
Server: Microsoft-IIS/7.5^M
X-Powered-By: ASP.NET^M
Date: Sun, 18 Aug 2013 16:14:24 GMT^M
Connection: close^M
Content-Length: 67^M
^M
The page was not displayed because the request entity is too large.
----------
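One guess worth chasing on the IIS side, offered purely as speculation:
in certain SSL/authentication configurations IIS 7.5 buffers the request
entity up to uploadReadAheadSize (48 KB by default) before handing it
off, and anything larger draws a 413 - which is in the same "kilobytes,
not megabytes" range as the failing messages. Something along these lines
would raise that limit; the site/vdir path is an assumption based on the
default ActiveSync virtual directory and would need to match the actual
CAS setup:

  %windir%\system32\inetsrv\appcmd.exe set config ^
    "Default Web Site/Microsoft-Server-ActiveSync" ^
    -section:system.webServer/serverRuntime ^
    /uploadReadAheadSize:10485760 /commit:apphost

(followed by an iisreset on the CAS)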
On 8/18/2013 2:27 AM, Amos Jeffries wrote:
On 18/08/2013 6:06 a.m., Bill Houle wrote:
Greetings! We have a Squid 3.1.10 (installed via yum on 64-bit CentOS 6)
that we are using as a reverse proxy for Exchange. OWA, EWS, and
RPC-over-HTTPS seem to be operating without incident, but we have run
into "request too large" HTTP 413 errors with certain "large" ActiveSync
POST messages from mobile phones. iPhone and Android alike - equal
opportunity.

To be clear, these large messages really aren't that large - we're
talking kilobytes, not megabytes. But they generate a 413 error and stay
stuck in the phone's outbox. Other (smaller) messages sent afterward
sidestep the blockage and go through fine.
Our Exchange 2010 setup is a dual Client Access Server DAG fronted by a
hardware-based network load balancer. Squid points to the HLB, the HLB to
the DAG, and ultimately to the active CAS. If we run the same tests
internally (i.e., injecting the message at the HLB), everything goes
through fine. That would seem to indicate that the source of the 413 is
the proxy itself. But per the squid config (below), we should be running
with an "unlimited" request size, so I'm not sure why a 413 would be
thrown.
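For orientation, the Squid 3.1 directives that normally govern request
size look roughly like this; the values below are placeholders for
illustration, not the config actually in use here:

  # placeholders only - not the actual squid.conf from this setup
  request_header_max_size 64 KB     # request headers; 64 KB is the 3.1 default
  request_body_max_size 0           # 0 = no limit on client request bodies
  reply_body_max_size 0 allow all   # no limit on reply bodies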
The log snippet below should show a sync transaction from an iPhone,
followed by a failed "large" message send attempt. That is followed by a
successful send of a smaller message - so we know a POST works - and then
a failed retry of the one that still remains queued.

I tried to correlate against cache.log while running with "-k debug", but
it is difficult with all the traffic.

Any ideas?
Try "debug_options 33,1 11,9" instead.
Amos
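For anyone finding this thread later: the suggested setting went into
squid.conf roughly as below. The ALL,1 baseline is my own addition; only
the 33,1 11,9 part is from Amos's reply.

  # section 33 = client-side request handling, section 11 = HTTP
  debug_options ALL,1 33,1 11,9

  # then apply without a restart:
  #   squid -k reconfigure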