On 11/05/2012 12:42 a.m., Lucas Diaz wrote:
This config is repeated for each worker on the frontend:
if ${process_number} = 1
cache_peer 172.16.10.69 parent 3150 3250 htcp no-tproxy
cache_peer_access 172.16.10.69 deny no_proxy2
cache_peer_access 172.16.10.69 allow streaming_allow_url
cache_peer_access 172.16.10.69 deny all
cache_peer 172.16.10.73 parent 3151 3251 htcp no-tproxy
cache_peer_access 172.16.10.73 deny no_proxy
cache_peer_access 172.16.10.73 deny streaming_allow_url
cache_peer_access 172.16.10.73 deny videocache_method videocache_allow_url
cache_peer_access 172.16.10.73 allow www_bulk
cache_peer_access 172.16.10.73 deny all
htcp_port 3351
endif
The peer 172.16.10.69 is the same for all workers; the other peer changes
from worker to worker. The peer 172.16.10.69 (the backend) also runs a
rewriter for a few URLs.
The thing is that if I use the frontend proxy manually (i.e. by
configuring my browser to use it), it works: the traffic is sent to the
parent with the rewriter. On the other hand, if I intercept the traffic
on this frontend (which is also a router) with tproxy, it stops working.
All this is with 3.2.0.13 or above.
With 3.2.0.7 it works either way, manual or with tproxy.
The config files are exactly the same for both versions, 3.2.0.7 and
3.2.0.13. Both proxy frontends are running in the same machine; I stop
one and try with the other and so on.
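For context, interception on the frontend is normally enabled with a dedicated tproxy port alongside the forward-proxy port. A minimal sketch is below; the port numbers are illustrative assumptions, not taken from the posted config:

```
# Hypothetical ports, for illustration only:
# normal forward-proxy port (browsers configured manually)
http_port 3128
# separate port receiving TPROXY-redirected traffic from the router rules
http_port 3129 tproxy
```

Traffic arriving on the tproxy port is treated as intercepted, which is where the Host-header checks discussed below apply.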
Thanks.
I'm suspecting this is related to the Host header validation and
forgery detection added around 3.2.0.10.
Please try a 3.2.0.17 snapshot with revision r11557 or higher. If the
problem remains, please check the frontend cache.log entries about the
Host header, and the frontend access.log entries to see which TCP_*
status codes those requests receive.
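The access.log check above can be done from the command line. A sketch, assuming the default squid access.log format where the status tag is the fourth field (the sample log lines and file path below are made up for illustration):

```shell
# Hypothetical access.log excerpt in the default squid log format.
cat > /tmp/access_sample.log <<'EOF'
1336689720.123 45 172.16.10.5 TCP_MISS/200 1024 GET http://example.com/a - FIRSTUP_PARENT/172.16.10.69 text/html
1336689721.456 12 172.16.10.6 TAG_NONE/409 3876 GET http://example.com/b - HIER_NONE/- text/html
EOF

# Count the TCP_*/TAG_* status codes seen, to spot requests that
# never reach a parent.
awk '{print $4}' /tmp/access_sample.log | sort | uniq -c
```

Requests denied by the Host verification typically never select a peer, so they stand out from the normal TCP_MISS/TCP_HIT entries in such a summary.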
Amos