
Re: Squid returns 400 to GET / HTTP/1.1 with Host Header

Hi,

On Mon, Apr 23, 2018 at 5:58 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
> On 24/04/18 04:03, Stephen Nelson-Smith wrote:
>> Hi,
>>
>> On Mon, Apr 23, 2018 at 4:48 PM, Stephen Nelson-Smith wrote:
>>
>>> Adding that functionality would be an option,
>
> I think it is worth asking Mark Nottingham about adding that
> functionality.

I'll open an issue on the repo.  I've already forked it to add the 'use
a proxy' functionality, and would be happy to contribute this as well.
It would be a good way to get my head around it all anyway.

>>> but am I right in
>>> thinking squid should be able to infer the destination from the host
>>> header?
>
> No, that is rather dangerous. CVE-2009-0801 and its related nest of
> vulnerabilities are opened up if the Host header is trusted by a proxy.

Thanks for explaining - I'll look into that, but I can see what you mean.
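
If I've understood the issue, the danger is roughly this (a hypothetical
illustration only, using a documentation IP and an unrelated hostname):

    # The TCP connection goes to one server, but the Host header names a
    # different site.  A proxy/cache that trusted Host alone could store
    # this server's response under the other site's URL - cache poisoning.
    curl -sv http://203.0.113.10/ -H 'Host: www.example.com'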
>
>>>
>>> Just looking at the documentation for http_port, would adding
>>> 'intercept' help, or is that explicitly for interception caching in
>>> conjunction with a traffic filter?
>>
>> Adding `intercept` to `http_port` has resulted in the Host header
>> appearing as the URL in the request.
>>
>> Squid is now giving a 403... which it shouldn't... I think:
>
> Those are the CVE-2009-0801 protections doing their thing for intercepted
> traffic (second log line). The 10.8.0.33 IP is where the client was
> apparently going before being MITM'd into the proxy, so the server there
> MUST be able to handle whatever the client is expecting back, regardless
> of whether the proxy trusts it for caching purposes.
>
> But 10.8.0.33 is your Squid, so the traffic loops (first log line).
> Squid detects the loop and rejects it to prevent memory and TCP ports
> being consumed without bound.

Right.  I understand that in a traditional transparent proxy environment
we'd handle this with iptables/pf - something like the sketch below.
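
(A minimal sketch of what I mean, assuming Squid listens with
`http_port 3129 intercept` on a box that also routes the clients'
traffic; the port number and subnet are made up:)

    # Redirect the clients' port-80 traffic into Squid's intercept port.
    iptables -t nat -A PREROUTING -s 10.8.0.0/24 -p tcp --dport 80 \
        -j REDIRECT --to-ports 3129
    # With this in place the original destination Squid sees is the real
    # origin server, not Squid itself, so the request doesn't loop back
    # into the proxy.  Squid's own outbound traffic is locally generated
    # and never traverses PREROUTING, so it isn't re-intercepted.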

Short of making Redbot behave better through a proxy, is there a
solution I can use that will get me through my demo (I have to demo
this tomorrow) without resorting to a bunch of curls?  As long as
Redbot can make requests to a bunch of URLs and I can show the results
in the browser (failures and successes), I can worry about doing it
properly later - this is a throw-away environment and won't exist
after the demo.  The point of the demo is to show the proxy working
and being used by a web app.

Thanks for your time and insight - this is all tremendously useful and
informative.

S.
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



