Re: TProxy and client_dst_passthru

Hi Amos,

216.58.220.36 != www.google.com ???
Have a look at http://www.ip-adress.com/whois/216.58.220.36: this is Google.

Depending on the DNS server used, the IP can change; we know that,
especially because of BGP.
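
(To illustrate: if I read squid.conf right, one partial mitigation on our
side would be to make Squid query the same resolvers the downstream clients
use, something like the sketch below; the resolver IPs are only
placeholders.)

  # squid.conf sketch - placeholder resolver IPs, not a recommendation.
  # Point Squid at the resolvers the client ISPs use, so its own lookup
  # is more likely to return the same record the client already got.
  dns_nameservers 203.0.113.53 203.0.113.54

But with several small ISPs each running their own resolvers, no single
setting like this can cover everybody.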

In the case where the client is an ISP providing internet to smaller ISPs,
whose end users use yet other DNS servers, my understanding is that because
of ORIGINAL_DST Squid checks the Host header against its own DNS lookup, and
when the records do not match it will not cache, even with a Store-ID
helper, because there are too many different DNS servers in the loop
(users -> small ISP -> big ISP -> squid -> internet). Am I right?
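
(For reference, the closest knobs I can find in squid.conf.documented are
sketched below; as far as I understand, neither of them makes Squid cache a
response whose destination it could not verify, so they do not really solve
this.)

  # squid.conf sketch - my reading of the current directives; please
  # correct me if the defaults below are wrong.
  # Relay requests that fail Host verification to the TCP-level original
  # destination instead of the Host header destination.
  client_dst_passthru on
  # Keep the strict Host-vs-destination-IP check disabled (the default),
  # so mismatching requests are relayed rather than rejected.
  host_verify_strict off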

So the result is a very poor 9% saving, where we could expect around 50%.

Could you plan, for a future build, a workaround that accepts the
destination host name from the request headers and performs the DNS check
only if the headers do not contain one?
I understand Squid has to provide some security checks, but we should have
the possibility to switch these checks ON/OFF.
Or do we need to downgrade to Squid 2.7/3.0?
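
(To be clear about the first option I am asking for, something along these
lines; the directive name below is purely hypothetical, it does not exist in
any Squid release:)

  # squid.conf - HYPOTHETICAL directive, only to illustrate the request:
  # trust the host name given in the request headers for caching, and fall
  # back to a DNS check only when the request carries no host name at all.
  # trust_host_header_for_cache on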

ISPs need to cache a lot; security is not their main concern.

Thanks in advance.
Fred



