Amos,

OK, got your points. What I don't understand is:

- The DNS records do not match. Squid does the DNS request by itself, downloads the object, delivers it to the client, and flags it with ORIGINAL_DST, right?
- The same request from another client goes the same way: it will be the same object, flagged ORIGINAL_DST too.
- Again and again... each time the same fresh object...
- Why do we repeat the same action if we deliver the same object each time? It drives me crazy...

Here I mean with "*client_dst_passthru off*" and "*host_verify_strict off*". I understand that "host_verify_strict on" must act as you explain, no problem: Squid re-checks the DNS because there is an issue, then downloads and delivers the object. But if Squid delivers the object anyway, it should be able to cache it when "*client_dst_passthru off*" and "*host_verify_strict off*" are set.

I agree Squid must respect CVE-2009-0801, but you/we should deal with it gracefully rather than just applying it bluntly... The right way would be: does Squid think the object is OK to be delivered? Yes: deliver it and cache it. No: block it and don't cache it. See what I mean? (Sorry to keep going on about this topic, but it's highly important...)

Fred.

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672048.html
Sent from the Squid - Users mailing list archive at Nabble.com.

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
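For reference, the setup being discussed might look roughly like this in squid.conf. This is only a minimal sketch: the two directives are the ones from this thread, while the port number and the TPROXY interception port itself are placeholder assumptions, not something taken from the original message:

```conf
# Directives under discussion in this thread: relax the Host-header
# verification and let Squid use its own DNS answer rather than the
# client's original destination IP.
host_verify_strict off
client_dst_passthru off

# Hypothetical TPROXY interception port (the port number 3129 is an
# assumption for illustration only).
http_port 3129 tproxy
```

With this combination, intercepted requests whose DNS answer differs from the client's destination IP are the ones that show up logged as ORIGINAL_DST, which is the repeated-fetch behaviour questioned above.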