Hi all,

I'm having trouble figuring out a way to configure a `squid` 3.4.10 instance running in a Debian 7 chroot as a transparent caching proxy, because the only configuration examples I can find involve `iptables`. On the one hand that is very useful, because it allows setups which work independently of environment variables and program settings; on the other hand it requires access to kernel module loading and unloading. It may be that the latter is considered an acceptable trade-off for the advantages of the former.

That leads me to the question of whether it is possible at all to configure a `squid` instance to store the data of requests going from within a local network to the outside (of the local network), in order to avoid fetching them again and again from the outside (a caching proxy, AFAIK, but I'm clarifying in order to avoid confusion). Of course, this would require setting an environment variable such as `http_proxy` on Linux, or even individual program settings, with all the disadvantages of the extra maintenance that entails. And what would be the case if I had access to `iptables` on the client, but not on the machine where `squid` is running?

After setting `http_port 3128 transparent` in `squid.conf`, I'm getting access-denial errors due to forwarding loops:

    2015/01/06 20:35:34 kid1| WARNING: Forwarding loop detected for:
    GET / HTTP/1.1
    Accept-Encoding: identity
    Host: google.com
    User-Agent: Python-urllib/3.4
    Via: 1.1 diskstation (squid/3.4.10)
    X-Forwarded-For: 192.168.178.22
    Cache-Control: max-age=259200
    Connection: keep-alive

But that's just an example, included to avoid questions like "What have you tried so far?". I'm more looking for an abstract, general answer.

Best regards,
Kalle

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
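P.S. To make the two variants I am comparing concrete, here is a rough sketch of each. The address `192.168.178.2` for the machine running `squid` is an assumption for illustration only, not taken from my actual setup:

```shell
# Variant 1: explicit (non-transparent) proxying via an environment
# variable on each client. No iptables needed, but every client and
# every program has to be configured individually:
export http_proxy=http://192.168.178.2:3128/

# Variant 2: client-side iptables, with squid on a *different* machine.
# A plain REDIRECT target only works for a local destination, so DNAT
# is needed to send the traffic to the squid host instead:
iptables -t nat -A OUTPUT -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.178.2:3128

# On the squid host, squid.conf would then carry the interception port
# (in squid 3.4 the current keyword is "intercept"; "transparent" is
# the older spelling of the same mode):
#   http_port 3128 intercept
```

Variant 2 is what my question about "iptables on the client but not on the squid machine" refers to.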