Hi.
I've been trying to set up a local Squid server on my
home LAN to cache HTTP (not HTTPS) pages. I want to
avoid any client configuration, so I'm aiming for a
transparent proxy, with Squid running in intercept mode.
In my network setup, the Squid server is inside the
LAN together with its clients, not sitting between
the clients and the router/modem as all the guides
assume. Furthermore, requests originating from the same
machine where Squid is running should be cached as well.
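Roughly, the topology looks like this (just a sketch for
illustration):

clients ────┐
            ├── LAN ── MikroTik router ── modem ── internet
squid box ──┘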
I've set up Squid inside a Docker container, on a
Fedora 24 image. The Squid version is 3.5.19. In
squid.conf I've added a new http_port line, for port
8080 with the intercept flag:
http_port 8080 intercept
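For context, the relevant squid.conf excerpt then looks
roughly like this (the 3128 line is the stock
explicit-proxy port from the default config, left as-is):

http_port 3128
http_port 8080 intercept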
My router is a MikroTik RouterBOARD, so it's trivial
to set up a DNAT rule to redirect all TCP requests to the
Squid server. To avoid forwarding loops, I've marked all
packets originating from the Squid machine with DSCP 4
using iptables rules, and excluded those from the DNAT
rule on the router. I've tested this by running wget
requests from inside the Docker container, and those went
through without being redirected.
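For reference, the two rules are along these lines (I've
narrowed the redirect to port 80 here since only HTTP is
being cached; 192.168.1.2 is a placeholder for the Squid
box's address, not my exact value):

# on the squid host: mark all outgoing packets with DSCP 4
iptables -t mangle -A OUTPUT -j DSCP --set-dscp 4

# on the MikroTik: dst-nat port-80 traffic to squid,
# skipping packets already carrying DSCP 4
/ip firewall nat add chain=dstnat protocol=tcp dst-port=80 \
    dscp=!4 action=dst-nat to-addresses=192.168.1.2 to-ports=8080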
Now comes the problem:
When any of the redirected requests reaches Squid, Squid
replies instantly with TCP_MISS/403. Since all traffic
from the Squid machine is marked with a specific DSCP,
it's also easy to see that Squid made no requests to the
outside world before giving that reply. Running tcpdump on
the host machine shows no packets being sent other than
the 403 reply.
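The tcpdump check was something along these lines (eth0
stands in for the actual host interface):

tcpdump -ni eth0 'tcp port 80'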
What's happening? Why doesn't Squid try to fetch the
requested pages at all?
TL;DR:
When running Squid in intercept mode inside a Docker
container, with traffic routed to it through dst-nat rules
on an external router, Squid replies with '403 Forbidden'
to all requests. access.log lists TCP_MISS/403, but
tcpdump indicates that Squid never tries to query the
requested page at all.