On 22/10/2012 3:02 a.m., Matthew Goff wrote:
I've tried searching and didn't see anyone else experiencing this, so
I apologize if someone has. I spent yesterday upgrading my Squid
install to support TPROXY so I can also intercept my IPv6 traffic that
leaves my home via a HE.net tunnelbroker connection.
I worked from both
http://www.squid-cache.org/mail-archive/squid-users/201206/0281.html
and http://wiki.squid-cache.org/Features/IPv6, and everything now
seems to work fine except for certain websites: namely, Google.
My network setup is as follows: ISP <-> RTR1 <-> (eth1) Squid (eth0) <-> RTR2 <-> Clients
I kept everything on a flat subnet for simplicity, as RTR2 is more of
a switch that accepts WiFi connections. The Squid box is a Debian
machine that sits physically in-line as a bridge.
I can watch my access log and see traffic going through the proxy on
both IPv4 and IPv6, with websites loading fine. The only site that
misbehaves seems to be Google. My temporary workaround was to access
Google over HTTPS only, since I do not intercept any SSL connections,
but then most of the results Google returns point to non-HTTPS
redirect pages at Google.com first instead of directly to the actual
website -- so I get timeouts there instead.
Do you have any info on how far into the system the packets supposedly
going to Google get before the hang? And what happens (or not) to cause
the hang?
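For example, a tcpdump on each side of the bridge would show whether the
TCP handshake with Google ever completes and where the packets stop (a
sketch; narrow the filter to a specific Google IP if there is too much noise):

  tcpdump -ni eth0 'tcp port 80'
  tcpdump -ni eth1 'tcp port 80'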
Software versions: kernel 3.1.0-1-amd64, iptables/ip6tables 1.4.14,
ebtables 2.0.9-2, squid 3.1.2
Please upgrade your Squid. 3.1.2 is very old now and Debian ships with
3.1.20.
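On Debian that should just be a package update (a sketch, assuming you
installed from the squid3 package):

  apt-get update && apt-get install squid3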
NP: the rest of my comments below are just on configuration security and
performance tweaks. Probably not related to your problem.
ebtables config: ebtables -t broute -Lx
Bridge table: broute
Bridge chain: BROUTING, entries: 4, policy: ACCEPT
-p IPv6 -i eth0 --ip6-proto tcp --ip6-dport 80 -j redirect --redirect-target DROP
-p IPv4 -i eth0 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP
-p IPv6 -i eth1 --ip6-proto tcp --ip6-sport 80 -j redirect --redirect-target DROP
-p IPv4 -i eth1 --ip-proto tcp --ip-sport 80 -j redirect --redirect-target DROP
iptables config: iptables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DIVERT tcp -- anywhere anywhere socket
TPROXY tcp -- anywhere anywhere tcp dpt:www TPROXY redirect 0.0.0.0:3128 mark 0x1/0x1
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain DIVERT (1 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK set 0x1
ACCEPT all -- anywhere anywhere
ip6tables config: ip6tables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DIVERT tcp anywhere anywhere socket
TPROXY tcp anywhere anywhere tcp dpt:www TPROXY redirect :::3128 mark 0x1/0x1
You are missing a security rule to prevent traffic being sent directly
from clients to the Squid http_port used by TPROXY.
That opens a DoS vulnerability: a client can make a single request
to localhost:3128 and your Squid will loop until it dies.
Add this before the DIVERT rule (using DROP rather than REJECT, since
the REJECT target is not valid in the PREROUTING chain):
-t mangle -A PREROUTING -p tcp --dport 3128 -j DROP
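Since you are intercepting both protocols, the same guard belongs in
ip6tables as well. A sketch of the full commands, assuming the TPROXY
port stays at 3128 (-I puts them ahead of the existing rules):

  iptables -t mangle -I PREROUTING -p tcp --dport 3128 -j DROP
  ip6tables -t mangle -I PREROUTING -p tcp --dport 3128 -j DROP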
You need to use a second, different port for the traffic going directly
to the proxy (PURGE requests, cache mgr lookups, error page icons,
etc.).
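A sketch of what that looks like in squid.conf (3129 here is an
arbitrary choice for the direct-traffic port):

  http_port 3128 tproxy
  http_port 3129

with the firewall guard above applied only to the tproxy port.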
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
Chain DIVERT (1 references)
target prot opt source destination
MARK all anywhere anywhere MARK set 0x1
ACCEPT all anywhere anywhere
squid.conf:
acl purge method PURGE # rsync
acl connect method CONNECT # SWAT
acl safe_ports port "/etc/squid3/safe_ports.acl"
acl manager proto cache_object
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
The above is a squid-2 hack. You can remove it from squid-3 configs.
acl localnet src "/etc/squid3/localnet.acl"
acl children src "/etc/squid3/children.acl"
acl guests src "/etc/squid3/guests.acl"
acl parents src "/etc/squid3/parents.acl"
acl block-dom dstdom_regex -i "/etc/squid3/block.dom"
acl block-kid-dom dstdom_regex -i "/etc/squid3/block-kid.dom"
acl nocache-dom dstdom_regex -i "/etc/squid3/nocache.dom"
acl whitelst-dom dstdom_regex -i "/etc/squid3/whitelst.dom"
acl block-url url_regex -i "/etc/squid3/block.url"
acl block-kid-url url_regex -i "/etc/squid3/block-kid.url"
http_access allow manager localnet
http_access deny manager
http_access allow purge localnet
http_access deny purge
http_access deny !safe_ports
http_access deny connect !safe_ports
Something strange is going on with your safe_ports definition.
The ACL we publish called "Safe_Ports" is a list of ports where it is
safe to send HTTP syntax traffic without causing problems.
The ACL we publish upstream called SSL_Ports is a *different* list of
ports where it is relatively safe to send CONNECT tunnels.
The two lists are very different due to the difference in traffic. There
are only a few ports where HTTP syntax could be mixed in with an attack
on the native protocol (e.g. SMTP email delivery). But CONNECT permits
arbitrary binary content to be delivered, so the unsafe ports there are
MUCH more numerous.
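For comparison, these are the stock definitions shipped in the default
squid-3 squid.conf:

  acl SSL_ports port 443
  acl Safe_ports port 80          # http
  acl Safe_ports port 21          # ftp
  acl Safe_ports port 443         # https
  acl Safe_ports port 70          # gopher
  acl Safe_ports port 210         # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280         # http-mgmt
  acl Safe_ports port 488         # gss-http
  acl Safe_ports port 591         # filemaker
  acl Safe_ports port 777         # multiling http

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports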
http_access deny parents block-dom
http_access deny parents block-url
http_access deny children block-dom
http_access deny children block-url
http_access deny guests block-dom
http_access deny guests block-url
http_access deny children block-kid-dom
http_access deny children block-kid-url
http_access allow parents whitelst-dom
http_access allow children whitelst-dom
http_access allow guests whitelst-dom
http_access allow localnet
The above looks very weird as well. You have a blacklist being applied
before a whitelist, then a catch-all doing ALLOW.
Without knowing the specifics of localnet, parents, children, and
guests, the above would appear to compact down to:
http_access deny block-dom
http_access deny block-url
http_access deny children block-kid-dom
http_access deny children block-kid-url
http_access allow localnet
Normally you have a catch-all policy action, in this case "allow
localnet", with a blacklist above it preventing certain things, and a
whitelist above the blacklist to catch the specific cases where the
blacklist matches the wrong things but cannot be fixed in the blacklist
itself.
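Using the ACL names from your config, that ordering would look something
like this (a sketch; scope the whitelist to whichever groups your policy
actually intends):

  http_access allow localnet whitelst-dom
  http_access deny block-dom
  http_access deny block-url
  http_access deny children block-kid-dom
  http_access deny children block-kid-url
  http_access allow localnet
  http_access deny all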
http_access deny all
cache deny nocache-dom
http_port 3128 tproxy
cache_mem 1024 MB
maximum_object_size_in_memory 4 MB
memory_replacement_policy lru
cache_replacement_policy lru
cache_dir aufs /storage/squid3 5120 16 256
store_dir_select_algorithm least-load
max_open_disk_fds 0
minimum_object_size 0 KB
maximum_object_size 204800 KB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid3/access.log squid
acl nolog-port port 443
acl nolog-mgr proto cache_object
acl nolog-dom dstdom_regex -i "/etc/squid3/nolog.dom"
acl nolog-url url_regex -i "/etc/squid3/nolog.url"
log_access deny nolog-port
log_access deny nolog-mgr
The above line should be:
log_access deny manager
And remove the "nolog-mgr" ACL.
log_access deny nolog-dom
log_access deny nolog-url
cache_store_log /var/log/squid3/store.log
log_fqdn on
strip_query_terms off
cache_log /var/log/squid3/cache.log
coredump_dir /storage/squid3
refresh_pattern . 0 20% 4320
quick_abort_pct -1
read_ahead_gap 256 KB
range_offset_limit 0 KB
via off
icp_port 0
htcp_port 0
ICP and HTCP are off by default in Squid-3. Remove the above two lines
from your config.
dns_nameservers 127.0.0.1 ::1
hosts_file /etc/hosts
forwarded_for off
client_db on
coredump_dir /storage/squid3
high_response_time_warning 1000
Is there something I'm missing here? I don't understand why I'm having
issues with only one website. I'd be happy to provide Wireshark info
or anything else needed.
Thanks for any assistance,
Matt Goff