On 22/06/2012 7:01 a.m., Benjamin Kingston wrote:
I've been having a very difficult time getting squid to play nicely with
youtube while logged in. I always get "an error has occurred" unless I
log out, then youtube works just fine. I've tried numerous
configuration changes to try to get it to play, and I'm looking into
icap to see if it may help, but I'm sure there's something simple I'm
missing. I'm not really interested in caching dynamic content or even
youtube right now, I just want to at least get squid making direct
requests on behalf of my clients.
Until it is clear what the not-mentioned "error" is and why it
occurred, looking for a solution or workaround is a waste of time.
The last message in access.log is referring to a crossdomain.xml, and
then the requests stop.
Aha. Good hint there. crossdomain.xml is a list of domains and URL
security settings the browser uses to decide whether plugins (ie flash
player) are allowed to make requests to them.
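For reference, a crossdomain.xml policy is just a small XML file served
by the site. A typical one looks roughly like this (the domains shown
here are only examples):

<?xml version="1.0"?>
<cross-domain-policy>
  <!-- flash content loaded from these hosts may fetch resources here -->
  <allow-access-from domain="*.youtube.com"/>
  <allow-access-from domain="*.ytimg.com"/>
</cross-domain-policy>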
* If you or any of the proxies supplying your clients with traffic are
playing URL-rewrite games it will break connectivity. CORS security
protection is designed to prevent the client being shown one URL while
the server receives another. Use an HTTP redirect instead and CORS is
not a problem - the client browser is aware of the URL change and
adjusts its CORS details to match.
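If you are rewriting via a url_rewrite_program helper, the difference
is only in what the helper prints back. A rough sketch, assuming the
classic one-request-per-line helper protocol (the helper path is just a
placeholder):

url_rewrite_program /usr/local/bin/redirector
url_rewrite_children 5

# For each request the helper can print back:
#   the original URL unchanged           -> nothing altered
#   a different URL                      -> silent rewrite, breaks CORS/crossdomain checks
#   "302:http://other.example.com/path"  -> real HTTP redirect, the browser sees the new URL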
In my browser (which has caching turned off) I
see "waiting for o-o.prefered.ord...lscache6.c.youtube.com". As I said
earlier, if I log out of my google account the videos load just
fine, and I have yet to find a site that has a similar issue. I
currently have a rule to force all requests to be direct, but I still
have the same issue.
The problem would then appear to be in Google's systems, not your proxy.
However, Squid only logs requests once they have finished. Does that URL
it was waiting for show up in the proxy logs if you abort the request in
the browser?
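To check, abort the stalled video load and then look for that host in
the log; something along these lines (adjust the path if your
access_log is not in the default location):

grep "lscache" /var/log/squid/access.log | tail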
I'm using a WCCPv2 infrastructure, and this is for personal use.
... it is also possible the browser is using non-HTTP connections when
you are logged in. Modern browsers use any of HTTP, HTTPS, SPDY and
WebSockets protocols to fetch web content these days.
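A quick way to see whether that is happening is to watch for client
traffic that is not plain port-80 HTTP while a video is failing. A
rough sketch (the interface name and client address are placeholders
for your own):

tcpdump -ni eth0 host 192.168.254.100 and not port 80

Any HTTPS/SPDY traffic showing up there is bypassing your port-80 WCCP
interception entirely.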
Sorry if this made it out to the list already, I wasn't sure if I was
successfully subscribed when I sent it the first time.
Below is my current config:
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
tcp_outgoing_address 192.168.254.2
udp_outgoing_address 192.168.254.2
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
#include /etc/squid/acl.extended.conf
acl direct dstdomain .
always_direct allow direct
"always_direct allow all" is better. However, all this does is prevent
Squid sending the request through a cache_peer and forces Squid to ass
it to eh DNS (DIRECT) web server for the domain.
You have no cache_peer configured, so it has no use in your config.
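i.e. with no peers those two lines can simply be dropped, or reduced to
the catch-all form. A minimal sketch, with a purely hypothetical peer
shown commented out to illustrate when always_direct actually matters:

# cache_peer upstream.example.net parent 3128 0 no-query
always_direct allow all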
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 192.168.254.2:3080 intercept
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?
strip_query_terms off
# Uncomment and adjust the following to add a disk cache directory.
cache_dir aufs /mnt/data/squid 9000 16 256
cache_mem 256 MB
maximum_object_size_in_memory 128 KB
# Leave coredumps in the first cache dir
coredump_dir /mnt/data/squid
# WCCP Router IP
wccp2_router 192.168.254.1
# forwarding 1=gre 2=l2
wccp2_forwarding_method 1
# GRE return method gre|l2
wccp2_return_method 1
# Assignment method hash|mask
wccp2_assignment_method hash
# standard web cache, no auth
wccp2_service dynamic 52
wccp2_service_info 52 protocol=tcp priority=240 ports=80
maximum_object_size 700 MB
minimum_object_size 4 KB
half_closed_clients off
quick_abort_min 0 KB
quick_abort_max 0 KB
vary_ignore_expire on
reload_into_ims on
log_fqdn off
memory_pools off
cache_swap_low 98
cache_swap_high 99
max_filedescriptors 65536
fqdncache_size 16384
retry_on_error on
offline_mode off
pipeline_prefetch on
# Add any of your own refresh_pattern entries above these.
#include /etc/squid/refresh.extended.conf
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320