On 14/12/17 16:48, zhongzhe@xxxxxxxxxx wrote:
Hi, Amos
Thanks a lot for your reply. In my conf, squid is not only a
reverse-proxy server, but also a cache server. And that is important to
me.
Sure. Those two functions of proxying are independent of each other.
* Anything going through a caching proxy is cached regardless of how it
arrived or where it will end up in the network.
* The reverse-proxy is all about how traffic is routed when it goes over
network connections.
One thing to be aware of when testing with a Browser is that Browsers
also cache, and they usually get the same caching instructions as the
proxy does, and that can lead to an illusion in the visible behaviour:
It goes like this:
1) When the browser makes its first request, the object is not stored
in either the browser or the proxy cache. So both MISS and the
transaction fetches from the origin server.
2) When the object eventually becomes stale in the Browser cache, it
fetches or revalidates against the proxy copy.
*** Since the test Browser and the proxy got the content at identical
times, it also becomes stale in the proxy cache at the same moment it
went stale in the browser cache.
3) The proxy content (now also stale) gets a MISS or REFRESH, since the
origin server needs to be contacted again.
As you can see the proxy *is* caching between steps #1 and #2. But you
may not ever see it HIT in testing unless you are careful to clear the
browser cache between every test lookup, so that the browser and proxy
cache contents become different.
In production this is not usually a problem since multiple users cause
the proxy content to be updated irregularly. But it can still happen in
some situations with a low number of clients all with very different
browsing habits.
In fact, it doesn't normally happen in my production environment.
10.112.4.54 is the apache server (proxied by squid), and my requests are
also sent from it (such as browser requests and simulated HttpClient
requests from Java).
10.113.10.191 is the peer squid server, and it is shut down now.
10.113.10.190, as you see, is the current reverse-proxy server.
The whole process is like this:
10.112.4.54 (browser) _send request_ 10.113.10.190 _if not cached,
Small mistake there. The Browser uses DNS to find out what the IP
address for yourdomain.example.com is - DNS tells it 10.113.10.190.
The Browser contacts 10.113.10.190 and asks for
http://youlun.lvmama.com/something.
The proxy receives that URL and sees the dstdomain ==
"youlun.lvmama.com" as a reason to send it on to the origin server OR
the cache. Whichever is faster - local RAM and disk by definition being
faster to access than networking to another machine.
NP: the above is important for your requested process, since it should
now be clear that a reverse-proxy's http_access permissions do not need
to know anything about who the client is (src-IP) or where the domain is
hosted (dst-IP).
forward the request to apache server_ 10.112.4.54 (apache server) _return
the response page to browser_ 10.112.4.54 (browser)
_if cached, return the page_ 10.112.4.54 (browser)
I see. In which case there are more problems with your config than I
initially thought.
> below is my squid.conf
> acl gsrc src 10.112.4.54 10.113.10.191
> acl gdst dst 10.112.4.54 10.113.10.191
> http_access allow gsrc
> http_access allow gdst
>>What is the above supposed to mean?
For your desired process, do not use the above at all. Remove them
completely.
>
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7 # RFC 4193 local private network range
>
> acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
>
> acl purge method PURGE
> acl clientServers src 10.112.4.54
> http_access allow purge clientServers
> http_access deny purge
>
> acl gat method GET
> acl clientS src 10.112.4.54 10.113.10.190
> http_access allow gat clientS
> #http_access deny gat
>>The localnet ACL defines 10.*/8 as allowed and your rules below specify
>>that all localnet traffic is allowed.
>>
>>So the above four lines of config seem pointless.
You have configured the machines 10.112.4.54 and 10.113.10.190 as your
cache_peer servers. So why are they listed as "src"?
In a reverse-proxy "src" is the IP of a client requesting a URL.
"dst" is the destination server - as determined by DNS records for the
URL domain being fetched. In a reverse-proxy those DNS records should
hold the proxy's own IP address. So dst-IP is rarely ever useful, and it
is downright dangerous to make use of in a reverse-proxy.
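For illustration only (the client subnet here is a made-up example), the
difference looks like this:

  acl ourclients src 203.0.113.0/24         # who is asking (the client connection IP)
  acl mysites dstdomain youlun.lvmama.com   # what URL domain is being asked for

In a reverse-proxy it is the dstdomain type that matters, not the src or
dst IPs.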
ICP requests could not be sent to the peer server at first, so I added
it to try to solve that.
The above has nothing to do with ICP. It is some ACLs being used to
control *HTTP* request messages arriving into the proxy. It is deciding
whether Squid processes that request *at all*, or rejects the client
with a 403.
For your desired process, do not do those gat or clientS ACL checks at
all. Remove them completely, along with the http_access lines they are
used in.
>
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl Safe_ports port 3130 # icp
> acl Safe_ports port 3128
> acl CONNECT method CONNECT
>
> http_access deny !Safe_ports
>
> http_access deny CONNECT !SSL_ports
>
> http_access allow localhost manager
> http_access deny manager
>
As the default config file says:
"
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
"
Thanks a lot, I'll delete the needless port info.
Port info? I did not mention removing any of that.
Whether you should edit Safe_ports or SSL_ports values is determined by
whether you want the proxy to be exclusively a reverse-proxy (with
caching), or to be both a reverse-proxy AND a forward/explicit proxy
(both with caching).
To be exclusively a reverse-proxy it is fine to remove all but the port
80 (and maybe 443) entries from those ACL definitions.
To retain forward-proxy behaviour you do need the default config lines,
and may even have to add extra ports depending on what the Browsers and
clients need access to.
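As a rough sketch, a reverse-proxy-only setup could trim those
definitions down to something like:

  acl SSL_ports port 443
  acl Safe_ports port 80    # http
  acl Safe_ports port 443   # https

but only do that if no forward-proxy clients will ever use this Squid.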
What I was trying to point out was that your custom http_access rules
should all be there. Maybe with the extra custom things like your
cache_peer_* rules.
For your desired process, the only thing you need to have is the
cache_peer* and http_access reverse-proxy lines right here at this
position in squid.conf. That is *all* of it.
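As a sketch, using the cache_peer and domain values from your own config
further down, that block could look like:

  cache_peer 10.112.4.54 parent 8090 0 no-query originserver name=youlun
  acl mysites dstdomain youlun.lvmama.com
  http_access allow mysites
  cache_peer_access youlun allow mysites
  cache_peer_access youlun deny all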
> http_access allow localnet
> http_access allow localhost
>
> http_port 80 accel defaultsite=youlun.lvmama.com no-vhost
>
> cache_dir aufs /var/spool/squid 8198 16 256
> cache_mem 5120 MB
> cache_swap_low 90
> cache_swap_high 95
> cache_mgr zhongzhe@xxxxxxxxxx
>
> visible_hostname cache190
>
So the domain name Squid announces to your clients is "cache190" as in
http://cache190/ship_front/youlun/1012487.
I think my domain name is youlun.lvmama.com. cache190 is just an
individual name to distinguish it from squid server 10.113.10.191.
Fine, but it should in any case be a FQDN which can be publicly resolved
by a lookup in DNS. Current Squid versions should all be able to
auto-detect what their machine's unique hostname is, so there is usually
no need to set these at all.
The visible_hostname (note the word "visible") is what Squid places in
any URLs it has to auto-generate and send to clients. Since this is a
reverse-proxy it is sort of best to set the *public* name to be the
served domain name (eg "visible_hostname youlun.lvmama.com") and use
unique_hostname to set the proxy's unique FQDN.
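A minimal sketch of that (the FQDN below is only a guess at what this
machine's real name might be):

  visible_hostname youlun.lvmama.com
  unique_hostname cache190.lvmama.com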
> coredump_dir /var/spool/squid
>
> via off
At least while debugging peering issues set "via on". Only turn it off
if you really have to and *after* you have a fully working proxy hierarchy.
I agree with you.
> maximum_object_size 500 KB
>
> icp_port 3130
> icp_access allow all
> icp_query_timeout 2000
>
> cache_peer 10.112.4.54 parent 8090 0 no-query originserver name=youlun
> acl mysites dstdomain youlun.lvmama.com
> http_access allow mysites
> cache_peer_access youlun allow all
> cache_peer_access youlun deny all
The default for cache_peer_access is to allow. No need to specify that
"allow all". What you need to do to allow everything to reach that peer
server is to *not* specify "deny all".
Though the normal thing is to use an ACL (eg your "mysites" one) to
allow the domains an origin server is known to supply and to deny other
things, since it is not even worth trying that peer for things it is not
known to be capable of serving.
So:
http_access allow mysites
cache_peer_access youlun allow mysites
cache_peer_access youlun deny all
Also be aware that all of this peer and http_access config needs to be
located up where it says " INSERT YOUR OWN RULE(S) HERE " etc.
Thanks, I have deleted it.
>
>
> refresh_pattern -i .*/youlun/([0-9]+) 1440 100% 10080 ignore-no-store ignore-must-revalidate store-stale ignore-reload
>
Why? If your server is not producing correct cacheability headers then
everyone trying to use your site will be having problems. "Fixing" it
for only your proxy by ignoring required things is the worst possible
action to take.
Your proxy is a reverse-proxy (aka a CDN); it advertises its Surrogate
capabilities to the origin server so your proxy cache can be given custom
values different from the general public. If you need
I want the squid server to respond to the request with a cached page if
it exists.
Squid caches by default; the refresh_pattern rules below indicate 4320
minutes (3 days) of storage time _unless_ the server tells Squid
something more specific (which can be longer or shorter).
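For reference, the catch-all rule in a default squid.conf looks like
this (0 minutes minimum, 20% of the object's age, 4320 minutes = 3 days
maximum):

  refresh_pattern . 0 20% 4320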
That said, current Squid caches in memory instead of on-disk unless you
configure a cache_dir line to say where that disk cache should be put.
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users