Re: refresh_pattern to (nearly) never delete cached files in a http accelerator scenario?

Jamie Plenderleith wrote:
Hi All,

I am using Squid as an HTTP accelerator/reverse proxy. It is being used to
cache the contents of a site that is served up from a 1Mbps internet
connection at our offices, while the proxy itself is hosted at Rackspace in
the US. Users visit the squid server, and if the item isn't there then it's
retrieved from our offices over the 1Mbps upstream.
I started running wget on another machine on the web to warm the cache with
the contents of the site. The cache on the proxy kept growing, but only to
a certain point; it seemed to stop at about 170,000 files.


The earlier reply has the info you are seeking. I'm commenting here because I notice your accelerator config appears to be broken.

The config you show below assumes that all your public pages are accessed by raw IP address, somehow through a machine that's not actually at that IP. This is not a good setup.

Below is the configuration that we've been using:

http_port 80 accel defaultsite=[our office's static IP]

Use: ... defaultsite=[PUBLIC DOMAIN NAME]

cache_peer [our office's static IP] parent 80 0 no-query originserver name=myAccel
cache_dir ufs c:/squid/var/cache 20000 16 256
acl our_sites dstdomain [our office's static IP]

Use: acl our_sites dstdomain [PUBLIC DOMAIN NAME(s)]

acl all src 0.0.0.0/0.0.0.0
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
visible_hostname [hostname of proxy server]
cache_mem 1 GB
maximum_object_size 20000 KB
maximum_object_size_in_memory 1000 KB
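
Putting the two corrections together, the accelerator section would look something like the sketch below. Here example.com stands in for your real public domain name (substitute your own), and the explicit "http_access deny all" line is an addition for safety, not part of your original config:

http_port 80 accel defaultsite=example.com
cache_peer [our office's static IP] parent 80 0 no-query originserver name=myAccel
acl our_sites dstdomain example.com
acl all src 0.0.0.0/0.0.0.0
http_access allow our_sites
http_access deny all
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

This way clients request the public domain name, Squid matches it against our_sites, and the request is relayed to the origin server at your office IP; the raw IP never needs to appear in any URL.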


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE12
  Current Beta Squid 3.1.0.4
