On Wed, 28 Dec 2011 22:17:43 -0800, someone wrote:
> I just had one of my webservers go down, unrelated to my squid server
> on my local LAN, but I noticed that after a certain amount of time,
> once squid "realizes that a host is down", it will then serve the
> most recent version of the site from cache, which I think is
> EXCELLENT. I never noticed this before; of course, I recently
> upgraded from squid3 on Debian lenny to squid 3.1 on Debian squeeze.
> One thing though: how do I adjust the timing on this? In other words,
> instead of squid taking 2 minutes or so to "realize" the actual host
> is down and serve from cache, how can I reduce this period to, say,
> 20-30 seconds? Clearly I wouldn't want squid to give up on a host
> with high latency, but I would like to know how to fine-tune this
> until I get satisfactory timing. Or if it's possible at all, for
> that matter.
Detection is a count of failed connection attempts. The connection
process has a timeout of sorts in connect_timeout, but be careful
altering this. In squid up to 3.2 it spans a "connect" process involving
all DNS lookups and TCP handshakes needed for attempting connect against
*all* IPs of a server; set it too low and it can miss out on testing
some working IPs.
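As an illustration only (the 30-second value is a placeholder, not a
recommendation), the tweak would look something like this in squid.conf:

    # Abort the whole "connect" process (DNS lookups plus the TCP
    # handshakes against *all* of the server's IPs, in squid up to
    # 3.2) after 30 seconds. Too low and some working IPs are never
    # tried.
    connect_timeout 30 seconds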
squid-3.1 offers connect-fail-limit=N option to the cache_peer
directive to alter the default 10 failures before DEAD is detected. Any
success on any traffic to the peer will reset the counter and update to
LIVE again.
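Something along these lines, where the peer name, ports and limit are
all placeholders for your own setup:

    # Mark the peer DEAD after 5 failed connections instead of the
    # default 10. Any successful traffic to the peer resets the
    # counter and marks it LIVE again.
    cache_peer webserver.example.com parent 80 0 originserver connect-fail-limit=5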
3.1 does not have limits on how stale things are. That is introduced in
3.2 with support for "Cache-Control: stale-if-error=".
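For reference, stale-if-error is something the origin server sets in
its response headers; an illustrative example allowing up to one day of
staleness when revalidation fails:

    Cache-Control: max-age=300, stale-if-error=86400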
> I just think this is a great feature, but in order to really make use
> of it when there are connectivity issues, I would like to reduce the
> TTL on the cached version served.
NOTE: Some people have encountered trouble getting Squid to detect
recovery when turning off all the ICP/HTCP/ICMP/netdb/digest operations
to a peer. These are the things Squid is watching to turn HTTP to the
peer back on again.
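As a sketch of the problem (peer address and options are illustrative):

    # With no-query, no-digest and no-netdb-exchange all set, the
    # usual liveness probes to this peer are disabled, so Squid may
    # never notice when the peer comes back:
    cache_peer 192.0.2.10 parent 80 0 no-query no-digest no-netdb-exchange

Leaving at least one of those mechanisms enabled (for example dropping
no-query and giving the peer a real ICP port) gives Squid something to
watch for recovery.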
Amos