
Re: Serving from the cache when the origin server crashes


 



Take a careful look at the stale-if-error Cache-control header, as described below:

http://tools.ietf.org/html/draft-nottingham-http-stale-if-error-01

In a nutshell, this allows you to force squid to serve up objects if the origin is down, even if those objects are stale, for a configurable number of seconds after the object's original stale timestamp.
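As a sketch, the origin would send a response header along these lines (the values here are illustrative, not from the draft): the object is fresh for 10 minutes, but may still be served for up to a day past staleness if revalidation fails because the origin is down.

```
Cache-Control: max-age=600, stale-if-error=86400
```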

However, on each request squid will still try to reach the origin, fail, and only then serve up the stale object. I'd therefore highly recommend that if you use this, you shut down the server in such a way that it generates an ICMP Destination Unreachable reply when squid attempts to connect. If you take the server off the air completely, squid will have to wait for network timeouts before returning the stale content, and your users will notice.
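One way to get that fast failure, assuming the back-end host runs Linux and iptables (adapt to your environment), is to actively reject connections to the origin port while the server is down rather than silently dropping them:

```
# Reject connections to the (assumed) origin port 81 immediately, so
# squid gets an instant ICMP port-unreachable instead of waiting out
# TCP connect timeouts before falling back to the stale object.
iptables -A INPUT -p tcp --dport 81 -j REJECT --reject-with icmp-port-unreachable
```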

Of course, you'll need to make sure that squid has your site cached in its entirety - it can't retrieve not-cached content from a dead server :)

Amos, can you confirm that 3.x supports this? I'm using it in 2.7.

-C

On Jun 22, 2009, at 9:50 PM, Amos Jeffries wrote:

On Mon, 22 Jun 2009 16:44:34 -0400, Myles Merrell <mmerrell@xxxxxxxxxxxx >
wrote:
Is it possible to configure squid to continue serving from the cache,
even if the origin server has crashed?

We have squid set up as an accelerator through a virtual host.  Squid
listens on port 80, and our web server runs on another server on port 81.
Squid serves the majority of pages from the cache, fetching them from the server when it has to. We'd like to be able to take the server down periodically and have the squid cache continue to serve the pages in
the cache.

Is this reasonable? If so, is it possible?


Sort of. Squid does this routinely for all objects which it can cache. The
state of the backend server is irrelevant for HIT traffic.

I'm sure some of those who deal with high-uptime requirements have more to
add on this. These are just the bits I can think of immediately.

For regular usage, make sure that sufficiently long expiry and max-age values are set so objects stay cached as long as possible. Also check that the cache_peer monitor* settings are in use. These will greatly reduce the impact of minor outages or
load hiccups.
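For content whose freshness the origin doesn't control explicitly, a squid.conf refresh_pattern can extend how long objects are considered fresh. A minimal sketch (the numbers are illustrative, not recommendations):

```
# refresh_pattern <regex> <min minutes> <percent of object age> <max minutes>
# Treat objects with no explicit expiry as fresh for at least a week,
# up to a month, when estimating freshness from their age.
refresh_pattern . 10080 90% 43200
```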

For best effect, combine the monitor settings with several duplicate parent peers, so that when one peer is detected as down Squid simply sends requests to the next one. Only the requests already in flight to the failed
peer will experience any error.
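A squid.conf sketch of that setup, assuming two identical back-ends (the hostnames, port, and probe URL are placeholders for your own): each peer is probed periodically via monitorurl, and a peer whose probe fails is skipped until it recovers.

```
cache_peer backend1.example.com parent 81 0 no-query originserver round-robin monitorurl=http://backend1.example.com:81/alive monitorinterval=30
cache_peer backend2.example.com parent 81 0 no-query originserver round-robin monitorurl=http://backend2.example.com:81/alive monitorinterval=30
```

Note the monitor* options are a Squid 2.x feature, which fits the 2.7 setups discussed in this thread.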

The newer the Squid (up to the 2.HEAD snapshots), the better the tuning and the more options available for this type of usage. Several sponsors have invested a lot
in getting the 2.7 and 2.HEAD acceleration features added.



For longer scheduled outages there are some other settings which can
further reduce the impact, but they take planning to use properly. When an outage is being scheduled, ensure the max_stale config option is set to a reasonable period that is
longer than the expected downtime.
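For instance, for a planned outage of a day or two, a squid.conf line along these lines would do (the value is an assumption; size it to your own maintenance window):

```
# Allow serving objects up to 3 days past their expiry when
# revalidation against the origin fails.
max_stale 3 days
```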

Give it some time to grab as much content as possible. You may want to run a sequence of requests for not-so-popular pages that MUST remain cached for the duration. Then set the inappropriately named offline_mode option in Squid just before dropping the back-end. These combine to make squid cache as aggressively as possible and avoid seeking external sources unless absolutely
required.
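A rough sequence for the pre-warming step described above (the URL list file, proxy address, and port are assumptions for your own setup):

```
# 1. Pre-warm: fetch each must-have page through squid so it is cached
#    before the back-end goes down.
while read -r url; do
    squidclient -h 127.0.0.1 -p 80 "$url" > /dev/null
done < must-cache-urls.txt

# 2. Enable aggressive cache-only operation by adding to squid.conf:
#      offline_mode on
#    then apply it without a restart:
squid -k reconfigure
```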


Amos



