
Re: about the cache and CARP


 



On 24/08/11 00:47, Carlos Manuel Trepeu Pupo wrote:
2011/8/23 Amos Jeffries <squid3@xxxxxxxxxxxxx>:
On 23/08/11 21:37, Matus UHLAR - fantomas wrote:

On 16.08.11 16:54, Carlos Manuel Trepeu Pupo wrote:

I want to set up Common Address Redundancy Protocol, or CARP, with the two
squid 3.0 STABLE10 servers that I have, but here I found this question:

the CARP that squid supports is the "Cache Array Routing Protocol"
http://en.wikipedia.org/wiki/Cache_Array_Routing_Protocol

- this is something different than "Common Address Redundancy Protocol"
http://en.wikipedia.org/wiki/Common_Address_Redundancy_Protocol

Well, technically Squid supports both. Though we generally don't use the
term CARP to talk about the OS addressing algorithms. HA, LVS or NonStop are
usually mentioned directly.

Thanks for the tips, from now on I will be careful with the term.



If the main Squid, with 40 GB of cache, shuts down for any reason, then
the 2nd squid will start up but without any cache.

Is there any way to synchronize both caches, so that when this happens
the 2nd one starts with all of the cached content?

You would need something that would synchronize squid's caches;
otherwise it would eat twice the bandwidth.

Seconded.

If the second Squid is not running until the failover event, the cache can
be safely mirrored. Though that method will cause a slow DIRTY startup
(Squid rescanning the cache directories to rebuild its index) rather than
a fast clean startup. On 40GB it could be very slow, and maybe worse than
starting with an empty cache.

NP: the traffic spike from an empty cache decreases in exponential
proportion to the hit ratio of the traffic, starting from a spike peak
equal to the internal bandwidth rate.

PS.  I have a feeling you might have some graphs to demonstrate that spike
effect Carlos. Would you be able to share the images and numeric details?
I'm looking for details to update the 2002 documentation.

Thanks to everyone, you guys are always helping me !! Now I have a few
problems with Debian and LVM; until I solve them I can't do anything.
But here is another idea:

I put the two Squids in cascade and the master (HA) makes the requests
first to the second Squid, and if it is down goes directly to the Internet.
Both Squids will cache all the contents, so the contents will be
duplicated, but if one goes down, the other will respond with all the
content it has cached.

It looks like this:

client --->  Server1 --->  Server2 --->  Internet (server1 and server2
will cache all)
Server1 down
client --->  Server2 --->  Internet (server2 will cache all)
Server2 down
client --->  Server1 --->  Internet (server1 will cache all)

What do you think?

Looks good.

Check the connect-fail-limit=N value on your cache_peer directives. It affects whether and how much breakage a client sees when Server2 goes down. If that option is available on your Server1 Squid, you want it set relatively low, but not so low that a few random failures mark the peer as dead.

The background-ping option is also useful for spotting recovery once Server2 comes back up.
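
For illustration only, a minimal sketch of what the Server1 side might look
like (the peer hostname, ports and limit value here are placeholders, and
the connect-fail-limit option only exists in newer Squid releases):

  # Server1: send requests through Server2 first, fall back to going direct
  cache_peer server2.example.local parent 3128 3130 default connect-fail-limit=5 background-ping
  prefer_direct off
  # No never_direct rule here, so Server1 is allowed to go straight to the
  # Internet whenever Server2 is marked as dead.

Server2 itself would presumably be a plain caching proxy with no peers,
going direct to the Internet.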

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10

