
Re: [squid-users] centralized storage for squid


 



On Fri, Mar 07, 2008, Siu Kin LAM wrote:

> Actually, that is my case.
> URL-hashing helps reduce duplicated objects.
> However, once a squid server is added or removed,
> the load balancer has to recalculate the URL hash,
> which causes a lot of TCP_MISS on the squid
> servers at the initial stage.
> 
> Do you have the same experience?

This is the sort of stuff that the Cisco implementation of WCCPv2
got "right".

I.e., when a cache dropped out, it wouldn't recalculate the hash
immediately. It'd keep the existing allocations and slowly move
the failed cache's hash buckets over to the remaining caches.

When the proxy came back, the hash bucket allocation would slowly
revert to how it was.
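
To make the bucket-migration idea concrete, here's a minimal Python
sketch. The 256-entry table matches the size of WCCPv2's redirect
hash table, but the cache names, the MD5-of-URL mapping and the
per-step drain rate are just placeholders, not what Cisco actually
does:

    import hashlib

    BUCKETS = 256  # WCCPv2's redirect hash table has 256 entries

    def bucket_for(url: str) -> int:
        """Map a URL to one of the hash buckets."""
        return hashlib.md5(url.encode()).digest()[0]

    def build_table(caches):
        """Naive allocation: spread buckets round-robin across caches."""
        return [caches[i % len(caches)] for i in range(BUCKETS)]

    def drain_failed_cache(table, failed, survivors, per_step=8):
        """Move a few of the failed cache's buckets onto the survivors.

        Unlike rebuilding the whole table, this never touches buckets
        that already belong to healthy caches, so their hit rates are
        unaffected.  Returns the number of buckets moved (0 when done),
        so a caller can re-invoke it on a timer until the drain finishes.
        """
        moved = 0
        for i, owner in enumerate(table):
            if owner == failed:
                table[i] = survivors[moved % len(survivors)]
                moved += 1
                if moved >= per_step:
                    break
        return moved

    if __name__ == "__main__":
        table = build_table(["cache1", "cache2", "cache3"])
        before = list(table)

        # cache2 drops out; drain its buckets a few at a time.
        while drain_failed_cache(table, "cache2", ["cache1", "cache3"]):
            pass

        changed = sum(1 for a, b in zip(before, table) if a != b)
        print(f"{changed}/{BUCKETS} buckets reassigned")  # ~1/3, not all 256

        url = "http://example.com/some/object"
        print("bucket", bucket_for(url), "->", table[bucket_for(url)])

The point is that only the failed cache's share of the buckets
(roughly 1/N of them) ever gets remapped, so the surviving caches
keep their working sets, instead of the near-total remap you get
when the balancer rebuilds the whole hash from scratch.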

You should poke your L4 balancer vendor and explain the situation. ;)
I'm surprised none of them have done it better.

That said, have you investigated using ICP or cache digests between your
proxies?
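
If you go the ICP route, a minimal squid.conf sketch would look
something like this on each box (the sibling hostname is a
placeholder; 3128/3130 are just the conventional HTTP and ICP ports):

    # on proxy A; proxyb.example.com stands in for the sibling
    icp_port 3130
    cache_peer proxyb.example.com sibling 3128 3130 proxy-only

With proxy-only set, a box fetches hits from its sibling without
storing a second copy, which also helps with the duplicated objects
you mentioned.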




Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
