
Re: Re: Re: [squid-users] centralized storage for squid

This is the problem that CARP and other consistent-hashing approaches are supposed to solve. Unfortunately, the Squid at the front often becomes a bottleneck...
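A CARP array is set up on the front-end Squid with the carp option on its cache_peer lines. A minimal sketch of such a frontend config, with hypothetical hostnames and default ports (not taken from this thread):

    # Frontend squid.conf sketch: hash each URL across a CARP array of
    # parent caches, so every URL has exactly one home cache.
    cache_peer cache1.example.com parent 3128 0 carp no-query
    cache_peer cache2.example.com parent 3128 0 carp no-query
    never_direct allow all    # always forward to a parent, never go direct

Unlike a plain mod-N hash, CARP's hash function only reassigns roughly 1/N of the URL space when a member is added or removed.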

Cheers,


On 07/03/2008, at 1:43 PM, Siu Kin LAM wrote:

Hi Pablo

Actually, that is exactly my case.
URL hashing does help reduce the number of duplicated objects. However, whenever a Squid server is added or removed, the load balancer has to recalculate the URL hashes, which causes a lot of TCP_MISS results on the Squid servers in the initial stage.
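That remapping cost is what consistent hashing bounds: a plain mod-N URL hash remaps almost every URL when the server count changes, while a hash ring moves only about 1/N of them to the new server. A minimal Python sketch of the idea, with made-up server names and URLs (not a description of any particular load balancer's implementation):

    import hashlib
    from bisect import bisect

    class HashRing:
        """Minimal consistent-hash ring: each server owns several points on
        the ring; a URL maps to the first server point at or after its own
        hash value, wrapping around the ring."""

        def __init__(self, servers, replicas=100):
            self.replicas = replicas
            self.ring = []                    # sorted (point, server) pairs
            for server in servers:
                self.add(server)

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def add(self, server):
            for i in range(self.replicas):
                self.ring.append((self._hash("%s#%d" % (server, i)), server))
            self.ring.sort()

        def lookup(self, url):
            i = bisect(self.ring, (self._hash(url),)) % len(self.ring)
            return self.ring[i][1]

    urls = ["http://example.com/obj/%d" % i for i in range(10000)]
    ring = HashRing(["cache1", "cache2", "cache3"])
    before = {u: ring.lookup(u) for u in urls}
    ring.add("cache4")                        # grow the farm by one server
    moved = sum(1 for u in urls if ring.lookup(u) != before[u])
    print("remapped: %.1f%%" % (100.0 * moved / len(urls)))
    # Roughly 25% of URLs move (all to the new server); a mod-N hash going
    # from 3 to 4 servers would remap about 75% and cold-start most of them.

If the load balancer supports a consistent-hash variant of URL hashing, the TCP_MISS storm after adding or removing a server should shrink accordingly.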

Have you had the same experience?

Thanks


--- Pablo García <malevo@xxxxxxxxx> wrote:

I dealt with the same problem by putting a load balancer in front of the cache farm, using a URL-hash algorithm to send the same URL to the same cache every time. It works great, and it also increases the hit ratio a lot.

Regards, Pablo

2008/3/6 Siu Kin LAM <sklam2005@xxxxxxxxxxxx>:
Dear all

At this moment I have several Squid servers doing HTTP caching, and many duplicated objects have been found across the different servers. I would like to minimize data storage by installing a large centralized storage system and having the Squid servers mount it as their data disk.

Has anyone tried this before?

thanks a lot



--
Mark Nottingham       mnot@xxxxxxxxxxxxx



