
Re: Re: centralized storage for squid

> F5 has some documents on how to implement consistent hashes in BIG-IP
> iRules (TCL), but I wound up writing a custom one for use in front of our
> squids that only does one checksum per request, as opposed to one per
> squid in the pool, to avoid wasting CPU cycles on the LB.
>
> It uses a precomputed table for the nodes, but the table doesn't need to
> be recomputed when you add or remove a few; they just fit in between the
> others. I'll try to finish the write-up and submit it to devcentral soon.

If it's good code, as useful as it seems, and under the GPLv2+, we may be
interested in bundling it with a future Squid release as a helper tool.

Amos

>
> -neil
>
> 2008/3/10 Mark Nottingham <mnot@xxxxxxxxxxxxx>:
>> This is the problem that CARP and other consistent hashing approaches
>>  are supposed to solve. Unfortunately, the Squid in the front will
>>  often be a bottleneck...
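
For context, CARP-style selection combines a hash of the URL with a hash of
each pool member and picks the highest score, which is why it costs one hash
computation per peer per request. A rough Python sketch of that shape (not
Squid's exact arithmetic; the peer names and weights are made up):

import hashlib


def _h(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest()[:8], 16)


def carp_pick(url: str, peers: dict) -> str:
    """peers maps peer name -> load factor (weight)."""
    best, best_score = None, -1.0
    for peer, weight in peers.items():
        # One hash per peer per request, unlike the single-hash ring above.
        score = (_h(url) ^ _h(peer)) * weight
        if score > best_score:
            best, best_score = peer, score
    return best


print(carp_pick("http://example.com/some/object",
                {"squid1": 1.0, "squid2": 1.0, "squid3": 2.0}))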
>>
>>  Cheers,
>>
>>
>>
>>
>>  On 07/03/2008, at 1:43 PM, Siu Kin LAM wrote:
>>
>>  > Hi Pablo
>>  >
>>  > Actually, that is exactly my case.
>>  > The URL hash is helpful for reducing duplicated objects.
>>  > However, once a squid server is added or removed, the load
>>  > balancer has to recalculate the URL hashes, which causes a
>>  > lot of TCP_MISSes on the squid servers in the initial stage.
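
To put a number on that miss burst, assuming the load balancer uses a plain
modulo URL hash (an assumption, not something stated above), a quick Python
check shows how many URLs change server when one squid is added:

import hashlib

urls = [f"http://example.com/obj/{i}" for i in range(10000)]


def url_hash(u: str) -> int:
    return int(hashlib.md5(u.encode()).hexdigest(), 16)


# Going from 4 to 5 servers remaps about 80% of URLs, so most requests
# land on a squid that has never cached that object.
moved = sum(1 for u in urls if url_hash(u) % 4 != url_hash(u) % 5)
print(f"{moved / len(urls):.0%} of URLs change server")  # prints ~80%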
>>  >
>>  > Do you have the same experience?
>>  >
>>  > Thanks
>>  >
>>  >
>>  > --- Pablo García <malevo@xxxxxxxxx> wrote:
>>  >
>>  >> I dealt with the same problem using a load balancer in
>>  >> front of the cache farm, using a URL-HASH algorithm to
>>  >> send the same URL to the same cache every time. It works
>>  >> great, and also increases the hit ratio a lot.
>>  >>
>>  >> Regards, Pablo
>>  >>
>>  >> 2008/3/6 Siu Kin LAM <sklam2005@xxxxxxxxxxxx>:
>>  >>> Dear all
>>  >>>
>>  >>> At this moment, I have several squid servers for HTTP
>>  >>> caching. Many duplicated objects have been found on
>>  >>> different servers. I would like to minimize data storage
>>  >>> by installing a large centralized storage system and
>>  >>> having the squid servers mount it as a data disk.
>>  >>>
>>  >>> Has anyone tried this before?
>>  >>>
>>  >>> thanks a lot
>>  >>>
>>  >>>
>>  >>>
>>  >>
>>  >
>>  >
>>  >
>>
>>  --
>>  Mark Nottingham       mnot@xxxxxxxxxxxxx
>>
>>
>>
>


