I have the same problem, though I don't remove my squid servers very often.

I've partially resolved it through the algorithm my load balancer implements.
What it does is calculate the hash taking into account all the squids in the
pool, whether they're up or down. If the algorithm chooses a server that is
down, the calculation happens again. So if one of my squids restarts, as soon
as it comes back up it receives the same URLs as before and serves them from
its disk cache instead of memory. (A rough sketch of this selection logic is
included after the quoted thread below.)

What you can also try is to link all the squids together with ICP to create a
sibling relationship between them, though I guess the hierarchical cache
scenario would help you best to reduce the load on your web servers.

Hope this helps,

Regards, Pablo

2008/3/7 Siu Kin LAM <sklam2005@xxxxxxxxxxxx>:
> Hi Pablo
>
> Actually, that is my case.
> The URL-hash is helpful for reducing duplicated
> objects. However, once a squid server is added or removed,
> the load balancer needs to re-calculate the hash of each URL,
> which causes a lot of TCP_MISS in the squid servers at the
> initial stage.
>
> Do you have the same experience?
>
> Thanks
>
> --- Pablo García <malevo@xxxxxxxxx> wrote:
>
> > I dealt with the same problem using a load balancer
> > in front of the cache farm, using a URL-HASH algorithm to send the
> > same URL to the same cache every time. It works great, and also
> > increases the hit ratio a lot.
> >
> > Regards, Pablo
> >
> > 2008/3/6 Siu Kin LAM <sklam2005@xxxxxxxxxxxx>:
> > > Dear all
> > >
> > > At this moment, I have several squid servers for http
> > > caching. Many duplicated objects have been found in
> > > different servers. I would like to minimize data storage
> > > by installing a large centralized storage system and having the
> > > squid servers mount it as their data disk.
> > >
> > > Has anyone tried this before?
> > >
> > > Thanks a lot
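
[Editor's sketch] To make the hash-with-fallback idea above concrete, here is a
minimal Python sketch of the kind of selection Pablo describes. It is not his
balancer's actual code; the pool list and the is_up() health check are assumed,
hypothetical names.

    import hashlib

    def pick_squid(url, pool, is_up):
        """Choose a cache peer for `url` from the full, fixed pool.

        The pool always contains every squid, up or down, so the URL-to-server
        mapping does not shift when a box goes away and later comes back: a
        restarted squid receives the same URLs as before and can answer from
        its disk cache.  If the hashed choice is currently down, the hash is
        recomputed with a retry counter until a live server turns up.
        """
        for attempt in range(8 * len(pool)):            # plenty of retries
            digest = hashlib.md5(f"{attempt}:{url}".encode()).hexdigest()
            candidate = pool[int(digest, 16) % len(pool)]
            if is_up(candidate):
                return candidate
        live = [s for s in pool if is_up(s)]            # last resort: any live server
        return live[0] if live else None

Because the hash is always taken over the full pool rather than only the live
members, a server that drops out and rejoins resumes receiving exactly the URLs
it held before, which is what avoids the wave of TCP_MISS that a re-hash over a
changed pool would cause.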