Re: Denying write access to the cache

>>> On 26/03/2007 at 10:40, "Guillaume Smet" <guillaume.smet@xxxxxxxxx> wrote:
> On 3/26/07, Henrik Nordstrom <henrik@xxxxxxxxxxxxxxxxxxx> wrote:
>> One way is to set up a separate set of cache_peer for these robots,
>> using the no-cache cache_peer option to avoid having that traffic
>> cached. Then use cache_peer_access with suitable acls to route the
>> robot requests via these peers and deny them from the other normal
>> set of peers.
> 
> AFAICS, it won't solve the problem as the robots won't be able to
> access the "global" cache read-only.
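
For concreteness, Henrik's peer-routing idea comes out to something
roughly like the squid.conf sketch below. The peer host, the name=
labels (name= needs Squid 2.6 or later) and the robots ACL patterns
are all placeholders; and where Henrik says "no-cache", the per-peer
option I know of that stops replies fetched through a peer from being
stored locally is proxy-only, so that is what the sketch uses - check
the cache_peer documentation for your version:

  # placeholder ACL matching robot User-Agents (a browser acl is a
  # regex match on the User-Agent header); adjust patterns to taste
  acl robots browser -i googlebot msnbot slurp

  # one peer entry for normal traffic, and a second entry for the
  # same parent marked proxy-only so its replies are never stored
  # in the local cache (name= disambiguates the two entries)
  cache_peer parent.example.com parent 3128 3130 name=normal_peer
  cache_peer parent.example.com parent 3128 3130 proxy-only name=robot_peer

  # send robot requests via the non-caching peer only, and keep
  # them away from the normal peer
  cache_peer_access robot_peer allow robots
  cache_peer_access robot_peer deny all
  cache_peer_access normal_peer deny robots
  cache_peer_access normal_peer allow all

Depending on the setup you may also need never_direct/prefer_direct
rules so that the robot requests really go through the peer rather
than straight to the origin.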

As I understand it, the original aim here was to avoid cache pollution
(when a robot wanders in, it effectively resets the last-accessed time
on every object, rendering LRU useless and evicting popular objects
to make space for objects only the robot cares about) - in which case,
would changing the cache_replacement_policy setting not be a better
starting point?

LFUDA should be a close approximation to the result the original poster
wanted: anything getting hit only by a robot will still not be
'frequently' used, so although it will be cached initially, it will
soon be evicted again.
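
A minimal sketch of that change, assuming a Squid built with the heap
removal policies compiled in (--enable-removal-policies):

  # use the heap-based LFUDA policy for the on-disk cache instead
  # of the default LRU
  cache_replacement_policy heap LFUDA

  # the in-memory hot-object cache has its own knob and can be left
  # on LRU or switched separately, e.g. to GDSF
  #memory_replacement_policy heap GDSF

If I remember the stock squid.conf comments correctly, they also
suggest raising maximum_object_size when using LFUDA, since it
optimises for byte hit rate rather than object hit rate.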


James.
