
Re: Denying write access to the cache

On Mon 2007-03-26 at 11:40 +0200, Guillaume Smet wrote:
> On 3/26/07, Henrik Nordstrom <henrik@xxxxxxxxxxxxxxxxxxx> wrote:
> > One way is to set up a separate set of cache_peer for these robots,
> > using the no-cache cache_peer option to avoid having that traffic
> > cached. Then use cache_peer_access with suitable acls to route the robot
> > requests via these peers and deny them from the other normal set of
> > peers.
> 
> AFAICS, it won't solve the problem, as the robots won't be able to
> access the "global" cache read-only.

It does.

Squid's processing sequence is roughly:

1. accept the request
2. http_access and the other access checks
3. cache lookup; on a hit, send the cached response
4. on a miss, select a cache_peer (or go direct)
5. cache the response if allowed
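
So the robots still get read access: their requests hit the cache in
step 3 like everyone else's. What the no-cache peer changes is step 5:
responses fetched through it are not stored. A minimal squid.conf
sketch of the idea (the acl, addresses and peer names below are made
up for illustration, and depending on your Squid version the peer
option may be spelled proxy-only rather than no-cache):

  # Identify the robots, e.g. by source address (hypothetical range).
  acl robots src 192.0.2.0/24

  # Dedicated peer for robot traffic; no-cache keeps responses fetched
  # via this peer out of the cache. name= lets both entries share a host.
  cache_peer peer.example.com parent 3128 0 no-query no-cache name=robotpeer

  # Normal peer for everyone else.
  cache_peer peer.example.com parent 3128 0 no-query name=normalpeer

  # Route robot requests via the no-cache peer, all other requests
  # via the normal peer.
  cache_peer_access robotpeer allow robots
  cache_peer_access robotpeer deny all
  cache_peer_access normalpeer deny robots

  # Make sure robot misses go through a peer rather than direct.
  never_direct allow robots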

Regards
Henrik

Attachment: signature.asc
Description: This is a digitally signed message part

