
Re: Re: Maximum disk cache size per worker

On 23/03/2013 9:12 p.m., babajaga wrote:
> Since all workers get requests for large files, all workers
> should cache them or none should.

Not necessarily.
Defining several aufs cache_dirs, each storing a different range of object
sizes (size classes), will keep all workers busy in a high-traffic system
like the one the original poster is describing, because there is a good
probability that at any instant requests for all size classes of objects
are active: all workers busy.
During low traffic the chance of at least two simultaneously active
requests for the same size class is low, so there is no disadvantage.
Of course, this depends on a "good" choice of size classes; gathering some
statistics on the sizes of stored objects beforehand will be necessary.
The original poster proposed 4 aufs dirs, accepting the chance that
"hot" objects get duplicated into all 4 dirs. This hurts the total hit
ratio, but it improves response times when those duplicated objects are
"hot" enough to be served multiple times in parallel.
Using size classes instead should increase the hit ratio, because many
more objects become cacheable.
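For illustration, a minimal squid.conf sketch of wiring one size class to
each SMP worker; the class boundaries, paths, and cache sizes below are
invented for the example, not taken from the thread:

  workers 4
  # One aufs cache_dir per worker (aufs dirs are not SMP-shared, so
  # each must belong to exactly one worker). Each dir is restricted
  # to a size class via the min-size=/max-size= options (in bytes).
  if ${process_number} = 1
  cache_dir aufs /cache/w1 20000 16 256 max-size=32768
  endif
  if ${process_number} = 2
  cache_dir aufs /cache/w2 40000 16 256 min-size=32769 max-size=1048576
  endif
  if ${process_number} = 3
  cache_dir aufs /cache/w3 60000 16 256 min-size=1048577 max-size=33554432
  endif
  if ${process_number} = 4
  cache_dir aufs /cache/w4 80000 16 256 min-size=33554433
  endif

Because the ranges do not overlap, no object is stored twice, which is
where the hit-ratio gain over the duplicated 4-dir layout would come from.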


You misunderstand the aspect Alex was talking about.

The config posted splits the classes such that worker #1 caches *only*
0-31KB objects, and worker #2 caches *only* 32+ KB objects.

With traffic of all sizes spread randomly over the two workers, any object
going through Squid has a 50% chance of being a guaranteed MISS, simply
because it falls outside the size class of the worker handling it. If the
latency added by cache lookups is to be worth it at all, you want the HIT
ratio to be as large as possible - so forcing a guaranteed MISS on half
the traffic is a bad idea.
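To put numbers on that: suppose each worker could achieve, say, a 60% hit
ratio on the traffic that falls inside its own size class. With the 50/50
split above, half of all requests land on a worker whose cache_dir excludes
their size, so the overall hit ratio tops out at 0.5 × 60% = 30%.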

Amos




