
Re: Re: Maximum disk cache size per worker


 



On 3/22/13 2:43 PM, babajaga wrote:
Your OS assigns workers to incoming connections. Squid does not control
that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached.

But I think a config like this SHOULD avoid duplication:

if ${process_number} = 1
# worker 1: one AUFS dir per object-size band
cache_dir aufs /cache4/squid/${process_number} 170000 32 256 min-size=31001 max-size=200000
cache_dir aufs /cache5/squid/${process_number} 170000 32 256 min-size=200001 max-size=400000
cache_dir aufs /cache6/squid/${process_number} 170000 32 256 min-size=400001 max-size=800000
cache_dir aufs /cache7/squid/${process_number} 170000 32 256 min-size=800001
endif

Am I wrong?
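Just to make the intent clearer, here is a sketch of the matching block I would write for worker 2 (the /cache8-/cache11 paths are made-up placeholders, not my real disks; same size bands as above):

if ${process_number} = 2
# worker 2 gets its own set of disks, same size bands as worker 1
cache_dir aufs /cache8/squid/${process_number} 170000 32 256 min-size=31001 max-size=200000
cache_dir aufs /cache9/squid/${process_number} 170000 32 256 min-size=200001 max-size=400000
cache_dir aufs /cache10/squid/${process_number} 170000 32 256 min-size=400001 max-size=800000
cache_dir aufs /cache11/squid/${process_number} 170000 32 256 min-size=800001
endif

Since every worker writes only to its own directories, no two workers should share an AUFS dir; whether that also prevents the same object from being cached once per worker is exactly what I am unsure about.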

I don't know how to find out whether duplicate content is being cached. Where can I check for duplication?
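One way I can think of to check (only a sketch, I have not verified it): give every worker its own store log and compare which URLs each worker swaps out to disk. cache_store_log is a standard directive; using ${process_number} in its path is my assumption about how the macro can be used:

# hypothetical per-worker store log; SWAPOUT lines from different workers
# can then be compared to see whether the same URL is stored more than once
cache_store_log /var/log/squid/store-${process_number}.log

If the same URL shows up as a SWAPOUT entry in two workers' logs, that object is being cached twice.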

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Maximum-disk-cache-size-per-worker-tp4659105p4659144.html
Sent from the Squid - Users mailing list archive at Nabble.com.

