Re: squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails

On 13/07/18 04:16, Alex Rousskov wrote:
> On 07/12/2018 05:53 AM, pete dawgg wrote:
> 
> 
>> When there is no traffic squid seems to clean up well enough: overnight
>> (no traffic) disk usage went down to 30GB (now it's at 50GB again)
> 
> This may be a sign that your Squid cannot keep up with the load. IIRC,
> AUFS uses lazy garbage collection so it is possible for the stream of
> new objects to outpace the stream of object deletion events, resulting
> in a gradually increasing cache size. Using even more aggressive
> cache_swap_high might help, but there is no good configuration solution
> to this UFS problem AFAIK.
> 

FYI, to be more aggressive, place the two watermarks (cache_swap_low and
cache_swap_high) closer together.

I made the removal rate grow in steps sized by the difference between the
two marks. A low of 60 and a high of 70 means there are 4 steps of 10
between a 60% and a 100% full cache, so Squid will be removing 4*200
objects/sec when the cache is 99.999% full. But a low of 90 and a high of
91 gives 10 steps of 1, so Squid will be removing 10*200 objects/sec at
the same fill level.

Low numbers like 60 or 70 are only needed now if you have to push the
removal rate past 2K objects/sec - e.g. a low of 60 and a high of 61 gives
40 steps of 1, so Squid will be removing 40*200 = 8K objects/sec.
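
For example, a narrow gap in squid.conf might look like this (the
cache_dir path and size below are made-up values, purely for
illustration; only the watermark lines matter here):

  # 100000 MB AUFS cache; purge in 10 steps of 1% between 90% and 100% full
  cache_dir aufs /var/spool/squid 100000 16 256
  cache_swap_low  90
  cache_swap_high 91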


If you know your peak traffic rate in req/sec you should be able to tune
the purge rate to match it. How quickly traffic ramps up to that peak
should inform how wide the gap between the watermarks needs to be.
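
As a rough worked example (the peak rate here is hypothetical, and it
assumes roughly one object needing removal per request plus the ~200
objects/sec per step mentioned above):

  # peak ~1600 req/sec  =>  need about 1600/200 = 8 steps
  # (100 - low) / (high - low) = (100 - 92) / (93 - 92) = 8
  cache_swap_low  92
  cache_swap_high 93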

Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



