The old setting for cache_swap_high was 95.
A background process monitors the cache usage and
purges old objects. If you retrieve new large files
faster than the background process purges old ones,
you are in trouble.
Marcus
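
For reference, these watermarks live in squid.conf next to the cache_dir line; a minimal sketch follows, where the cache_dir path and size are illustrative rather than taken from this thread:

    # 100 GB cache directory (ufs is the baseline storage type)
    cache_dir ufs /var/spool/squid 102400 16 256

    # Start evicting old objects once disk usage passes 88% of that size...
    cache_swap_low 88
    # ...and evict as aggressively as possible as usage nears 89%.
    cache_swap_high 89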
Rich Rauenzahn wrote:
> [resending, I accidentally left off the list addr]
>
>> If you cache very large files, you may need to change
>>
>>    cache_swap_low 88
>>    cache_swap_high 89
>>
>> to force the cleanup process to be more aggressive about
>> removing the oldest cached files.
>>
>> Marcus
>
> I don't see how lowering those values (except possibly as a
> temporary band-aid) could fix the problem. To me it looks very clear
> that squid's internal accounting of how much space I'm using is
> incorrect. If the internal accounting never hits those thresholds,
> the files will never be deleted. Lowering them to 50% might fix it,
> but instead I've just lowered my max size, since the accounting
> seems to be off by a factor of 2X or so.
>
> btw, I also tried 3.1.8 --with-large-files (although I'm already
> using the 64-bit version). Same thing.
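
Rich's workaround amounts to declaring a cache_dir smaller than the disk can actually hold, so that a usage figure inflated by roughly 2X still stays under the real capacity. A sketch under that assumption, with an illustrative path and sizes:

    # The partition actually has ~100 GB free, but squid's accounting
    # appears to run ~2X high, so declare only half the real capacity.
    cache_dir ufs /var/spool/squid 51200 16 256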