Gavin McCullagh wrote:
> Hi Amos,
>
> On Sat, 25 Apr 2009, Amos Jeffries wrote:
>>> ipcache_low 90
>>> # ipcache_high 95
>>> ipcache_high 95
>>> cache_mem 1024 MB
>>> # cache_swap_low 90
>>> cache_swap_low 90
>>> # cache_swap_high 95
>>> cache_swap_high 95
>>
>> For cache >1GB the difference of 5% between high/low can mean long
>> periods spent garbage-collecting the disk storage. This is a major drag.
>> You can shrink the gap if you like less disk delay there.
> Could you elaborate on this a little? If I understand correctly from the
> comments in the template squid.conf:
>
>   (swap_usage < cache_swap_low)
>      -> no cache removal
>   (cache_swap_low < swap_usage < cache_swap_high)
>      -> cache removal attempts to maintain (swap_usage == cache_swap_log)
>   (swap_usage ~> cache_swap_high)
>      -> cache removal becomes aggressive until (swap_usage == cache_swap_log)
Almost. The final one is:

   -> aggressive until (swap_usage < cache_swap_low)

which could be only what's currently indexed (cache_swap_log), or could
be less, since aggressive removal might re-test objects for staleness
and discard them to reach its goal.
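The three bands above, with the correction that aggressive removal keeps
going until usage falls back below the low mark, can be sketched as a toy
model (an illustration only, not Squid's actual replacement code):

```python
# Toy model of the cache_swap_low/high watermark bands discussed above.
# Illustration only -- not Squid's real replacement implementation.

def removal_mode(usage_pct, low=90, high=95, was_aggressive=False):
    """Classify replacement behaviour at a given disk usage (percent)."""
    if was_aggressive and usage_pct >= low:
        return "aggressive"      # keep purging until we drop below `low`
    if usage_pct < low:
        return "none"            # below the low-water mark: no removal
    if usage_pct < high:
        return "maintain"        # gentle removal, aiming back toward `low`
    return "aggressive"          # high-water mark crossed

print(removal_mode(85), removal_mode(92), removal_mode(96))
print(removal_mode(92, was_aggressive=True))  # still aggressive: 92 >= low
```

The `was_aggressive` flag captures the hysteresis: once the high mark is
crossed, removal stays aggressive through the whole low/high band.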
> It seems like you're saying that aggressive removal is a big drag on the
> disk, so you should hit it early rather than late so the drag does not
> last a long period.
Early or late does not seem to matter as much as the MB/GB width of the
low->high gap being removed.
> Would it be better to calculate an absolute figure (say 200MB) and work
> out what percentage of your cache that is? It seems like the 95% high
> watermark is probably quite low for large caches too?
I agree. Something like that. AFAICT the reason high is less than 100% is
to allow X amount of new data to arrive and be stored between collection
cycles. 6 GB might be reasonable on a fully saturated 100 MB pipe with
5-minute cycles. Or it might not.

The idea, if you recall the conditions above, is that aggressive removal
(case #3) should never occur, since it is guaranteed to throw away
potential HITs.
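As a rough check of that headroom figure (illustrative arithmetic only;
"100 MB pipe" in the text is ambiguous between Mbit/s and MByte/s, so a
100 Mbit/s reading is assumed here):

```python
# How much new data can arrive between collection cycles on a saturated
# link? This bounds the headroom needed above the high-water mark.

def headroom_gb(link_mbit_per_s, cycle_minutes):
    """Worst-case data stored in one cycle on a saturated link, in GB."""
    bytes_per_s = link_mbit_per_s * 1e6 / 8   # Mbit/s -> bytes/s
    return bytes_per_s * cycle_minutes * 60 / 1e9

# A saturated 100 Mbit/s link over a 5-minute cycle:
print(round(headroom_gb(100, 5), 2))  # ~3.75 GB of new data per cycle
```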
> I have 2x400GB caches. A 5% gap would leave 20GB to delete aggressively,
> which might take quite some time alright. A 500MB gap would be 0.125%:
>
>   cache_swap_low 97.875
>   cache_swap_high 98
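Mirroring the arithmetic above for a single 400 GB cache:

```python
# Gap arithmetic for a 400 GB cache, matching the figures above.

cache_gb = 400

# A 5% low->high gap on 400 GB:
gap_5pct_gb = cache_gb * 0.05
print(gap_5pct_gb)   # 20.0 GB to remove in one aggressive pass

# A 500 MB (0.5 GB) gap expressed as a percentage of 400 GB:
gap_pct = 0.5 / cache_gb * 100
print(gap_pct)       # 0.125 (%), i.e. low 97.875 / high 98
```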
Precisely. Though IMO you probably want a gap measured off your pipe
speed and assuming only 50% of the disk load can be spared for removals.
> Can we use floating point numbers here?
Unfortunately not. It's whole integer percentages only here. I'll look at
getting this fixed in 3.1, while the larger improvements have to wait.

On the bare theory of it I don't see why you can't use the same percent
in both settings. That will need some testing though, to make sure it
does not create a constant disk load in place of periodic slowness.
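The integer-only restriction matters more as the cache grows; a quick
look at the granularity it imposes (illustrative arithmetic):

```python
# With whole-integer percentages, the smallest nonzero low->high gap is
# 1%, whose absolute size scales with the cache size.

def min_gap_gb(cache_gb):
    """Smallest expressible nonzero gap (1%) in GB for a given cache."""
    return cache_gb * 0.01

print(min_gap_gb(1))    # 0.01 GB (10 MB) on a 1 GB cache
print(min_gap_gb(400))  # 4.0 GB on a 400 GB cache
```

So on Gavin's 400 GB caches the finest gap squid can express is 4 GB,
nowhere near the 500 MB he would like.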
> Would it make more sense for squid to offer absolute watermarks (in MB
> offset from the total size)?
Yes, this is one of the ancient aspects remaining in Squid, and different
measures may be much better. I'm having a meeting with Alex Rousskov in
approx 5 hours on IRC (#squiddev on irc.freenode.net) to discuss the
general store improvements for 3.2. This is very likely to be one of the
topics.
What I have in mind are:

 * The fixed-bytes gap (100% deterministic load).
 * Now that you mention it: floating-point percentages :)
 * Throughput-based thresholds, such that there is a buffer of time
between high being passed and the disk being full, during which
collection may be delayed from starting; then a smaller buffer down to
low, to allow reasonable periods before the next collection.
 * Load-based thresholds, such that large-scale collection only occurs
on idle cycles, or the amount collected gets broken into small chunks
and spread over the available time.
 * Recovery collections, where garbage collection bypasses the usual
pre-emptive mechanism and runs on an emergency basis for a fixed amount
of space (currently needed for an active transaction).
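The first idea in that list, the fixed-bytes gap, can be sketched as a
small helper that derives today's percentage watermarks from an absolute
gap. The `watermarks` function is hypothetical, not a Squid feature, and
the result still has to be rounded since Squid only accepts whole-integer
percentages at present:

```python
# Hypothetical "fixed-bytes gap" helper: derive low/high watermark
# percentages from an absolute gap size instead of hand-picking them.
# Not a Squid feature -- a sketch of the idea discussed above.

def watermarks(cache_gb, gap_gb, high_pct=98.0):
    """Return (low, high) percentages giving a fixed-size removal gap."""
    gap_pct = gap_gb / cache_gb * 100
    return (high_pct - gap_pct, high_pct)

low, high = watermarks(400, 0.5)       # 0.5 GB gap on a 400 GB cache
print(round(low, 3), high)             # matches the 97.875 / 98 example
```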
I think, from looking at the detailed cache.log, squid tries to do the
small-chunks method and spread the load, but does not go so far as to use
idle cycles, which pretty much negates the spreading.
Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE14
Current Beta Squid 3.1.0.7