On 4/4/07, Ben Spencer <ben.spencer@xxxxxxxxx> wrote:
I did some research for an answer to this question, but the threads I found always come back to CPU usage and tuning (though I did get some good information from those threads as well). We have a squid appliance which is very heavy on CPU (which is expected). My question isn't really how I can tune it or why it is using so much CPU, but rather: how well does squid perform on a busy (CPU-wise) box?
The exact behavior of a highly utilized cache is going to depend greatly on the scheduler and I/O subsystem of the host operating system. In my experience (on Solaris and on BSD), Squid performance tends to degrade quickly when CPU cycles are not available.
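As a rough illustration of why that degradation is abrupt rather than gradual, consider a simple single-server queueing model (M/M/1, which is only an approximation of Squid's event-driven request handling; the 10 ms service time below is a made-up figure). Mean response time is

    T(\rho) = \frac{S}{1 - \rho}

where S is the per-request service time and \rho is utilization. With S = 10 ms that works out to roughly 20 ms at 50% utilization, 100 ms at 90%, and a full second at 99%, so most of the pain arrives in the last few percent of CPU headroom.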
I guess another way to ask is: does squid's performance scale linearly as usage of the box (CPU specifically) increases, or does performance actually degrade/level off once the CPU approaches 100% utilization? Another question is: once the system is pushed to a maximum (or beyond), are things just slow or should abnormal behavior be expected?
Abnormal behavior should be expected. Specifically, the time to service each request does not scale linearly. Instead, as the CPU and I/O become saturated, "service time" climbs steeply (much worse than linearly) while the number of concurrent outstanding sessions skyrockets.

It's trivial to deploy multiple parallel servers, each running a single Squid instance, so if your "squid appliance" is running out of CPU, maybe it's time to add a couple of cache peers?
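For what it's worth, here is a minimal squid.conf sketch of one way to do that, assuming a second box is available; the hostname and the default ports (3128 for HTTP, 3130 for ICP) are placeholders, not taken from your setup:

    # On the existing appliance: treat the second Squid box as a sibling
    # and ask it (via ICP) for objects before going to the origin server.
    cache_peer squid2.example.com sibling 3128 3130 proxy-only

    # proxy-only stops this box from keeping a second copy of objects
    # fetched from the sibling; a matching cache_peer line on the second
    # box, pointing back here, completes the pair.

Splitting client traffic between the two boxes (by DNS or a simple load balancer in front) then spreads both the CPU and the disk I/O instead of piling it all onto one saturated machine.

Kevin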