On 08/04/11 14:32, david@xxxxxxx wrote:
Sorry for the delay. I got a chance to do some more testing (slightly
different environment on the Apache server, so these numbers are a
little lower for the same versions than the last ones I posted).

Results when requesting a short HTML page:
squid 3.0.STABLE12            4000 requests/sec
squid 3.1.11                  1500 requests/sec
squid 3.1.12                  1530 requests/sec
squid 3.2.0.5   1 worker      1300 requests/sec
squid 3.2.0.5   2 workers     2050 requests/sec
squid 3.2.0.5   3 workers     2700 requests/sec
squid 3.2.0.5   4 workers     2950 requests/sec
squid 3.2.0.5   5 workers     2900 requests/sec
squid 3.2.0.5   6 workers     2530 requests/sec
squid 3.2.0.6   1 worker      1400 requests/sec
squid 3.2.0.6   2 workers     2050 requests/sec
squid 3.2.0.6   3 workers     2730 requests/sec
squid 3.2.0.6   4 workers     2950 requests/sec
squid 3.2.0.6   5 workers     2830 requests/sec
squid 3.2.0.6   6 workers     2530 requests/sec
squid 3.2.0.6   7 workers     2160 requests/sec (instead of all
processes being at 100% CPU, several were at 99%)
squid 3.2.0.6   8 workers     1950 requests/sec (instead of all
processes being at 100% CPU, some were as low as 92%)
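For reference, the worker counts above use Squid 3.2's SMP "workers"
directive. A minimal squid.conf sketch (the port and worker count are
illustrative, not my exact test configuration):

  # squid.conf -- minimal SMP sketch; values illustrative
  http_port 3128
  # Squid 3.2 starts this many kid worker processes sharing the port
  workers 4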
So the new versions are really about the same.

Moving to large requests cut these numbers by about 1/3, but the
Squid processes were not maxing out the CPU.
One issue I saw: on 3.2 I had to reduce the number of concurrent
connections or requests would time out (3.2 vs earlier versions). On
3.2 I had to keep ab's -c at ~100-150, where I could go significantly
higher on 3.1 and 3.0; a sketch of the invocation follows.
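The ab runs were roughly of this form (a sketch only; the URL,
request count, and proxy address are placeholders rather than the
exact test values):

  # ApacheBench through the proxy: -n total requests, -c concurrency,
  # -X routes requests via the squid instance under test
  ab -n 100000 -c 150 -X proxyhost:3128 http://webserver/short.html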
David Lang
Thank you.
So with small files that is about a 2% gain on 3.1 (1500 -> 1530
requests/sec from 3.1.11 to 3.1.12) and ~7% on 3.2 with a single
worker (1300 -> 1400 requests/sec from 3.2.0.5 to 3.2.0.6), but under
1% across the multi-worker 3.2 runs.
And overloading/flooding the I/O bandwidth on large files.
NP: when overloading I/O one cannot compare runs with different
object sizes, only runs with the same traffic. Also, only the peak
CPU load is a reliable measure there, since requests/sec bottlenecks
behind the I/O.
So... your measurement that CPU usage dropped is a good sign for
large files.
Amos
--
Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.12
Beta testers wanted for 3.2.0.6