At 01:17 AM 3/11/2006, Kevin wrote:
On 3/10/06, Mike Leong <leongmzlist@xxxxxxxxx> wrote:
> Hi,
>
> I'm using squid as an accelerator/reverse proxy serving lots and lots
> of small files (~20K each).
>
> I used siege ( http://www.joedog.org/siege/ ) to benchmark squid and
> got some pretty disappointing results: only 2.5 MB/sec throughput.
Could you explain the design decisions behind the unusual RAID0 configuration?
Well, if I do one giant RAID0 config and one disk goes bad, the entire
array (1 terabyte) is lost. With several smaller arrays, a single failed
disk only takes out part of the cache. Spreading the risk here.
My first thought is that the siege application is doing something odd.
It appears the "-c" option doesn't actually set the number of concurrent
connections, judging by the low concurrency value in the results.
Also, the 3-second "longest transaction" time may indicate a problem
at the network or TCP/IP layer.
One good test would be to configure an Apache or Athttpd server on the
Squid machine to serve one or more 20KB static files, and run the
same benchmark against that server directly. If Athttpd performs no
better than Squid, the bottleneck isn't Squid.
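Something along these lines would do it (a rough sketch; the hostname,
paths, and siege options are placeholders to adapt to your setup):

  # drop a 20KB test object into the web server's document root
  dd if=/dev/urandom of=/var/www/html/test20k.bin bs=1024 count=20

  # then aim the same siege run straight at the web server
  siege -c 100 -t 60S http://webserver/test20k.bin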
Another possibility would be to try the same test using a different
tool, for example:

  http_load -parallel 100 -seconds 300 urls.txt
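http_load expects urls.txt to be a plain list, one URL per line, and
fetches them in random order. Something like this, with the hostname
and object names as placeholders:

  http://squidserver/obj0001.bin
  http://squidserver/obj0002.bin
  http://squidserver/obj0003.bin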
I'll give that a try.
> The squid server and test machine are connected via a 100mbps link.
> During the test, the server CPU load is near zero and IO wait is less
> than 10%. All the requests were TCP_OFFLINE_HITs, and according to
> top, swap is not used. From the benchmarks, it seems max throughput
> is 2.5 MB/sec, which is kinda low for such a powerful server.
>
> Any comments/ideas?
When I see TCP throughput on a FastEthernet link top out at 25
megabits, the first thing I suspect is a duplex mismatch :)
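It's worth checking speed/duplex on both ends (assuming a Linux box
with ethtool installed and the NIC named eth0; mii-tool is the older
equivalent):

  ethtool eth0 | grep -i -e speed -e duplex

The NIC and the switch port have to agree on 100/full; one side
hard-coded and the other autonegotiating is the classic way to end up
mismatched.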
I get full 100mbps speed if I scp a large file from the server to the
client, so it's definitely not a cabling/NIC issue.
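For what it's worth, scp adds encryption overhead, so for a raw TCP
number I can also run iperf (assuming it's installed on both boxes):

  # on the squid server
  iperf -s

  # on the test client
  iperf -c <squid-server-ip>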
Kevin