Hi Liley,

The only tool I used was ab (Apache Bench), which you can use to generate a lot of traffic, albeit from a single source IP. Basically, you run ab with the desired number of concurrent connections and the maximum number of requests to send. Take note of the ab output while also monitoring your Linux system (SAR stats are useful, and possibly ntop), then tweak your Linux kernel settings and run the test again. Compare the output, and if the tweaking leaves room for further improvement (i.e. memory and CPU utilisation as well as response time remain stable), tweak and test again.

The final tests run on my box (in which it achieved ~2,000 connections per second) were done using an enterprise-grade test solution. The tests were left to run, continually ramping up traffic (i.e. simulating more user connections), with the aim of totalling millions of page views over the course of 3 hours. I think the server actually out-performed these test results when it was put into production; I recollect it handling upwards of 60,000 established connections (and remaining stable).

There is no best way to determine what settings to use other than trial and error: do the testing, tweak, test again and see if the results are moving forward.

Also, just as a side note: when handling a large amount of traffic like this you need to ensure collapsed_forwarding is enabled. When your content goes stale and you have thousands of users all trying to fetch it at once, you can flood your back end; collapsed_forwarding ensures only a single request is sent to the back-end server.

Another side note: you should consider using 'stale-while-revalidate=120' on your refresh_patterns. This means that Squid will serve stale content for up to 2 minutes while it revalidates, which is very useful if your back end has come under heavy load.

I've put some example commands and config snippets below my signature to make the above concrete.

Hope that helps,

Gareth
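P.S. A few illustrative snippets. These are sketches with placeholder values, not the exact commands and settings from my tests.

A typical ab run, assuming Squid is sitting in front of your web server and listening on port 80 (the URL and numbers are examples only):

    # 100,000 requests total, 500 concurrent, with HTTP keep-alive
    ab -k -n 100000 -c 500 http://your-squid-host/index.html

Watch the "Requests per second" and "Time per request" lines in the output; those are the figures to compare between tuning runs.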
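To watch the box while the test runs, something like the following (from the sysstat package) gives you CPU, memory and network stats side by side with the ab results:

    sar -u 1 60        # CPU utilisation, 1-second samples for 60 seconds
    sar -r 1 60        # memory usage
    sar -n DEV 1 60    # per-interface network throughput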
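Which kernel settings to tweak depends on your workload, but for high connection counts these are common starting points (again, example values, not the ones I used):

    # /etc/sysctl.conf -- illustrative values, apply with: sysctl -p
    net.core.somaxconn = 4096                   # listen queue depth
    net.ipv4.tcp_max_syn_backlog = 8192         # half-open connection backlog
    net.ipv4.ip_local_port_range = 1024 65535   # more ephemeral ports
    fs.file-max = 200000                        # system-wide file descriptors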
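Enabling collapsed forwarding in squid.conf is a one-liner; note the directive exists in the Squid 2.x line and was reintroduced later in 3.5, so check your version supports it:

    collapsed_forwarding on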
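And a refresh_pattern carrying the stale-while-revalidate option might look like this; the pattern and the min/percent/max timings are placeholders for your own rules, and if I recall correctly the option is a Squid 2.7 feature, so check your version's refresh_pattern docs:

    # min and max are in minutes; stale-while-revalidate is in seconds
    refresh_pattern -i \.(css|js|jpg|png|gif)$ 60 20% 1440 stale-while-revalidate=120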