Hi folks,

I'm testing a CARP setup and trying to find out what the maximum throughput is. After a lot of testing I'm stuck at 5000 req/s and about 220 Mbit/s in and out. Whatever I do, I can't make it go faster. Does anyone have an idea what's going on, or whether I'm doing something wrong? How can I find out where the bottleneck is?

This is the test setup:

- 1 GBit NICs and even faster switches, all servers in the same subnet
- 1 CARP frontend and 5-9 proxies as parents
- all parent proxies use AUFS and cache_mem
- the parents are warmed up: I ran the test multiple times, so all content is already on the parents and they never have to fetch anything from the origin servers
- the CARP frontend does not cache anything (cache_mem 0 MB & cache_dir null no-store)
- Squid version: 2.7.STABLE7-2
- kernel is the standard Etch 2.6.18

The test itself: I use http_load [1] and start it on four servers at the same time with 50 parallel requests each. It uses a list of approx. 3 million URLs, with most of the files between 2 kB and 4 kB in size. The test runs for 30 minutes.

The CARP frontend uses approx. 25% CPU and has lots of free memory. No problems are reported in cache.log or syslog. And the throughput of 5k req/s and 220 Mbit/s doesn't change with five, seven or nine parent proxies.

[1] http://www.acme.com/software/http_load/http_load-12mar2006.tar.gz

Any ideas and hints are welcome.

Cheers,
Markus
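For context, the frontend's non-caching setup would look roughly like this sketch (Squid 2.7 directive syntax; the parent hostnames and ports here are placeholders, not my actual ones):

```
# squid.conf sketch of a non-caching CARP frontend (Squid 2.7)
# hostnames/ports below are placeholders
cache_mem 0 MB
cache_dir null /tmp
cache_peer parent1.example.com parent 3128 0 carp no-query no-digest
cache_peer parent2.example.com parent 3128 0 carp no-query no-digest
# ... one cache_peer line per parent
never_direct allow all     # always go through the CARP parents
```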
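As a back-of-envelope sanity check on the numbers above (assuming an average object size of about 3 kB, since the files are mostly 2-4 kB):

```python
# Is the 1 Gbit NIC the bottleneck at 5000 req/s?
req_per_s = 5000
avg_object_bytes = 3 * 1024          # assumption: files mostly 2-4 kB, so ~3 kB avg
payload_mbit = req_per_s * avg_object_bytes * 8 / 1e6
print(round(payload_mbit))           # ~123 Mbit/s of payload
# The observed ~220 Mbit/s additionally includes HTTP headers and TCP/IP
# overhead, and is still far below the 1 Gbit line rate. That suggests
# the ceiling is per-request cost (CPU, syscalls, connection handling)
# rather than raw bandwidth.
```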