Hi,

I wanted more throughput for my application than I could get from a single gigabit connection, so we set up a bonded interface with two one-gigabit connections aggregated into one two-gigabit connection. Unfortunately, a single squid maxes out a CPU core at around 140MB/s when re-using objects that are small enough to fit in the Linux filesystem cache but large enough to be efficient (a few megabytes each). This is a dual dual-core AMD Opteron 270 (2GHz) machine, so it is natural to want to take advantage of another CPU. (It runs a 64-bit 2.6.9 Linux kernel, and I think I have squeezed about all I am going to out of the software.)

At first I tried running two squids separately on the two interfaces (without bonding, with two separate IP addresses), but that confused the Cisco Service Load Balancer (SLB) we're using to share the load with another machine and provide availability, so I had to drop that idea. For much the same reason, I don't want to use two different ports. The problem, then, is how to distribute the load coming in on one IP address & port to two different squids.

Two different processes can't open the same address & port on Linux, but one process can open a socket and pass it to two forked children. So I modified squid 2.6.STABLE13 to accept a command-line option giving the file descriptor of an already-open socket to use instead of opening its own, and I wrote a small perl script that opens the socket and fork/execs the two squids. This is working, and I am now getting around 230MB/s throughput according to the squid SNMP statistics.

Does this sound like a reasonable way to do it? Do you have a preferred way to do it? If I cleaned up my patch and submitted it, is there a chance it could get picked up for inclusion in the standard squid distribution so I don't need to maintain it myself?

- Dave Dykstra
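
P.S. In case it helps the discussion, the wrapper is roughly along the lines of the sketch below. The port, binary and config paths, and the "--listen-fd" option name are only placeholders, not the exact patch; the essential steps are opening the listening socket once, clearing close-on-exec so it survives exec(), and handing the descriptor number to each child squid.

  #!/usr/bin/perl
  # Sketch of the wrapper script: open the listening socket once,
  # then fork/exec two squids that inherit it.
  use strict;
  use warnings;
  use Fcntl;
  use IO::Socket::INET;

  my $listen = IO::Socket::INET->new(
      LocalAddr => '0.0.0.0',
      LocalPort => 3128,          # placeholder port
      Proto     => 'tcp',
      Listen    => 1024,
      ReuseAddr => 1,
  ) or die "listen: $!";

  # Perl sets close-on-exec on new descriptors above STDERR;
  # clear it so the socket survives the exec().
  my $flags = fcntl($listen, F_GETFD, 0) or die "fcntl F_GETFD: $!";
  fcntl($listen, F_SETFD, $flags & ~FD_CLOEXEC) or die "fcntl F_SETFD: $!";

  my $fd = fileno($listen);
  for my $n (1, 2) {
      defined(my $pid = fork()) or die "fork: $!";
      next if $pid;               # parent keeps looping
      # Child: run squid in no-daemon mode with its own config
      # (separate cache_dir, pid_filename, logs), telling it which
      # inherited descriptor to accept() on.  "--listen-fd" stands in
      # for whatever option name the patch actually adds.
      exec('/usr/local/squid/sbin/squid', '-N',
           '-f', "/usr/local/squid/etc/squid$n.conf",
           '--listen-fd', $fd)
          or die "exec: $!";
  }

  # Stay around and reap the two children.
  waitpid(-1, 0) for 1 .. 2;

Since both children inherit the same listening descriptor, the kernel distributes incoming connections between whichever squid calls accept() first, which is what spreads the load across the two CPUs.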