On Wed, Nov 3, 2010 at 11:55 AM, Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
> If this is running on multi processor machine, you could use several NF
> queues (one per cpu).

I don't know if it would make much of a difference for my application. Either way, the kernel is going to be capable of receiving more traffic than the application can handle, but my application is not meant to process traffic from high-bandwidth connections. It is designed to increase the performance of low-speed WAN connections. If the virtual machines are able to handle ~20Mb/s, that is already multiple DS1s.

> Eventually also use RPS if your network card is not multiqueue, to
> spread tcp flows to different cpus and different queues.

I am doing this currently inside the application. I have read some papers on using multiple queues for sniffing high-speed circuits, but my application is doing compression, disk I/O and eventually application-layer-specific processing on the payload of every TCP segment for each session. I don't expect all of that to happen at 1Gbps. I would be quite happy if I can get a single system to handle a DS3 worth of optimized TCP traffic, and extremely happy if I can get an OC3 worth.

I know without a doubt that my current bottleneck is the host system. Running the iperf tests causes near 100% CPU usage on the host, and the VMs sit near ~60% on each of their virtual CPUs. Now that the issue with the queue is resolved, I will go back to testing on physical hardware to see what results I get.

The final goal is something similar to the commercially available WAN accelerators, and none of them offer single-system solutions for greater than DS3 connections that I am aware of. I need to determine the number of TCP sessions that a single system can support with a given amount of memory for the buffer. This very issue might be why the commercial product vendors ask questions about the number of users and active TCP sessions in their planning guides.
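
As a back-of-the-envelope for that last point, the session count is just the buffer budget divided by the per-session footprint. The numbers below are placeholders rather than measurements from my application (I still have to measure the real cost of the compression state, disk-I/O buffers and per-session metadata), but the arithmetic looks like this:

#include <stdio.h>

/* Rough capacity estimate: how many TCP sessions fit in a given
 * buffer budget?  Both figures below are assumed values, not
 * measured ones. */
int main(void)
{
    const double buffer_mem_bytes  = 2.0 * 1024 * 1024 * 1024; /* 2 GB set aside for buffering */
    const double per_session_bytes = 256.0 * 1024;             /* assume 256 KB per TCP session */

    printf("~%.0f concurrent sessions\n",
           buffer_mem_bytes / per_session_bytes);              /* ~8192 with these numbers */
    return 0;
}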
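
And for reference on the one-NF-queue-per-CPU suggestion at the top: my understanding is that it would look roughly like the sketch below with libnetfilter_queue, one handler thread bound to each queue number, paired with a rule such as "iptables -A FORWARD -j NFQUEUE --queue-balance 0:3" to spread flows across queues 0-3. This is only a sketch of the idea, not code from my application; the queue count, the iptables rule and the empty callback are assumptions. Build with -lnetfilter_queue -lpthread.

#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/netfilter.h>                      /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

#define NUM_QUEUES 4   /* assumed: one queue per CPU */

/* Callback: just accept every packet; the real per-session work
 * (compression, disk I/O, payload processing) would go here. */
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    uint32_t id = 0;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    if (ph)
        id = ntohl(ph->packet_id);
    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

/* One worker thread per queue number 0..NUM_QUEUES-1. */
static void *worker(void *arg)
{
    unsigned int qnum = (unsigned int)(uintptr_t)arg;

    struct nfq_handle *h = nfq_open();
    if (!h) { perror("nfq_open"); return NULL; }

    /* Needed on older kernels; a no-op on newer ones. */
    nfq_unbind_pf(h, AF_INET);
    nfq_bind_pf(h, AF_INET);

    struct nfq_q_handle *qh = nfq_create_queue(h, qnum, &cb, NULL);
    if (!qh) { perror("nfq_create_queue"); nfq_close(h); return NULL; }

    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

    char buf[65536];
    int fd = nfq_fd(h);
    for (;;) {
        int rv = recv(fd, buf, sizeof(buf), 0);
        if (rv < 0)
            break;
        nfq_handle_packet(h, buf, rv);
    }

    nfq_destroy_queue(qh);
    nfq_close(h);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_QUEUES];
    for (unsigned int i = 0; i < NUM_QUEUES; i++)
        pthread_create(&tid[i], NULL, worker, (void *)(uintptr_t)i);
    for (unsigned int i = 0; i < NUM_QUEUES; i++)
        pthread_join(tid[i], NULL);
    return 0;
}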