> Hi,

Hi,

> I am after a feel of the throughput capabilities for TC and iptables
> in comparison to dedicated hardware. I have heard talk about 1Gb+
> throughput with minimal performance impact using 50-ish TC rules and
> 100+ iptables rules.

More important than bandwidth is packets per second. Calculate your
average packet size (measure bandwidth and packet counts over some time
window and derive per-second values).

It's not the number of rules (tc or firewall) that matters most but
their composition. Use hashing tc filters where possible, and the "set"
iptables module (backed by ipset) instead of many individual iptables
rules, to offload the CPU. If you don't need connection tracking (NAT
and the like), disable it.

> Is there anyone here running large throughput / large
> configurations, and if so, what sort of figures?

You can easily achieve 600k pps on an AMD 64 X2 5200 with a mean 70%
CPU utilization at peak hours. You must bind the IRQs of the NICs to
different cores (look in /proc/irq/NUM/smp_affinity) to achieve a
symmetric load on both cores (sometimes this is difficult). Similar
speed can be achieved with a 3.2 GHz Xeon with HT (the old one). I
haven't tested the new Xeons in the network field and am curious myself
how they would manage.

One can put more cores and more NICs into the box and achieve even more
throughput. The problem is balancing load between the cores: your setup
will only be as effective as its most heavily used core. I think that
using Cisco EtherChannel (or any other bonding/trunking technique that
allows round-robin traffic distribution between physical links) would
allow an even distribution of load between cores. Has anyone tried
this?

cheers,
Marek Kierdelewicz

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
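P.S. The three tuning suggestions above (IRQ affinity, the ipset-backed
"set" match, hashed tc filters) can be sketched as shell commands. This
is a hedged sketch, not a drop-in config: the interface names, IRQ
numbers, and addresses are placeholders (check /proc/interrupts for your
real IRQs), and ipset is shown with its newer `create` syntax rather
than the older `ipset -N` form.

```shell
# --- 1. Bind each NIC's IRQ to its own core for symmetric load ---
# Bitmask 0x1 = CPU0, 0x2 = CPU1. IRQ numbers 16/17 are examples.
echo 1 > /proc/irq/16/smp_affinity   # eth0 -> CPU0
echo 2 > /proc/irq/17/smp_affinity   # eth1 -> CPU1

# --- 2. One iptables rule + an ipset instead of hundreds of rules ---
ipset create blocked hash:ip
ipset add blocked 192.0.2.10
ipset add blocked 192.0.2.11
iptables -A FORWARD -m set --match-set blocked src -j DROP

# --- 3. Hashed u32 tc filters instead of a linear filter list ---
# A 256-bucket hash table keyed on the last octet of the destination
# IP; each packet is hashed straight to its bucket instead of walking
# every rule.
tc qdisc add dev eth0 root handle 1: htb
tc filter add dev eth0 parent 1: prio 1 handle 2: protocol ip \
    u32 divisor 256
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    ht 800:: match ip dst 192.0.2.0/24 \
    hashkey mask 0x000000ff at 16 link 2:
# Per-host rule lands in bucket 0x0a (last octet 10 of 192.0.2.10):
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    ht 2:a: match ip dst 192.0.2.10 flowid 1:10
```

All of these need root and real interfaces, so treat them as a template
to adapt rather than something to paste verbatim.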