The mangle table entry (indicated by ***) is sucking all the CPU. I am running RH7.3, kernel 2.4.18-3, and iptables 1.2.5.
This setup has worked well for more than 1000 devices, but as the network has grown to 3000+ devices the CPU is not keeping up. With one MARK rule per device, every packet is compared against the whole mangle PREROUTING chain, so the per-packet cost grows linearly with the device count. I have thought about using IPMARK instead of MARK, or possibly CLASSIFY. Since this is hard to recreate in the lab, I was looking for some experienced advice on the matter.
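If I understand IPMARK correctly (it comes from iptables patch-o-matic, not a stock 1.2.5 build), it derives the mark from the packet's address itself, so a single rule could replace the per-IP MARK rules. A sketch of what I have in mind, assuming the mark is taken from the low 16 bits of the source address and the tc fw handles/classids are renumbered to match (the 0xffff mask and the 1:614 classid are my assumptions, not my current numbering):

```shell
# One IPMARK rule replacing ~3000 per-IP MARK rules.
# mark = (source address & and-mask) | or-mask
iptables -t mangle -A PREROUTING -i eth1 -j IPMARK --addr src \
         --and-mask 0xffff --or-mask 0x0

# 10.10.6.20 is 0x0a0a0614, so its mark becomes 0x614; the fw filter
# and class for that device would then be keyed on 0x614:
tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 0x614 fw classid 1:614
```

The chain shrinks to one rule per direction, but the fw handles have to be derivable from the address, which would mean renumbering all of my existing classes.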
### root ### tc qdisc add dev eth0 root handle 1: cbq bandwidth 100Mbit avpkt 1000 cell 8
tc qdisc add dev eth1 root handle 1: cbq bandwidth 100Mbit avpkt 1000 cell 8
### Classful qdisc upload/download rate for a group of IP address ###
tc class add dev eth0 parent 1:0 classid 1:11 cbq bandwidth 100Mbit rate 100Mbit weight 54Kbit prio 8 allot 1514 cell 8 maxburst 20 avpkt 1000
tc qdisc add dev eth0 parent 1:11 tbf rate 2048Kbit buffer 10Kb/8 limit 15Kb mtu 1500
tc class add dev eth1 parent 1:0 classid 1:11 cbq bandwidth 100Mbit rate 100Mbit weight 54Kbit prio 8 allot 1514 cell 8 maxburst 20 avpkt 1000
tc qdisc add dev eth1 parent 1:11 tbf rate 2048Kbit buffer 10Kb/8 limit 15Kb mtu 1500
### A single IP address and its own upload/download rate ###
tc class add dev eth0 parent 1:11 classid 1:2115 cbq bandwidth 100Mbit rate 100Mbit weight 54Kbit prio 8 allot 1514 cell 8 maxburst 20 avpkt 1000
tc qdisc add dev eth0 parent 1:2115 tbf rate 2048Kbit buffer 10Kb/8 limit 15Kb mtu 1500
*** eth0 is MASQUERADE'd so I mark the packet on eth1 ***
*** I have narrowed it down to this one entry sucking all the CPU ***
iptables -t mangle -A PREROUTING -s 10.10.6.20 -i eth1 -j MARK --set-mark 0x843
tc filter add dev eth0 parent 1:0 protocol ip prio 1 handle 2115 fw classid 1:2115
tc class add dev eth1 parent 1:11 classid 1:2115 cbq bandwidth 100Mbit rate 100Mbit weight 54Kbit prio 8 allot 1514 cell 8 maxburst 20 avpkt 1000
tc qdisc add dev eth1 parent 1:2115 tbf rate 2048Kbit buffer 10Kb/8 limit 15Kb mtu 1500
tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.10.6.20 flowid 1:2115
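On the tc side, a flat list of u32 filters like the one above is also scanned linearly once there are thousands of them. The u32 hashing-filter technique (described in the LARTC HOWTO) turns that scan into a bucket lookup. A sketch, assuming all the devices live in 10.10.0.0/16 and hashing on the last octet of the destination address (the /16 aggregate and prio 5 are assumptions for illustration):

```shell
# Create a 256-bucket hash table (handle 2:) under qdisc 1:
tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32
tc filter add dev eth1 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256

# Hash on the last octet of the IP destination address
# (offset 16 in the IP header) and jump into table 2:
tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32 ht 800:: \
    match ip dst 10.10.0.0/16 hashkey mask 0x000000ff at 16 link 2:

# Per-IP entries land in their bucket; .20 is 0x14, so:
tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32 ht 2:14: \
    match ip dst 10.10.6.20 flowid 1:2115
```

With 3000 devices spread over 256 buckets, each packet is matched against roughly a dozen entries instead of the whole list, and the iptables MARK rules and fw filters could be dropped entirely on the eth1 side.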